Cryptography-Digest Digest #468, Volume #9       Mon, 26 Apr 99 18:13:04 EDT

Contents:
  Quadibloc V now available! (John Savard)
  Re: public-key management and distribution (Medical Electronics Lab)
  Re: function help (Jim Felling)
  function help ([EMAIL PROTECTED])
  Re: 128 bit DES ("Douglas A. Gwyn")
  Re: True Randomness & The Law Of Large Numbers ("Trevor Jackson, III")
  Digital Watermarks? (Fiji)
  Re: True Randomness & The Law Of Large Numbers ("Trevor Jackson, III")
  Re: Double Encryption is Patented! (from talk.politics.crypto) (DJohn37050)
  Re: True Randomness & The Law Of Large Numbers (R. Knauer)
  Re: function help ([EMAIL PROTECTED])

----------------------------------------------------------------------------

From: [EMAIL PROTECTED] (John Savard)
Subject: Quadibloc V now available!
Date: Mon, 26 Apr 1999 17:16:58 GMT

Ah, yes. I'm just churning out these cipher designs...

Quadibloc V is closer to a "professional" design than any of my previous
designs. It uses only one of my 8-bit S-boxes derived from Euler's
constant, and it also uses a key-dependent S-box, but a short one, with
only 16 entries, each one 64 bits long.

It's at:

http://members.xoom.com/quadibloc/co040708.htm

And it just uses one type of round, not the kind of intricate structure
that in Quadibloc II and Quadibloc III almost guarantees invincibility even
when a cipher is designed by one of modest attainments. Even the key
schedule, although it includes "key augmentation", is simplified.

In my original design, I didn't use the intermediate results from the right
half of the block; but this leads to an interesting chosen-plaintext attack
on a single round: change a pair of bytes in the left half, and find which
values leave the right half unchanged. At first, I considered a simplistic
fix: taking the intermediate results and the final results fed into S2,
concatenating those eight bytes, and XORing them with the right half at the
start of the round. That still left a chosen-ciphertext version of the
attack possible, but with extra information accompanying each success.
Finally, I realized that I couldn't throw away information, or try to
restore it with reduced diffusion, so I added in the S-box entries you see,
rotated by 16 bits either way, to make a proper f-function.

Anyhow, since this is a "straightforward" design, there may be
opportunities for cryptanalysis this time. It's taken me quite a while to
learn enough to get here...

John Savard ( teneerf<- )
http://members.xoom.com/quadibloc/index.html

------------------------------

From: Medical Electronics Lab <[EMAIL PROTECTED]>
Subject: Re: public-key management and distribution
Date: Mon, 26 Apr 1999 13:05:18 -0500

Anthony King Ho wrote:
> 
> Thanks for reading this post.
> 
> Is there a convenient, user-friendly, and secure way to run my own
> certificate agent, for public-key management and distribution on the
> Internet?  I was thinking of this: user goes to a web page and uses his/her
> e-mail address to request a password, a random password will be generated
> and sent to that e-mail address.  The user then uses this password he/she
> received to log in to a secure web page, the system remembers the e-mail
> address from the password, and the user then submits the public key.  But as
> you see, this is quite insecure, as e-mail is easy to intercept and spoof and
> the password is sent in plaintext.  On the other hand, I want this to be more
> convenient than using a fingerprint or driver's license...  Do you have any
> suggestion?

When the user goes to the web page, send an applet that generates
a public key; the applet also carries your public key.  After getting the
user's public key, send the user an encrypted random number.  Then have the
user enter something (like a password or name or product ID) and combine that
with the random number.  The applet encrypts the combination with your public
key and sends it back to you, and you recover the password.

When the user goes to the secure page, you could use DH or some other
key exchange to create a session key.  Have the user send the password
encrypted with the session key.  If it jibes with the preliminary
password you have a good link.
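
Here is a minimal sketch of the session-key step in C.  The group parameters
(p = 2^31 - 1, g = 16807) and the two "secrets" are made up purely for
illustration; a real system would use a large safe prime, real random
secrets, and a vetted crypto library.

#include <stdio.h>
#include <stdint.h>

/* Toy modular exponentiation; operand sizes here are far too small
   for real use, but keep the arithmetic within 64 bits. */
static uint64_t powmod(uint64_t base, uint64_t exp, uint64_t mod)
{
    uint64_t result = 1;
    base %= mod;
    while (exp > 0) {
        if (exp & 1)
            result = (result * base) % mod;
        base = (base * base) % mod;
        exp >>= 1;
    }
    return result;
}

int main(void)
{
    const uint64_t p = 2147483647ULL;   /* 2^31 - 1, illustrative only */
    const uint64_t g = 16807ULL;

    uint64_t server_secret = 123456789ULL;   /* stand-ins for random secrets */
    uint64_t client_secret = 987654321ULL;

    uint64_t server_public = powmod(g, server_secret, p);
    uint64_t client_public = powmod(g, client_secret, p);

    /* Each side raises the other's public value to its own secret;
       both arrive at the same session key material. */
    uint64_t k_server = powmod(client_public, server_secret, p);
    uint64_t k_client = powmod(server_public, client_secret, p);

    printf("%llu %llu\n", (unsigned long long)k_server,
                          (unsigned long long)k_client);

    /* The password the user types would then be encrypted under a key
       derived from this shared value and checked against the one that
       was mailed out. */
    return 0;
}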

This is pretty standard key exchange.  You could give away a program
instead of an applet that would do the key exchange as well.  If you
use a mailer, then you can print a random number in the package or
include it on a CD or floppy and get a pseudo authentication (you
still don't know who opened the package).

Patience, persistence, truth,
Dr. mike

------------------------------

From: Jim Felling <[EMAIL PROTECTED]>
Reply-To: [EMAIL PROTECTED]
Subject: Re: function help
Date: Mon, 26 Apr 1999 15:17:56 -0500



[EMAIL PROTECTED] wrote:

> does this function have an inverse?

Yes.

>
>
> A' = A + B xor C + D

What the inverse is depends on how that expression is intended to parse
(all four readings invert; see the sketch below):

A' = ((A + B) xor C) + D      inverse is ((A' - D) xor C) - B
A' = (A + B) xor (C + D)      inverse is (A' xor (C + D)) - B
A' = A + (B xor C) + D        inverse is A' - ((B xor C) + D)
A' = A + (B xor (C + D))      inverse is A' - (B xor (C + D))
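
For example, a quick check of the first parse in C, assuming 32-bit words
with wraparound addition (the test values are arbitrary):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t A = 0x12345678u, B = 0x9abcdef0u, C = 0x0f0f0f0fu, D = 0xdeadbeefu;

    uint32_t Ap   = ((A + B) ^ C) + D;      /* forward: ((A + B) xor C) + D */
    uint32_t back = ((Ap - D) ^ C) - B;     /* inverse                      */

    printf("A = %08x, recovered = %08x\n", A, back);   /* values match */
    return 0;
}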

>

>
>
> Without changing A,B,C or D.
>
> Originally I thought 'C - D xor B - A' but that doesn't work.
>
> Thanks,
> Tom
> --
> PGP public keys.  SPARE key is for daily work, WORK key is for
> published work.  The spare is at
> 'http://members.tripod.com/~tomstdenis/key_s.pgp'.  Work key is at
> 'http://members.tripod.com/~tomstdenis/key.pgp'.  Try SPARE first!
>
> -----------== Posted via Deja News, The Discussion Network ==----------
> http://www.dejanews.com/       Search, Read, Discuss, or Start Your Own


------------------------------

From: [EMAIL PROTECTED]
Subject: function help
Date: Mon, 26 Apr 1999 17:24:19 GMT

does this function have an inverse?

A' = A + B xor C + D

Without changing A,B,C or D.

Originally I thought 'C - D xor B - A' but that doesn't work.

Thanks,
Tom
--
PGP public keys.  SPARE key is for daily work, WORK key is for
published work.  The spare is at
'http://members.tripod.com/~tomstdenis/key_s.pgp'.  Work key is at
'http://members.tripod.com/~tomstdenis/key.pgp'.  Try SPARE first!

============= Posted via Deja News, The Discussion Network ============
http://www.dejanews.com/       Search, Read, Discuss, or Start Your Own    

------------------------------

From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: 128 bit DES
Date: Mon, 26 Apr 1999 18:23:43 GMT

"Gurripato (x=nospam)" wrote:
> On Fri, 23 Apr 1999 23:07:10 GMT, "Douglas A. Gwyn"
> <[EMAIL PROTECTED]> wrote:
> >[EMAIL PROTECTED] wrote:
> >> DES only has a 56 bit key, so double des is 112.  3DES or triple
> >> des is considered secure but there are better ciphers check out ...
> >"Better" how?
>         Meaning perhaps that they are faster or more resistant to
> cryptanalysis.

I can give you a really fast crypto algorithm, if it doesn't have
to be secure.

So how do you determine how "resistant to cryptanalysis" those
various ciphers really are?

------------------------------

Date: Tue, 27 Apr 1999 03:30:32 -0400
From: "Trevor Jackson, III" <[EMAIL PROTECTED]>
Subject: Re: True Randomness & The Law Of Large Numbers

[Warpath: ENABLE]


R. Knauer wrote:
> 
> On Mon, 26 Apr 1999 05:52:11 GMT, "Douglas A. Gwyn" <[EMAIL PROTECTED]>
> wrote:
> 
> >No, what the document says (thanks for posting the URL) is that
> >for Level 3 or 4 security, this is one of a handful of tests that
> >will put the system into the "error state" (preventing its use
> >until reset) whenever one of the tests detects a pattern whose
> >a priori probability is below some tiny threshold (something
> >like 1 in 1,000,000).
> 
> Once again you are falling into the trap Feller warned us about,
> namely attempting to infer the ensemble average from the time average.

Garbage.  The issue has nothing to do with average variants.  You keep
dragging this red herring out as if it meant something in this context. 
The only utility the concept has is as a smoke screen for your own
ignorance.

> 
> The time average does NOT tell you anything about the process. Only
> the ensemble average can do that. Only until you analyze a
> sufficiently large number of sequences can you decide with reasonable
> certainty that the process is failing to conform to the specifications
> for a TRNG, by for example not obeying the p = 1/2 specification to
> within reasonable certainty.
> 
> For every sequence of 20,000 bits that fails the Monobit Test, there
> is a complementary sequence in the ensemble that offsets it, which
> means that the TRNG is performing to specification.

More garbage.  The theoretical potential existence of a complementary
sequence does not help at all in interpreting the meaning of a sequence
that fails the Monobit Test.  You don't HAVE the original sequence and
its complement.  You DO HAVE a peculiar sequence.  The
meaning/importance/reason-for-testing is that the source of the sequence
is suspect and should not be used, no matter how carefully you've
analyzed the expected output of the generator.

The ACTUAL output is trying to tell you something.  *LISTEN*!

> 
> The fact that such an offsetting complementary sequence does not
> appear in the test sequence has nothing to do with its existence in
> terms of the capability of the TRNG to generate all possible finite
> sequences equiprobably. Wait long enough and it will turn up sometime,
> proving that the TRNG is not malfunctioning.

Suuure.  Tell you what; you generate random numbers using any mechanism
you think is suitable and wait for a sequence containing the exact
binary complement of this message.  Then you can post again.

Quote:  "If you wait LONG ENOUGH (emphasis added) it will turn up
sometime, proving that the TRNG is not malfunctioning."

> 
> >Notice the tests are not totally
> >independent, so in case of a catastrophic malfunction several of
> >them might trigger the error state at the same time.

Irrelevant.  We do not care that multiple warnings might appear.  We
care greatly that a warning may not appear when merited.  In excluding
that potential we'll always create overlap.  Failing to define
overlapping tests is negligence.

In laying out fields of fire you always create overlap so as not to
provide cracks in the coverage that an enemy might exploit.  Consider the
construction of chain mail.  It has irregularities for exactly the same
reason.

Stop complaining about the features of the tests as if they were
defects.

> 
> According to FIPS-140, failure of any one of the tests, including the
> Monobit Test, is sufficient to trigger an error at those levels. If
> the test is not really any good, then it is not really any good no
> matter what the level of alert it is used for. If anything, all it
> will do is generate false alarms.
> 
> I recognize the need for alarms, but I do not think the Monobit Test
> as presented is the proper way to go about it.

Why have you failed to specify in detail your version of "the proper way
to go about it"?  Would it have anything to do with the fact that you do
not have a clue?

> That is not to say that
> much tighter restrictions would not be useful, like 20,000 1s or 0s.
> But then it would be the Long Run Test.
> 
> The fact is that the Monobit Test is too weak as it stands. And all
> the statistical snake oil in the world is not going to cover that up.
> 
> >It all seems fairly sensible if one wants to halt the system
> >shortly after it starts generating "busts", rather than letting
> >inadequately secure key streams compromise a lot of traffic.
> 
> OK, let's say we go along with that. You get a "bust", so you stop the
> TRNG for inspection and find nothing wrong with it. What do you do
> with that "bust" sequence? Do you use it as part of the keystream when
> you start back up or do you discard it?
> 
> If you keep it, you have violated your beloved specifications for
> statistical testing. If you discard it, you have violated the
> specification for the TRNG. You have given your adversary a reason not
> to consider "abnormal" sequences. We have already discussed the
> mistake that is.
> 
> >(As a matter of historical record, such "busts" have been
> >instrumental in cryptanalyzing a variety of machine systems.)
> 
> Then the TRNG was broken. The "busts", as you call them, are not the
> reason for cryptanalytic weakness themselves.

Yes they are.  Your idiocy continues to amaze.  By your definition any
generator is acceptable, even if it exhibits repeatably extreme
behavior, as long as _theoretically_ it should not produce extreme
behavior.  Your criteria for acceptability are ridiculous.

> 
> >> Define "consistently" in analytic terms.
> >> Define "suitable" in analytic terms.
> 
> >These are English words; look them up in a dictionary.
> 
> As much as I am a proponent of the dictionary, I wanted to make sure
> you did not have some arcane meaning for those terms.
> 
> BTW, one does not usually rely on the dictionary for the definition of
> mathematical terms - and I did say to define them in *analytic* terms.
> 
> >> 2) The TRNG subsystems are shown experimentally to operate within
> >> specifications.
> 
> >You have never said *how* you could do this, without in effect
> >applying one or more statistical tests.
> 
> You are as bad as Mok-Kong Shen.
> 
> I never said I was against the use of statistical methods in general.
> I said I was suspect of using simplistic small sample statistical
> tests, like the FIPS-140 Monobit Test, on the output sequence of a
> TRNG, even for warning purposes. There are better tests for generating
> warnings, tests which are designed to alert you to a specific hardware
> malfunction.
> 
> That distinction should be wide enough to drive a truck thru, yet you
> guys consistently get it wrong.

_Who_ is getting it wrong?  Look in a mirror.

> That tells me that you are not really
> reading what I am writing, but just interpreting what I say based on
> your personal bias about the matter.
> 
> You are like a religious fundamentalist who is absolutely convinced
> that the earth was created from nothing 6,000 years ago because it
> says so in the Bible. You would not give up that dogmatic position if
> the Pope himself told you that it was just a myth.
> 
> Pope Feller, along with Cardinals Li & Vitanyi, have tried in vain to
> tell you that simplistic small sample statistical tests applied
> directly to output sequences tell you nothing about the mathematical
> characteristics of the underlying generation process, but you
> consistently ignore even infallible pronouncements from such
> authorities and continue to beat us over the head with the Statistical
> Heresy.

Well I, personally, am speaking (typing) Ex Cathedra while seated in my
navel.


> 
> >What if it is computing the wrong thing?
> 
> The simple answer to your question is that a quantum computer cannot
> compute the wrong thing - it is either computing the right thing, or
> it is not computing anything at all.
> 
> You might benefit from reading that book on quantum computing by
> Williams & Clearwater.
> 
> >That's the kind of
> >brokenness we're concerned about (and that the FIPS-140 tests
> >are intended to detect).
> 
> The Monobit Test, as it stands in FIPS-140, does NOT "detect" anything
> regarding "brokenness".
> 
> If anything, I would be far more inclined to use the Long Run Test,
> since it is testing for specific hardware errors, like a floating or
> shorted output stage.
> 
> Ask yourself exactly what the Monobit Test is testing for, and whether
> it is adequate to do the task with reasonable certainty, even under
> relaxed conditions. If you will do that, you will come face to face
> with the meta-mathematical problem here, namely an attempt to infer
> the properties of a sequence generation process from the time average
> of its output. That assumes that the time average of a given sequence
> has something to do with the ensemble average, which is what actually
> characterizes the process.
> 
> But as Feller warns, the time average of a given sequence has
> *nothing* (his word, not mine) to do with the ensemble average, and
> therefore by implication the time average of a sequence has nothing to
> do with the proper characterization of the generation process itself.

Wrong.  Completely silly.  I know you are too smart to try and defend that
one, because your out-of-context usage is so blatant that you have to be
doing it on purpose.

> 
> Attempting to characterize a sequence generation process from the time
> average of a given output is pure snake oil, no matter how many
> statistics bibles you thump on the table.
> 
> Bob Knauer
> 
> "A fear of weapons is a sign of retarded sexual and emotional maturity."
> -- Sigmund Freud

------------------------------

From: Fiji <[EMAIL PROTECTED]>
Subject: Digital Watermarks?
Date: Mon, 26 Apr 1999 15:24:01 -0400

Where can I get some information about companies/products dealing with
digital watermarks? I am hoping for some feedback from users who have used
said products.

thanks,
-Fiji


------------------------------

Date: Tue, 27 Apr 1999 05:41:42 -0400
From: "Trevor Jackson, III" <[EMAIL PROTECTED]>
Subject: Re: True Randomness & The Law Of Large Numbers

Douglas A. Gwyn wrote:
> 
> > > >Notice the tests are not totally
> > > >independent, so in case of a catastrophic malfunction several of
> > > >them might trigger the error state at the same time.
> "Trevor Jackson, III" wrote:
> > Irrelevant.  ...
> 
> It's not totally irrelevant, because it means that the combination
> of 3 or 4 tests doesn't improve the overall test coverage as much
> as we would think if we assumed that the tests were independent.

Agreed that, in general, the issue of test overlap is quite relevant. 
However, in the context used, that of defaming the _applicability_ of
the tests, the issue is not relevant.

> 
> > Stop complaining about the features of the tests as if they were
> > defects.
> 
> *I* wasn't complaining, just noting a feature of the test battery.

Right.  No dispute there.

In fact it is always the case that standardized tests have properties
that are not defined and that dictate usage by a competent operator.  No
standardized test can be applied blindly.

My personal view on the FIPS series of standards is that they have some
use as minimum filters.  I.e., they form a floor on reliability.
Anything that cannot pass that standard is not worth testing further.  A
serious practitioner will do far more than the FIPS tests, but those
results are even harder for someone other than the designer to
interpret.

Thus the utility of the FIPS tests may lie in the fact that they form a
standard that is both well defined and well understood (with a few
glaring exceptions to the latter).

------------------------------

From: [EMAIL PROTECTED] (DJohn37050)
Subject: Re: Double Encryption is Patented! (from talk.politics.crypto)
Date: 26 Apr 1999 20:58:37 GMT

It is the claims that matter.  In this case, they are able to show that every
bit of ciphertext depends on every bit of plaintext.  This is not true of
normal CBC and is novel.
Don Johnson

------------------------------

From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: True Randomness & The Law Of Large Numbers
Date: Mon, 26 Apr 1999 21:21:37 GMT
Reply-To: [EMAIL PROTECTED]

On Mon, 26 Apr 1999 19:04:43 GMT, "Douglas A. Gwyn" <[EMAIL PROTECTED]>
wrote:

>Surely, to take an
>extreme example, XORing the plaintext with a key stream
>that is all 0-bits produces an easily cryptanalyzable
>ciphertext.

Yet there is no reason for the cryptanalyst to conclude that it is the
intended message. If you say that it is highly unlikely that a
ciphertext would be fully intelligible, you would be correct, but that
cannot be used as analytic evidence that the ciphertext is the same
as the message.

There are several other factors here. For one thing, although we use
the term OTP cipher we really mean a stream cipher that mimics the OTP
in its essential factors but has stronger mixing algorithms than
simple XOR. So the example you gave is not relevant.

Anyway, the TRNG would be protected from malfunctions of the type
which would result in a keystream of all 0s or all 1s, since they are
conditions that can be traced back to bad hardware. A Long Run Test
would be appropriate to catch those kinds of malfunctions.

My critique has been solely focused on the FIPS-140 Monobit Test as
the quintessential simplistic small-sample statistical test, which I argue
is not useful for certifying the operation of a TRNG. I would not even
use it as a diagnostic tool.

> It is not the design algorithm that was broken,
>but it was the specific encoder device at that specific time.

You are saying that the suitability of a keystream for strong crypto
is a property of the keystream and not the generator. That is
incorrect.

>FIPS-140 does not specify a particular encipherment algorithm,
>and given that encipherment is normally accomplished by a
>collection of 2-transistor gates, any failing transistor of
>which can produce an exploitable bust, what could you do to
>look for a "specific hardware malfunction"?  If you do it with
>more transistors, you just add more opportunities for failure.

We have discussed techniques like TMR (triple modular redundancy).  In any
event, why not design
the hardware such that any malfunction causes it to fail in a known
way, such as outputting all 1s or all 0s?
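
For reference, the core of TMR is just a bitwise 2-of-3 majority vote; a
minimal sketch in C, with a made-up stuck-at-0 channel standing in for a
failed output stage:

#include <stdio.h>
#include <stdint.h>

/* Bitwise 2-of-3 majority vote: a single failed channel cannot
   change the output (the idea behind TMR, not a full design). */
static uint32_t majority3(uint32_t a, uint32_t b, uint32_t c)
{
    return (a & b) | (a & c) | (b & c);
}

int main(void)
{
    uint32_t good  = 0xC3A5900Du;    /* two healthy channels          */
    uint32_t stuck = 0x00000000u;    /* one stuck-at-0 channel        */
    printf("%08x\n", majority3(good, good, stuck));  /* prints the good value */
    return 0;
}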

Most equipment fails internally by outputting high-level or low-level
noise, which could be made to result in all 1s or all 0s at the final
output.  A Long Run Test would alarm on that condition.

The reason I like the Long Run Test over the Monobit Test is that with
the Long Run Test I am testing for an expected condition of failure,
whereas with the Monobit Test I am only testing for a "probabilistic"
condition that may or may not be the result of failure.
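
For reference, a minimal sketch of the two FIPS 140-1 checks being argued
about, assuming the published thresholds: the monobit count must lie
strictly between 9,654 and 10,346 out of 20,000 bits (roughly a 4.9-sigma
band, i.e. about the 1-in-1,000,000 a priori probability mentioned
earlier), and the long run test fails on any run of 34 or more identical
bits.

#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define SAMPLE_BITS 20000

static int get_bit(const uint8_t *buf, int i)
{
    return (buf[i >> 3] >> (i & 7)) & 1;
}

/* FIPS 140-1 monobit test: pass iff 9,654 < (number of ones) < 10,346. */
static int monobit_test(const uint8_t *buf)
{
    int ones = 0;
    for (int i = 0; i < SAMPLE_BITS; i++)
        ones += get_bit(buf, i);
    return ones > 9654 && ones < 10346;
}

/* FIPS 140-1 long run test: fail if any run of identical bits reaches 34. */
static int long_run_test(const uint8_t *buf)
{
    int run = 1;
    for (int i = 1; i < SAMPLE_BITS; i++) {
        if (get_bit(buf, i) == get_bit(buf, i - 1)) {
            if (++run >= 34)
                return 0;
        } else {
            run = 1;
        }
    }
    return 1;
}

int main(void)
{
    /* A stuck-at-0 output stage: both tests flag it. */
    uint8_t sample[SAMPLE_BITS / 8];
    memset(sample, 0, sizeof sample);

    printf("monobit: %s, long run: %s\n",
           monobit_test(sample) ? "pass" : "fail",
           long_run_test(sample) ? "pass" : "fail");
    return 0;
}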

>> The Monobit Test, as it stands in FIPS-140, does NOT "detect"
>> anything regarding "brokenness".

>Oh, yes, it does.  It is a *probabilistic* detection, not an
>absolute certainty; the latter is not realizable in practice.

I think we are saying the same thing, namely that some standard
statistical tests can be used to detect potential error conditions. We
have said that all along. Now we are saying it is time to decide which
ones to use, and for what reasons.

I am taking the position now that I require an expected condition of
failure to justify the test and not some probabilistic condition that
may or may not be caused by an unexpected condition of failure.

Unless you can relate the test directly to the expected failure mode
with reasonable certainty, I consider it just more snake oil.

Bob Knauer

"A fear of weapons is a sign of retarded sexual and emotional maturity."
-- Sigmund Freud


------------------------------

From: [EMAIL PROTECTED]
Subject: Re: function help
Date: Mon, 26 Apr 1999 20:43:27 GMT


> Your question is confusing!
> If you're mapping 4 bits into 1, then of course that's noninvertible.
> If you're mapping 4 bits into 4 bits, then what are the other 3 rules
> (for B', C', D')?
> If you don't actually mean, inverse function, then what *do* you
> mean?
> Is "xor" supposed to have precedence over "+", or vice-versa, or
> are the operators evaluated left-to-right, or right-to-left?
> Why do you say "without changing A, B, C, or D" -- why would we
> think they are not constant?

Sorry I meant to write

((A xor B) + C) xor D
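
Assuming 32-bit words with wraparound addition, that parse inverts as
((A' xor D) - C) xor B; a quick check with arbitrary values:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t A = 0x12345678u, B = 0x9abcdef0u, C = 0x0f0f0f0fu, D = 0xdeadbeefu;

    uint32_t Ap   = ((A ^ B) + C) ^ D;      /* forward: ((A xor B) + C) xor D */
    uint32_t back = ((Ap ^ D) - C) ^ B;     /* inverse                        */

    printf("A = %08x, recovered = %08x\n", A, back);   /* values match */
    return 0;
}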

Tom
--
PGP public keys.  SPARE key is for daily work, WORK key is for
published work.  The spare is at
'http://members.tripod.com/~tomstdenis/key_s.pgp'.  Work key is at
'http://members.tripod.com/~tomstdenis/key.pgp'.  Try SPARE first!

============= Posted via Deja News, The Discussion Network ============
http://www.dejanews.com/       Search, Read, Discuss, or Start Your Own    

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
