Cryptography-Digest Digest #671, Volume #10       Fri, 3 Dec 99 06:13:01 EST

Contents:
  Re: NP-hard Problems (David A Molnar)
  Re: Elliptic Curve Public-Key Cryptography (Paul Rubin)
  Re: Encrypting short blocks (Markus Peuhkuri)
  Re: Encrypting short blocks (Markus Peuhkuri)
  how to combine hashes to build a 128-bit key? (Juergen Thumm)
  Re: Noise Encryption ("Tim Wood")
  Re: brute force versus scalable repeated hashing ("Tim Wood")
  Re: brute force versus scalable repeated hashing ("Tim Wood")
  Re: more about the random number generator (Guy Macon)
  Re: Random Noise Encryption Buffs (Look Here) (Dave Knapp)
  Re: Random Noise Encryption Buffs (Look Here) (Dave Knapp)
  Re: Random Noise Encryption Buffs (Look Here) (Dave Knapp)
  Re: Why Aren't Virtual Dice Adequate? (Guy Macon)
  Re: Random Noise Encryption Buffs (Look Here) (Anthony Stephen Szopa)

----------------------------------------------------------------------------

From: David A Molnar <[EMAIL PROTECTED]>
Subject: Re: NP-hard Problems
Date: 3 Dec 1999 05:34:33 GMT

Bill Unruh <[EMAIL PROTECTED]> wrote:
> In <[EMAIL PROTECTED]> [EMAIL PROTECTED] (James Pate Williams, Jr.) writes:

>>What are some problems that are NP-hard but not NP-complete? I know

> Well, it is not known if there are any NP hard problems. But assuming
> that say factoring is NP hard, I believe it is not NP complete.

This does not match my understanding of NP-hard problems at all. :-(
As in "head explodes on reading the previous two posts" -- it's out of
joint with what I know. Let me write out what I think, and hopefully we
can resolve this.

From what I understand :

When we say that a problem P is "NP-hard", we mean that 

a) we have a decision problem P defined with respect to some language of
   strings L. That is, we have an alphabet $\Sigma$ and a string 
   $s \in \Sigma^*$ -- a string $s$ consisting of some concatenation
   of symbols in the alphabet and of unbounded length -- and we are
   asking the question "Is the string s in the language L??" 

b) an efficient reduction exists from this problem over L to every other
   decision problem P' for an NP-language L'. Let's expand on what
   that means :

        a "reduction" is a process for rewriting problem questions
        for one problem in terms of those for another problem. 
        In this case, we want something which will let me
        take a query for a problem P' 

        (1)     "Is this string s in L' ??"

        and `reformat' it such that I can create a new string 
        and new query like so 

        (2)     "Is this string f(s) in L ??"

        (f being some rewriting process or function) 

        and knowing the answer to query (2) will give me
        the answer to query (1). 

        Put another way - say I know how to solve (2). So I figure
        out how to use it to solve (1). What I "do to figure it
        out" can be identified with a "reduction." 


     efficient : means that the reduction takes a polynomial
        number of steps in some useful parameter, like the 
        length of the input. 


So if a problem P is NP-hard, it has this really cool property :
        if we can solve P, we can solve all problems in NP. 
        This is because for every problem in NP, there exists
        a reduction which will let us take our method of 
        solving P and use it to solve anything we want. 
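To make the reduction idea concrete, here is a toy sketch of my own (not from the original post): VERTEX-COVER reduces to INDEPENDENT-SET, because a graph on n vertices has a vertex cover of size k exactly when it has an independent set of size n - k. The rewriting function f does nothing but reformat the query parameters; the brute-force solver stands in for "knowing how to solve" the target problem.

```python
from itertools import combinations

def has_independent_set(edges, n, size):
    """Brute-force 'solver' for the target problem P (INDEPENDENT-SET)."""
    for nodes in combinations(range(n), size):
        chosen = set(nodes)
        # An independent set touches no edge at both ends.
        if all(not (u in chosen and v in chosen) for u, v in edges):
            return True
    return False

def has_vertex_cover(edges, n, k):
    """Answer a P' query (VERTEX-COVER) by reformatting it into a P
    query -- this rewriting step is the reduction f."""
    return has_independent_set(edges, n, n - k)

# A triangle: its minimum vertex cover has size 2.
triangle = [(0, 1), (1, 2), (0, 2)]
print(has_vertex_cover(triangle, 3, 2))   # True
print(has_vertex_cover(triangle, 3, 1))   # False
```

The reduction here is trivially efficient (constant work), which is the point: any single solver for the target problem answers every reformatted query.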



Notice at this point that we haven't said anything about whether P is
deciding a language which is actually IN the class NP or not. That's
where NP-complete comes in :


We say a problem is "NP-complete" if it is 

        a) NP-hard (as above)
        b) concerning a language in NP

The NP-complete problem everyone always points to is SATISFIABILITY
: given a logical expression, is there an assignment of values that
makes it true? Cook showed back in the early 70's (with Karp's famous
list of 21 problems following shortly after) that you can encode the
computation of any nondeterministic Turing Machine in such formulae.
So SATISFIABILITY is NP-hard.

My confusion at the statement "it is not known if there are any
problems which are NP hard" is that since Karp, lots of ppl have
gone through and shown problems NP-complete and NP-hard in
something like the sense I'm outlining here. 

The original question was 

        "Are there NP-hard problems which are not NP-complete?"

From my understanding, this is equivalent to :

        "Are there problems which would let us solve all other 
        problems in NP if we could do them easily, but are not
        themselves in NP ?"


I think so. Now let me try something I'm not so sure of and try to 
give an example. 

Consider the problem of enumerating *all* bit-strings of length $k$. 
There's $2^k$ of them, so it takes exponential time. Also exponential
space. Let's say you have a magic box which takes one step to 
enumerate them all and also search them for whatever you happen to want. 

Say you want to see if this string $s$ is in an NP-language L. You
just tell your box to write out EVERY string of length $|s|$, strike out
anything not in L, and then search for $s$. Because this is a magic box, 
it takes no time at all. 

You can solve any problem in NP this way by phrasing it as a decision /
language question : "Is this in the language of Bob's private
keys?" (well, maybe not). But the problem the box really solves is in
no way in NP. So it is NP-hard without being NP-complete. 

Probably there are better examples. 


Just as an aside -- factoring is not known to be NP-complete. Neither
is discrete log. This means an efficient means of solving the factoring
problem may not imply that any other problems (besides RSA and other
useful cryptosystems) are easy. Ditto for discrete log.

At the same time, it's the cryptosystems based on these peculiar
"we don't know" problems which persist, while the systems based on
NP-complete problems like knapsacks and lattice basis reduction
haven't done nearly as well. Weird...

-David

------------------------------

From: [EMAIL PROTECTED] (Paul Rubin)
Subject: Re: Elliptic Curve Public-Key Cryptography
Date: 3 Dec 1999 06:36:15 GMT

DJohn37050 <[EMAIL PROTECTED]> wrote:
>I guess there is the basic problem with publishing timings.  What a
>customer wants is to solve a problem with HIS constraints, whatever
>they are.  And the constraints can be severe in one factor and
>largely irrelevant in another.  For a different customer, they can be
>reversed.
>
>For example, for some customers the critical resource is time.  For
>others it may be energy.  Money cost is always a factor.  Code space
>and data space are possible.  The point being that a customer wants
>HIS answer to fit in a certain size box.  We all know programming
>techniques that are time/memory tradeoffs.

I agree.  Timings, code space, data space, and money cost should 
ALL be published, not just timings.

------------------------------

From: Markus Peuhkuri <[EMAIL PROTECTED]>
Subject: Re: Encrypting short blocks
Date: 03 Dec 1999 09:46:04 +0200

>>>>> "Dan" == Dan Schwartz <[EMAIL PROTECTED]> writes:

Dan> It sounds like you don't need to decrypt the messages,
Dan> i.e. derive M1 from E1.  If that's the case, just pad each

        That is right.  The actual problem can be found in my
        previous posts (nobody commented on them, which is the reason
        I'm trying a different problem formulation).

        <URL:http://www.deja.com/getdoc.xp?AN=541137829>
        <URL:http://www.deja.com/getdoc.xp?AN=535815716>
        <URL:http://www.deja.com/getdoc.xp?AN=547024238>

        Basically it is about removing sensitive information from data
        to ease the requirements on handling the data for research
        purposes: one may not identify an individual subject, but one
        must still be able to know whether it is a question of the
        same or different subjects.
           
Dan> random, making the likelihood of a collision between any two
Dan> messages roughly 1 in 2^N.

        Unfortunately, based on my experiments that is not the case.
        Since a good encryption algorithm makes the result look like
        random bits, taking only part of it produces many more
        collisions (approximately 38% -- is that 1/e?).  I've tried
        both MD5 (digest of data | secret) and Blowfish (encrypting
        the data).
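That ~38% figure is indeed close to 1/e. When N distinct inputs are hashed into an N-value output space, the expected number of distinct outputs is about N(1 - 1/e), so roughly 1/e = 36.8% of the inputs are lost to collisions. A quick simulation (my own sketch, not from the thread):

```python
# Hash N distinct inputs into an N-value space by drawing outputs
# uniformly at random, then count inputs lost to collisions
# (inputs minus distinct outputs).  The expected rate is about
# 1/e ~ 36.8%, matching the ~38% reported above.
import math
import random

random.seed(1)
N = 100_000
outputs = [random.randrange(N) for _ in range(N)]
collided = N - len(set(outputs))
print(collided / N)        # close to 1/e
print(math.exp(-1))        # 0.36787...
```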

        If I could afford to use more bits for the result there would
        be no problem, but I have to fit the data in the same number
        of bits.  I considered using some kind of lookup table (assign
        numbers in random order and have a table storing each
        individual mapping) but there are some problems:
        - the data amount is quite large, so it might not be feasible
          (or scalable)
        - the same data at two different locations sharing the same
          secret should produce the same scrambled result.


-- 
Markus Peuhkuri ! [EMAIL PROTECTED] ! http://www.iki.fi/puhuri/
========================================================================
Never underestimate the power of human stupidity
               ... and don't forget you are human too.

------------------------------

From: Markus Peuhkuri <[EMAIL PROTECTED]>
Subject: Re: Encrypting short blocks
Date: 03 Dec 1999 09:54:04 +0200

>>>>> "Eric" == Eric Lee Green <[EMAIL PROTECTED]> writes:

Eric> variable-sized blocks of data, simply re-start the stream
Eric> cipher for each block of data. If you are doing fixed-size

        I'm not sure, but aren't stream ciphers basically OTPs where
        the "OTP" is generated from the key?  And if I use the same
        pad for different plaintexts, then by guessing/knowing one or
        some of the plaintexts, all the encrypted data can be
        decrypted without knowing the key?
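That worry is exactly the classic "two-time pad" failure: XORing two ciphertexts made with the same keystream cancels the keystream entirely, leaving the XOR of the two plaintexts. A toy demonstration (the keystream generator below is a stand-in of my own, not a real cipher):

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # NOT a real cipher -- just a stand-in pseudorandom byte generator
    # built from SHA-1 in counter mode, for demonstration only.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha1(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

p1, p2 = b"attack at dawn", b"retreat now!!!"
ks = keystream(b"secret key", len(p1))
c1, c2 = xor(p1, ks), xor(p2, ks)

# The key drops out: c1 XOR c2 == p1 XOR p2, so knowing one
# plaintext reveals the other without ever touching the key.
assert xor(c1, c2) == xor(p1, p2)
print(xor(xor(c1, c2), p1))   # recovers p2
```

This is why restarting a stream cipher per block is only safe if each restart mixes in a unique nonce or counter, so no two blocks ever see the same keystream.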

Eric> MD5 or SHA1, rather than like something you would use a

        Basically a digest algorithm or one-way hash would be OK for
        me.  It just has to work with small, variable-size blocks.
        For further information about my problem, see my reply to Dan
        Schwartz <URL:news:[EMAIL PROTECTED]>.

Eric> cipher for. Get a good book on cryptography. I'm using Bruce

        I have "Applied Cryptography"; I have read it through and
        browsed it several times.  I've also browsed through "Handbook
        of Applied Cryptography", but I didn't find a solution.  That
        is the reason I'm asking here -- not because I'm lazy and want
        someone to do the work for me.  I thought I could find some
        experts here.  Or at least some pointers.

Eric> least you know it'll be because of something stupid that you
Eric> did, rather than because of a flaw in the algorithm.

        I know crypto is hard to do right.  That is the reason I would
        like to have a ready, analyzed solution.  My attempt to modify
        Blowfish to use variable-size blocks failed grossly enough
        that I won't try it again.  Never. :-}

-- 
Markus Peuhkuri ! [EMAIL PROTECTED] ! http://www.iki.fi/puhuri/
========================================================================
Never underestimate the power of human stupidity
... and don't forget you are human too.  

------------------------------

From: Juergen Thumm <[EMAIL PROTECTED]>
Subject: how to combine hashes to build a 128-bit key?
Date: Fri, 03 Dec 1999 10:06:54 +0100

i hash a passphrase using both md5 and sha,
resulting in a 16-byte md5 and 20-byte sha digest.
(i do not want to rely only on md5 or only on sha)

now i want to build 16 bytes (128 bit) key material
from this, to feed a symmetric cipher.

the question is: how to reduce the 36 bytes digest
material to 16 bytes?

options are:

   1. truncate sha digest to 16 bytes,
      then XOR md5 and sha digest.

      pro: mixes nearly all bytes together.
           (the entropy is still reduced, of course)

      con: whenever two bytes in both digests
           are the same, they zero-out each other.

   2. sequential bytemix:

      target[0]   = md5[0]
      target[1]   = sha[0]
      target[2]   = md5[1]
      target[3]   = sha[1]
      ...
      target[14]  = md5[7]
      target[15]  = sha[7]

      and drop the remaining 8 bytes of md5
      and 12 bytes of sha digest.

      pro: completely keeps the bit distribution
           as supplied by md5/sha.

      con: more than half of the material
           is simply discarded.

both options seem to discard about 55.5% of the
36 bytes of digest material. but, for the purpose
of feeding a 128-bit symmetric cipher key,
are they really equally good?
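for reference, both options are easy to sketch with python's hashlib (sha taken here as sha-1 with its 20-byte digest, as described above; the passphrase is a placeholder):

```python
import hashlib

passphrase = b"my passphrase"               # placeholder input
md5 = hashlib.md5(passphrase).digest()      # 16 bytes
sha = hashlib.sha1(passphrase).digest()     # 20 bytes

# option 1: truncate sha digest to 16 bytes, then XOR with md5.
key1 = bytes(m ^ s for m, s in zip(md5, sha[:16]))

# option 2: interleave the first 8 bytes of each digest,
# dropping the rest.
key2 = bytes(b for pair in zip(md5[:8], sha[:8]) for b in pair)

assert len(key1) == 16 and len(key2) == 16
```

a third option worth considering (my suggestion, not from the post): hash the concatenation of the two digests with one of the functions and truncate that, which mixes every input byte into the result without the zero-out worry of plain XOR.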

--
Juergen Thumm   [jthumm(@)lycosmail.com]
address altered to deflect spam.



------------------------------

From: "Tim Wood" <[EMAIL PROTECTED]>
Subject: Re: Noise Encryption
Date: Fri, 3 Dec 1999 09:29:49 -0600

Erm, no, I think the key-randomness is critical: if you can guess the key
you can recover the message.

Also, you must never reuse the same key or part of the same key (although
two keys are bound to share some short runs -- "01" appearing in both, for
instance -- and this is fine).

Actually, an OTP is only as secure as your key generation. OTP=Secure if
KEYgen=Random.

I would however be interested in your scheme for a secure system using
JavaScript and a compiler. Could you post in a little more detail, or was
that just some strange comment not meant to be taken literally?

What's "MvH M WxX"?


tim


Mattias Wecksten wrote in message <[EMAIL PROTECTED]>...
>Cross-posting a little...
>
>I hope I enter this thread at the right point.
>
>I started to get curious about why this conversaion spun off at all?
>When using a OTP the key-randomness is not critical (btw. OTP=sequre).
>Transfering the key is.
>
>
>MvH M WxX
>
>* Suddenly I realized that it was possible to create a secure system for
>use on any free web server at all, only using JAVAScript and an off-line
>compiler *
>



------------------------------

From: "Tim Wood" <[EMAIL PROTECTED]>
Crossposted-To: comp.security.misc
Subject: Re: brute force versus scalable repeated hashing
Date: Fri, 3 Dec 1999 09:32:06 -0600


Douglas A. Gwyn wrote in message <[EMAIL PROTECTED]>...
>Tim Wood wrote:
>> You must however appreciate how easy it is to misrepresent someone
>> by changing the position of even one word ;-) or leaving out ...
>
>It's the responsibility of the redactor to ensure that the meaning
>is not significantly altered when abridging the quoted text.

Then we agree.



------------------------------

From: "Tim Wood" <[EMAIL PROTECTED]>
Crossposted-To: comp.security.misc
Subject: Re: brute force versus scalable repeated hashing
Date: Fri, 3 Dec 1999 09:36:40 -0600


Johnny Bravo wrote in message <[EMAIL PROTECTED]>...
>On Wed, 1 Dec 1999 10:20:16 -0600, "Tim Wood" <[EMAIL PROTECTED]>
>wrote:
>
>>You must however appreciate how easy it is to misrepresent someone by
>>changing the position of even one word ;-) or leaving out important
>>concepts or provisos ... there is nothing to say everyone has read the
>>earlier post.
>
>  It hardly matters, the only thing I was responding to was the
>language I specifically quoted.  It wouldn't matter if the previous
>post was 500k words long.
>
>  Johnny Bravo
>

Three swear words within a million is a lot different from three swear
words within ten.

Also, the context of the language used is often important:

       "you f**king di**head."

or

       "He called me a f**king di**head -- I think it was out of line."

You get my point. Anyhow, this is the wrong place and time to argue about
this, and it's boring; I just felt that you had misrepresented him
slightly, and redress is hard to come by on Usenet.

later

tim



------------------------------

From: [EMAIL PROTECTED] (Guy Macon)
Subject: Re: more about the random number generator
Date: 03 Dec 1999 05:01:43 EST

In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] (CLSV) wrote:

>In Kolmogorov complexity an incompressible string is 
>as random as you can get. The intuition is that a string that is
>optimally compressed has no redundancy or patterns that you could
>use to compress it further. Because it has no (useful) patterns it is
>also random. If you could decompress a string into a larger
>random string, that larger string would obviously have some patterns,
>so it wouldn't be random.

Since a random string can take on any possible value, there is a
very tiny chance that it will randomly come up with all zeros
or all ones.  There is a better chance that it will randomly
come up with some compressible pattern, but I don't know how to
compute the odds.
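The odds can at least be bounded by counting descriptions: there are only 1 + 2 + ... + 2^(n-k) = 2^(n-k+1) - 1 bit strings shorter than n-k+1 bits, so at most that many n-bit strings can be compressed by k or more bits -- a fraction below 2^(1-k) of all 2^n strings. A small sketch of the bound (my own addition):

```python
# Counting-argument upper bound on the fraction of n-bit strings
# compressible by at least k bits: fewer than 2**(n-k+1) descriptions
# shorter than n-k+1 bits exist, against 2**n strings, giving a
# fraction below 2**(1-k) regardless of n.
def compressible_fraction_bound(k: int) -> float:
    return 2.0 ** (1 - k)

print(compressible_fraction_bound(10))   # under 0.2% save 10+ bits
print(compressible_fraction_bound(20))   # under 2 in a million save 20+
```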


------------------------------

From: Dave Knapp <[EMAIL PROTECTED]>
Subject: Re: Random Noise Encryption Buffs (Look Here)
Date: Fri, 03 Dec 1999 10:17:48 GMT

On Sun, 28 Nov 1999 07:51:24 GMT, "Douglas A. Gwyn" <[EMAIL PROTECTED]>
wrote:

>John Savard wrote:
>> Not all hidden-variables theories directly conflict with quantum
>> mechanics; any such theory that is not testable "might" be true.
>
>(1) Hidden variables have been ruled out by experiments such as
>Aspect's.  Quantum randomness is unavoidable.

Once again, I feel compelled to point out that Aspect's experiment
only rules out _local_ hidden variables. Nonlocality can also explain
the experiment, though (in my opinion) nonlocality is a bigger problem
than randomness.

As a result, most physicists will agree that quantum mechanics
probably requires true randomness.

  -- Dave


------------------------------

From: Dave Knapp <[EMAIL PROTECTED]>
Subject: Re: Random Noise Encryption Buffs (Look Here)
Date: Fri, 03 Dec 1999 10:21:39 GMT

On Sun, 28 Nov 1999 10:33:18 -0500, "Trevor Jackson, III"
<[EMAIL PROTECTED]> wrote:

>Now AFAIK, no amount of measurement of a single nucleus will permit any
>kind of prediction of its future emissions.  We can predict the
>statistical behavior of collections of nuclei, but that's not an
>"explanation" of the behavior any more than predicting the decay of the
>orbits of a collection of satellites is an "explanation" of the process.

This is so wrong I am not sure even where to start...

We can predict, with (in principle) perfect accuracy, the wavefunction
of the nucleus at all times.

What we cannot predict is the location at which a particle will be
observed at a specific time in the future.

But your assertion that the above means we cannot model the system is
absolutely false.

I recommend you read up on Bell's inequality and the Aspect
experiment.

  -- Dave


------------------------------

From: Dave Knapp <[EMAIL PROTECTED]>
Subject: Re: Random Noise Encryption Buffs (Look Here)
Date: Fri, 03 Dec 1999 10:24:14 GMT

On Sun, 28 Nov 1999 12:35:18 GMT, Tom St Denis <[EMAIL PROTECTED]>
wrote:

>In article <[EMAIL PROTECTED]>,
>  "Douglas A. Gwyn" <[EMAIL PROTECTED]> wrote:
>> Obviously.  But when you post your personal hallucinations,
>> you should label them as such, not pretend that they are facts.
>
>And 500 years ago the world was flat, and that was fact.

And 100 years ago the Universe was entirely predictable, and _that_
was a fact.  It has since been shown false.

So your point is...?

  -- Dave


------------------------------

From: [EMAIL PROTECTED] (Guy Macon)
Crossposted-To: sci.math
Subject: Re: Why Aren't Virtual Dice Adequate?
Date: 03 Dec 1999 05:25:41 EST

In article <826ur2$dqh$[EMAIL PROTECTED]>, [EMAIL PROTECTED] (r.e.s.) wrote:

>In practice, though, who would use a "pure OTP" without
>further strengthening? (Even if the OTP is theoretically
>"unbreakable", it seems appropriate to say that any
>OTP *implementation* can, in practice, be relatively
>strong or weak.)

Maybe it's my background as an engineer, but if I were actually
implementing an OTP (instead of using PGP and just discussing
OTP as a learning experience), I would go full overkill on the
randomizer, XORing in everything from the number of microseconds
between keystrokes to the digitized output of my local AM
station, the theory being that if any one of my "random" inputs
is truly random, the OTP will be random.  I would then pad my
plaintext to a standard length and XOR it with the OTP (which will
probably be on a CD-R).  If (given the limitations that have been
pointed out) an OTP is unbreakable, how can it be strengthened?
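The "overkill XOR" idea rests on a simple property: the XOR of several byte streams is uniformly random as long as at least one stream is uniformly random and independent of the others. A toy sketch (the source names and sample bytes are placeholders of my own, not real devices):

```python
import os

def combine(*sources: bytes) -> bytes:
    """XOR several byte streams together, truncated to the shortest."""
    n = min(len(s) for s in sources)
    out = bytearray(n)
    for s in sources:
        for i in range(n):
            out[i] ^= s[i]
    return bytes(out)

# Placeholder entropy sources -- stand-ins for keystroke timings,
# radio noise, etc.  One genuinely good source suffices.
keystroke_timings = bytes([17, 203, 88, 41])
radio_noise = bytes([250, 3, 99, 140])
os_random = os.urandom(4)

pad = combine(keystroke_timings, radio_noise, os_random)
ciphertext = combine(pad, b"HIYA")           # XOR pad with plaintext
assert combine(ciphertext, pad) == b"HIYA"   # XOR again to decrypt
```

Note the pad must still never be reused and must be as long as the (padded) plaintext, or the one-time-pad security argument no longer applies.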



------------------------------

From: Anthony Stephen Szopa <[EMAIL PROTECTED]>
Subject: Re: Random Noise Encryption Buffs (Look Here)
Date: Fri, 03 Dec 1999 02:31:07 -0800
Reply-To: [EMAIL PROTECTED]

Guy Macon wrote:

> In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] (Anthony Stephen Szopa) wrote:
> >
> >Tom St Denis wrote:
> >
> >> In article <[EMAIL PROTECTED]>,
> >>   "Douglas A. Gwyn" <[EMAIL PROTECTED]> wrote:
> >> > Tom St Denis wrote:
> >> > > If I took two exact copies [leave the copying theory behind here] of
> >> > > an atom, and placed them in two exact same environments.  Would they
> >> > > not decay the same way?  If so, that's hardly random at all.
> >> >
> >> > The simple answer is, no, two identically prepared quantum systems,
> >> > constrained as tightly as nature allows, need not evolve along the
> >> > same path.
> >> >
> >>
> >> That's like saying each time you went back in time [the exact same
> >> time] you would observe a different state.  Which means a atom can
> >> never be in one state at any time.  Kinda like an omni-state..
> >>
> >> Tom
> >>
> >> Sent via Deja.com http://www.deja.com/
> >> Before you buy.
> >
> >That's correct.
> >
> >The atom has all possible states until you observe it.
> >
> >Thus it is the act of observing that produces the randomness.
> >
> >And no two observations are exactly the same.
> >
> >So, guess what?
> >
> >You have just discovered true randomness.
> >
>
> This raises an interesting question about human psychology.
> For some reason, various people have a deep need not to
> believe in randomness or unbreakable codes.  The more rational
> among us are content with pointing out the practical difficulties
> of using atomic decay or One Time Pads, or the many other ways to
> obtain information, but there are others who show the following
> attributes:
>
> [1] They know in their hearts that unbreakable codes and/or
>     randomness cannot possibly exist.
>
> [2] They have never been taught critical thinking skills.
>
> I wonder why some of us have this deep need to believe?

If they have a need to believe and leave it at that then perhaps they
are suffering from lazy brain.

If they have a need to believe but then try to prove otherwise, I will
give them credit for making an attempt to expand our knowledge, and I
congratulate them.



------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
