Cryptography-Digest Digest #166, Volume #10       Fri, 3 Sep 99 11:13:03 EDT

Contents:
  Re: 512 bit number factored ([EMAIL PROTECTED])
  Re: Home Invasion Bill Drives U.S. Computer Users across border (pbboy)
  Re: 512 bit number factored (Bob Silverman)
  Re: Odp: THINK PEOPLE (pbboy)
  Re: 512 bit number factored (SCOTT19U.ZIP_GUY)
  Encryptor 4.1 reviews please. (pbboy)
  Re: 512 bit number factored (DJohn37050)
  Re: THINK PEOPLE (Frank Gifford)
  Initial authentication of a Network Control Center (was Using Diffie-Hellman to 
encode keys) (Thierry Moreau)
  Automated way to find the encryption algorithm ("Lukas Lord")
  Re: Encryptor 4.1 reviews please. (SCOTT19U.ZIP_GUY)
  Re: Can we have randomness in the physical world of "Cause and Effect" ? (Tim Tyler)

----------------------------------------------------------------------------

From: [EMAIL PROTECTED]
Subject: Re: 512 bit number factored
Date: Fri, 03 Sep 1999 11:51:16 GMT

In article <7qnj7i$[EMAIL PROTECTED]>,
  [EMAIL PROTECTED] (Paul Rubin) wrote:
> Wei Dai <[EMAIL PROTECTED]> wrote:
> >Now a question of my own: does anyone actually use 512-bit keys for
> >e-commerce, as CWI's press release claims?
>
> Yes, I spend a fair amount of time looking at SSL certificates and
> occasionally still see some 512 bit ones.  It's nothing like the 95%
> that CWI claimed, though.  More like 10%, from the sample I've looked
> at.
>
> You can tell the size of an SSL key by connecting to the web site with
> MS Internet Explorer and clicking on the lock icon, and viewing "key
> exchange" in the SSL properties dialog.  This is with MSIE 4.0; I
> don't have an MSIE 5 browser handy and I think they've changed the
> interface somewhat, but they still show the info.  Netscape 4.5
> unfortunately doesn't show the key length.

A large number of corporate-bank and even some inter-bank payment links
use 512 bit RSA (or even symmetric technologies). In value, and probably
in volume, these links eclipse any Internet-based eCommerce. I believe
S.W.I.F.T.'s keys are longer, but then they move something like USD 9
trillion/day.

These links used to be called EFT or EDI, but have recently been renamed
eCommerce. :-)

  -Terje


Sent via Deja.com http://www.deja.com/
Share what you know. Learn what you don't.

------------------------------

From: pbboy <[EMAIL PROTECTED]>
Crossposted-To: alt.privacy.anon-server
Subject: Re: Home Invasion Bill Drives U.S. Computer Users across border
Date: Fri, 03 Sep 1999 08:57:21 -0400



Anonymous wrote:

> Privacy Concerns - http://www.angelfire.com/biz/privacyconcerns/index.html
>
> Home Invasion Bill Drives U.S. Computer Users to Canadian Privacy Firm
> Zero-Knowledge Systems
>
>      MONTREAL--(BUSINESS WIRE)--Aug. 24, 1999--Zero-Knowledge Bombarded
> With Requests to Release Freedom(TM) Following Disclosure of 'Cyberspace
> Electronic Security Act'      A US Justice Department proposal to secretly
> enter its citizens' homes and disable security features on their computers

At the risk of sounding completely oblivious to current events, I must ask: is
this real?


------------------------------

From: Bob Silverman <[EMAIL PROTECTED]>
Subject: Re: 512 bit number factored
Date: Fri, 03 Sep 1999 12:47:19 GMT

In article <[EMAIL PROTECTED]>,
  [EMAIL PROTECTED] (Wei Dai) wrote:
> In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED]

> > So, supposing you write a similar comment 9 years from now, how could
> > you fill in the blank:
> >
> > It has been well known since 1999 that _____ bit keys were breakable
> > and within computer capabilities.

<snip>

> Today, 20
> thousand computers (500 MIPS each at 1/4 the price of a 1990 computer)
> for a year lets you factor a 700 bit number.

Yes and no.

A 700 bit number is about 730 times as difficult as 512 bits
in terms of *time* and 27 times as difficult in terms of space.

Each of these 20K computers will need 2 to 3 Gbytes of memory
(for the sieving phase).

In 1990 my Sparc-10 on my desk had 32M of RAM.  Now,  my
dual-proc P-450 has 256M.   We *might* see workstations & desktops
with 2-3Gbytes in 10 years,  but I doubt that they will be common
enough to gather 20,000 of them for a year.  I don't see most
applications needing that kind of memory.  512M???  Sure!  But
not 3G.

It took a very large Cray (C90)  10 days and about 2.4 Gbytes
of memory to handle the matrix.  I don't see Crays getting
significantly faster in the next 9 years.  We might see a factor of
4 to 5, but I doubt more than that.

With C90 hardware, the matrix for 700 bits would take 7300 days
and require about 60 Gbytes of memory.

Everyone seems to always forget about scaling the space requirements
and solving the matrix.

I don't see 700 bits being done within 10 years without an
algorithmic improvement.
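Those ratios can be checked against the standard heuristic GNFS running time,
L(n) = exp((64/9)^(1/3) (ln n)^(1/3) (ln ln n)^(2/3)). A quick back-of-the-envelope
sketch (the square-root relation for sieving space is a common rule of thumb, not
an exact law):

```python
import math

def gnfs_l(bits):
    """Heuristic GNFS cost L[1/3, (64/9)^(1/3)] for a modulus of `bits` bits."""
    ln_n = bits * math.log(2)
    return math.exp((64 / 9) ** (1 / 3) * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3))

time_ratio = gnfs_l(700) / gnfs_l(512)   # roughly 730x the work of a 512-bit job
space_ratio = math.sqrt(time_ratio)      # space grows about as sqrt(time): roughly 27x
print(round(time_ratio), round(space_ratio))
```

The constants drop out of the ratio, so this crude formula reproduces the
"about 730 times" and "27 times" figures quoted in the post.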



>
> If we assume no further algorithmic improvements and that computing power
> per dollar continues to increase at a factor of 1.5 per year, then 9
> years from now an effort similar to RSA-155 (about 50 CPU-years) should
> be able to break 600-650 bit numbers.

I agree that numbers around 625-650 bits will be at the edge
of what is feasible in 10 years.  But larger than that?  The space
requirement will become a binding constraint.

(barring algorithmic improvements)

If you disagree with my assessment of common computer
capability in 10 years,  you may substitute your own numbers.

But increase in the number and speed of computers will not
alone suffice for 700 bits.


--
Bob Silverman
"You can lead a horse's ass to knowledge, but you can't make him think"



------------------------------

From: pbboy <[EMAIL PROTECTED]>
Subject: Re: Odp: THINK PEOPLE
Date: Fri, 03 Sep 1999 09:10:20 -0400



[EMAIL PROTECTED] wrote:

>     It is very sad, since this time David is  correct. His
> method would secure the message, the other ones lack that
> ability in this case. However; it's kind of like bringing a gun to a
> judo match.

If the enemy were a Judo Grandmaster and I weren't, I'd want that gun.

Quite honestly, I really don't understand what this discussion is about.
The only thing I require in an encryption program is _security_.  Time
really doesn't play a role in what I do.  The largest file I've encrypted
is about 20MB, and it doesn't take too long.

I'm not defending anyone, this is just my opinion.





------------------------------

From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: 512 bit number factored
Date: Fri, 03 Sep 1999 14:23:45 GMT

In article <7qog0k$aj4$[EMAIL PROTECTED]>, Bob Silverman <[EMAIL PROTECTED]> wrote:
>In article <[EMAIL PROTECTED]>,
>  [EMAIL PROTECTED] (Wei Dai) wrote:
>> In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED]
>
>> > So, supposing you write a similar comment 9 years from now, how could
>> > you fill in the blank:
>> >
>> > It has been well known since 1999 that _____ bit keys were breakable
>> > and within computer capabilities.
>
><snip>
>
>Today, 20
>> thousand computers (500 MIPS each at 1/4 the price of a 1990 computer)
>> for a year lets you factor a 700 bit number.
>
>Yes and no.
>
>A 700 bit number is about 730 times as difficult as 512 bits
>in terms of *time* and 27 times as difficult in terms of space.
>
>Each of these 20K computers will need 2 to 3 Gbytes of memory
>(for the sieving phase)
>
>In 1990 my Sparc-10 on my desk had 32M of RAM.  Now,  my
>dual-proc P-450 has 256M.   We *might* see workstations & desktops
>with 2-3Gbytes in 10 years,  but I doubt that they will be common
>enough to gather 20,000 of them for a year.  I don't see most
>applications needing that kind of memory.  512M???  Sure!  But
>not 3G.
     Sounds an awful lot like the famous Bill Gates quote. How
did it go? Something like no one could possibly need more than
640K bytes for anything. However, I think history has proved
him wrong.
>
>It took a very large Cray (C90)  10 days and about 2.4 Gbytes
>of memory to handle the matrix.  I don't see Crays getting
>significantly faster in the next 9 years.  We might see a factor of
>4 to 5, but I doubt more than that.
>
>With C90 hardware, the matrix for 700 bits would take 7300 days
>and require about 60 Gbytes of memory.
>
>Everyone seems to always forget about scaling the space requirements
>and solving the matrix.
>
>I don't see 700 bits being done within 10 years without an
>algorithmic improvement.
    A way out. I like that you can save face that way.
>
>
>
>>
>> If we assume no further algorithmic improvements and that computing power
>> per dollar continues to increase at a factor of 1.5 per year, then 9
>> years from now an effort similar to RSA-155 (about 50 CPU-years) should
>> be able to break 600-650 bit numbers.
>
>I agree that numbers around 625-650 bits will be at the edge
>of what is feasible in 10 years.  But larger than that?  The space
>requirement will become a binding constraint.
>
>(barring algorithmic improvements)
>
>If you disagree with my assessment of common computer
>capability in 10 years,  you may substitute your own numbers.
>
>But increase in the number and speed of computers will not
>alone suffice for 700 bits.
>
>


David A. Scott
--
                    SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE
                    http://www.jim.com/jamesd/Kong/scott19u.zip
                    http://members.xoom.com/ecil/index.htm
                    NOTE EMAIL address is for SPAMERS

------------------------------

From: pbboy <[EMAIL PROTECTED]>
Subject: Encryptor 4.1 reviews please.
Date: Fri, 03 Sep 1999 09:41:16 -0400

Has anyone used or heard of Encryptor 4.1 by Dr. Peter Sorvas & Bill
Giovinetti?  Here's their page:

http://ourworld.compuserve.com/homepages/psorvas/

They don't offer the source code, though.  There is little information
about the program itself on the page, mainly descriptions of the
algorithms used.   I just need some _knowledgeable_ opinions about this
program, considering the little technical information available.


Related to above:

How would a layperson, such as myself, evaluate an encryption program to
see if it is secure?  Would a search for new files before and after the
process enable one to see if copies were made to parts of the
drive?  Where would/should/could one look for weaknesses?
I know these are very broad questions, but I feel a bit uneasy about
entrusting my files to a program that I do not completely trust or
understand.  Until I can program one for myself (hopefully in a few
months), I have to rely on ones offered by strangers and on opinions of
them, again, by strangers.     I'm sure you can understand my paranoia.

Thanks!

pbboy


------------------------------

From: [EMAIL PROTECTED] (DJohn37050)
Subject: Re: 512 bit number factored
Date: 03 Sep 1999 14:34:58 GMT

OK, here are some reworded questions on RSA key size for Bob, Wei, and
anyone else to comment on:
1. My understanding is that the GNFS has 2 steps: (A) Gathering equations,
which can be done in parallel with little memory and (B) Solving the matrix,
which cannot be totally done in parallel and takes lots of memory.  If someone
just did (A) and reported it, would you use that key?
2. Do you want to depend on the fact that today (B) cannot be done in parallel
to estimate what can be done in 10 years?
3. Do you want to depend on the fact that today (B) takes lots of memory to
estimate what can be done in 10 years?

My personal (probably simplistic) answers are (right now, unless I hear more):
no, no, no.
Don Johnson

------------------------------

From: [EMAIL PROTECTED] (Frank Gifford)
Subject: Re: THINK PEOPLE
Date: 3 Sep 1999 09:40:16 -0400

In article <7qnc2g$fp2$[EMAIL PROTECTED]>,
David A Molnar  <[EMAIL PROTECTED]> wrote:
>Frank Gifford <[EMAIL PROTECTED]> wrote:
>> over the entire message multiple times.  So if you have all except the last
>> 100 bytes of the encryption, you are unable to decrypt anything at all.
>
>If you have another message which you know has an identical last 100
>bytes to the message you want to recover, _and_ a deterministic encryption
>scheme + all or nothing transform, then you cut and paste and it'll work.

If you don't have the final 100 bytes of the target encrypted message, how
do you know that they are identical to the last 100 bytes of some other
message?

-Giff

-- 
Too busy for a .sig

------------------------------

From: Thierry Moreau <[EMAIL PROTECTED]>
Reply-To: [EMAIL PROTECTED]
Subject: Initial authentication of a Network Control Center (was Using Diffie-Hellman 
to encode keys)
Date: Fri, 03 Sep 1999 09:03:51 -0400

Hi,

May I suggest the following abstraction for your application security
requirements:

You have a Network Control Center (NCC) which must obtain a "handle" in
a set of end-point nodes. The "handle" takes the form of an activated
network control software installed on the end-point node and set up with
symmetric keys (long term DES keys).

Upon installation of the software on the end-node, you must authenticate
the NCC with which the symmetric keys will be shared. You correctly
identified the NCC public key X as being the Achilles' heel in this
authentication requirement.

Beyond the limits of cryptography in this application is the threat of
bogus software: one with the same "look and feel" as the genuine one,
but with a spoofed NCC public key.

Since you seem to be in a private application world, you should
implement due diligence control over the final release of the software,
but anything done in this area is limited.

Next, you may rely on built-in end-node mechanisms for server
authentication. Such mechanisms are a scarce resource as well! E.g. a
browser verifying a server certificate before a download is done (but
certificate verification is so easy to turn off by the end-node user, as
a trick to shut down annoying error messages!).

Actually, your most promising resource may come from the fact that the
installation process is done *manually* at the end-nodes. You should
have the person call (I mean a telephone call; initial authentication
always needs some un-automated process step) an agent of the NCC when (or
soon after) doing the installation. You then have an opportunity to
authenticate the end-node, e.g. using the SAKEM procedure
(http://www.connotech.com/sakem.htm).

More fundamentally, your ideal solution would be an authentication in the
reverse direction: authenticating the NCC during a telephone call. Then,
it is the end node that should announce a public key (send the pair
<end-node IP address, end-node public key> to whoever might pretend to
be an NCC, and filter out bogus NCC applicants using the SAKEM
procedure). But this might not be as practical in your case as it might
be in other cases (your application looks like initial security
configuration of SNMP nodes). In particular, the end-user doing the
installation is not necessarily as concerned with security as you would
like.

In conclusion, since shortcuts are almost always needed upon initial
setting up of a secure system, I suggest the end-node authentication
using out-of-band verification of identity (the SAKEM procedure). This
would make the creation of a bogus NCC a very difficult task.

Most of the subtle attack scenarios (hopefully all of them!) that you
are concerned about have been considered in the development process for
the SAKEM procedure.

A final suggestion: make the end-node software such that a single NCC
can be configured, and verify that each and every end-node is registered
with the legitimate NCC by a given date.

Good luck!

- Thierry Moreau
President
CONNOTECH Experts-Conseils Inc.
9130 Place de Montgolfier
Montreal, Qc
Canada H2M 2A1

Tel.: +1-514-385-5691
Fax: +1-514-385-5900
e-mail: [EMAIL PROTECTED]

Eric Lee Green wrote:
> 
> DJohn37050 wrote:
> > Also, it seems to me you are open to a man in the middle attack.
> 
> Am I? Here's how it works. X is generated on the server. If you want to install
> the software on workstations, you generate an installation image on the server
> which then is placed on a diskette and carried to the target machines, or
> transmitted over via ssh or some other secure method (e.g. an SSL-enabled
> browser). This install image has X on it. The target machine generates Y. They
> do the Diffie-Hellman handshake to establish their DES key 'd'. Then everything
> else forever afterwards is done via DES, including negotiating DES session
> keys, or even negotiating a new DES master key that's not related to the known
> Y value. Note that generating a separate diskette or image for each machine to
> be installed, with a pre-defined 'd' for that machine, is not feasible because
> it could be thousands of machines being installed from a single image.
> 
> Hmm... let's see what kind of attacks there could be with a common X:
> 
> John         Paul    Ringo          Shared(network)  Shared(secret)
>  x                                                 X=g**x mod N
>               y                      Y=g**y mod N
>                       z              Z=g**z mod N
>  k_y=Y**x mod N
>               k_y=X**y mod N
>  k_z=Z**x mod N       k_z=X**z mod N
> 
> Hmm, if a machine is compromised and X uncovered, a man-in-the-middle attack
> could occur. Otherwise, Paul and Ringo won't arrive at the same 'k' as the
> attacker. In addition, this is not on the public Internet so we can with some
> certainty assume that the IP address for John is valid and has not been
> hijacked (I run the 'arpwatch' utility and use a switched network fabric in my
> network, and I assume that any network running secure traffic will have similar
> characteristics), and we can encode this on the install image also. Of course,
> assuming anything is ridiculous, and if { a) the attacker gets his hands on the
> diskette and thus gets X, or b) the image is transmitted via an insecure
> channel and the attacker gets X, or c) gets X from one of the client machines,}
> and d) the network architecture allows address "spoofing" (to negate the
> hardwired IP address in the image), a man-in-the-middle attack is quite
> possible.
> 
> Note that I'm concerned about somebody spoofing being John. I don't care if
> someone pretends to be Paul or Ringo, because while John can screw Paul and
> Ringo, Paul or Ringo cannot screw John in this particular application. (Unlike
> real life, grin). John always initiates the connections and tells Paul or Ringo
> what to do, Paul and Ringo never initiate connections or tell John what to do.
> John does receive data from Paul and Ringo, but stashes that data aside and
> otherwise does nothing with it.
> 
> Hmm. Perhaps multiple x's, generated each time an image is generated. Thus if
> you do a bulk install of 500 machines off of one image with X_1 on the image,
> John then initiates the DH chat with each machine to do the initial key
> transfer.  But there may be multiple images for multiple architectures
> outstanding at any given time, so this also means that the client will need to
> transmit an identifier for "which X?" as part of sending Y to John, so that
> John knows which x to use for that client's key so that their values of k are
> matched. This would avoid the problem of the attacker getting X off of a
> machine installed last month, then using it to do a man-in-the-middle attack on
> a machine installed this month (assuming a new install image was generated for
> the machine installed this month).
> 
> Of course, if John is compromised, the whole game is up, since he has copies of
> everybody's keys. Since this all must run unattended, we can't use SPEKE or
> such to require a passphrase in addition to the stored values. I prefer not to
> think about it (sigh), since a compromised John could push a new /etc/passwd
> out to Paul or Ringo and really screw them :-(. Thus if I were attacking this
> setup, I would attack John, not the encryption.
> 
> -- Eric Lee Green   [EMAIL PROTECTED]
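The handshake Eric describes (a fixed X baked into the install image, a fresh Y
per end-node, both sides deriving the same k) can be sketched as follows. The
modulus here is a small illustrative prime, nothing like a production-grade
group, and this sketch deliberately omits the authentication of X that the
thread shows is the real problem:

```python
import secrets

# Toy Diffie-Hellman sketch of the scheme described above.  The prime is
# 2**64 - 59, chosen only so the arithmetic is visible; a real deployment
# needs a far larger safe prime and, crucially, an authenticated channel
# for X to stop the man-in-the-middle attacks discussed in the thread.
N = 2**64 - 59
g = 2

x = secrets.randbelow(N - 2) + 1    # John's secret; X = g**x ships on the image
X = pow(g, x, N)

y = secrets.randbelow(N - 2) + 1    # Paul's secret, generated at install time
Y = pow(g, y, N)                    # Paul sends Y back to John

k_john = pow(Y, x, N)               # John: k = Y**x mod N
k_paul = pow(X, y, N)               # Paul: k = X**y mod N

assert k_john == k_paul             # both ends now share key material k
```

Anyone who recovers x from a compromised image can compute the same k, which is
exactly the attack scenario the post walks through.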

------------------------------

From: "Lukas Lord" <[EMAIL PROTECTED]>
Subject: Automated way to find the encryption algorithm
Date: Fri, 03 Sep 1999 13:56:46 GMT

I'm interested to know if there exist tools or methods to do the following:

Let's say I can get information about as many unencrypted value / encrypted
value pairs as I want, in the form below:

<integer> <string of 48 characters>

Is there a way to find what the encrypted value of a certain unencrypted value
will be?

Lukas Lord
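With an oracle that produces such pairs on demand, the cheapest first
diagnostics are to check whether the mapping is deterministic and how much the
output changes between adjacent inputs. A hypothetical sketch (`toy_encode` is
a stand-in for the real black box, not anything from the original post):

```python
import hashlib

def toy_encode(n: int) -> str:
    """Hypothetical stand-in for the unknown black box: 48-character output."""
    return hashlib.sha384(str(n).encode()).hexdigest()[:48]

def probe(encode):
    deterministic = encode(12345) == encode(12345)
    # Count differing character positions for two adjacent inputs.  A high
    # count suggests real diffusion; a low count suggests a simple encoding
    # (e.g. hex of the integer plus padding) that may be reversible by hand.
    a, b = encode(12345), encode(12346)
    diffusion = sum(x != y for x, y in zip(a, b))
    return deterministic, diffusion

det, diff = probe(toy_encode)
```

If the box is deterministic with low diffusion, collecting structured
chosen-input pairs may reveal the algorithm; if it diffuses strongly, this kind
of black-box probing alone is unlikely to recover it.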





------------------------------

From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: Encryptor 4.1 reviews please.
Date: Fri, 03 Sep 1999 15:45:52 GMT

In article <[EMAIL PROTECTED]>, pbboy <[EMAIL PROTECTED]> wrote:
>Has anyone used or heard of Encryptor 4.1 by Dr. Peter Sorvas & Bill
>Giovinetti?  Here's their page:
>
>http://ourworld.compuserve.com/homepages/psorvas/
>
>They don't offer the source code, though.  There is little information
>about the program itself on the page, mainly descriptions of the
>algorithms used.   I just need some _knowledgeable_ opinions about this
>program, considering the little technical information available.
>
>
>Related to above:
>
>How would a layperson, such as myself, evaluate an encryption program to
>see if it is secure?  Would a search for new files before and after the
>process enable one to see if there were copies made to parts of the
>drive?  Where w/sh/could one look for weaknesses?
>I know these are very broad questions, but i feel a bit uneasy about
>entrusting my files to a program that i do not completly trust or
>understand.  Until i can program one for myself (hopefully in a few
>months), i have to rely on ones offered by strangers and opinions of
>them, again, by strangers.     I'm sure you can understand my paranoia.
>
>Thanks!
>
>pbboy
>

  There is little you can do without the source code if you're an amateur.
However, if the code does not modify the length of the file, you can do some
checking. Many programs have a mode where even if you encrypt the same file
twice you get different results. Unless you can turn this feature off, it is very
hard to check the output.
 One good, fast way, assuming the above is true, is to encrypt,
then change the output and decrypt; this will tell you the block size,
IF it uses the standard government-approved chaining methods.
After that, make small changes in the input file and XOR the output
with the outputs from encrypting the slightly changed files. Look at the
results and run the DIEHARD tests. Or try to compress the output. If
a method encrypts text and the result compresses to a smaller file, it is
likely very weak. You can also see what other attacks have been
done on the method. But here you have to be very careful, because
powerful people are in control of encryption and the government does
not want you to use secure encryption. For example, David Wagner has
belittled my method several times and even said his Slide Attack shows
that it would be dead. Of course he was lying. But people seldom remember
the lies. His only real defense was that he could not read my code, even
though I supply the total source code, every word, and it compiles and runs.
Yet this encryption god could not decipher it? Makes one wonder how the
hell he can read anything encrypted if he can't look at working source code.
Yes, the amateur has to be real careful, because it is not in the interest of
the experts to let the people of the world communicate in an open and
free manner.
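The compression test mentioned above is easy to automate: ciphertext from a
sound cipher should look like random bytes, so a general-purpose compressor
should fail to shrink it. A minimal sketch using zlib (the 1% slack threshold
and the stand-in data are arbitrary choices, and passing this test is
necessary, not sufficient, for security):

```python
import os
import zlib

def looks_incompressible(data: bytes, slack: float = 0.99) -> bool:
    """True if zlib at maximum effort cannot shrink `data` below 99% of its size."""
    return len(zlib.compress(data, 9)) > slack * len(data)

random_like = os.urandom(64 * 1024)     # stand-in for output of a sound cipher
structured = b"attack at dawn " * 4096  # stand-in for weak, pattern-leaking output

assert looks_incompressible(random_like)
assert not looks_incompressible(structured)
```

A cipher whose output compresses noticeably is leaking plaintext structure; the
DIEHARD battery mentioned above applies much stricter statistical tests to the
same idea.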





David A. Scott
--
                    SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE
                    http://www.jim.com/jamesd/Kong/scott19u.zip
                    http://members.xoom.com/ecil/index.htm
                    NOTE EMAIL address is for SPAMERS

------------------------------

Crossposted-To: sci.physics
From: Tim Tyler <[EMAIL PROTECTED]>
Subject: Re: Can we have randomness in the physical world of "Cause and Effect" ?
Reply-To: [EMAIL PROTECTED]
Date: Fri, 3 Sep 1999 14:22:21 GMT

In sci.physics Douglas A. Gwyn <[EMAIL PROTECTED]> wrote:
: Tim Tyler wrote:

:> In MWI there is no process equivalent to "wave function collapse" - a
:> notion that the EPR "paradox" hinges upon.

: The Multiple-Worlds equivalent of the Copenhagen collapse is a
: splitting of the world-path of the system.

I don't think so.  "Splitting" (and converging) goes on all the time.
Notions of an "observer" are not involved.  Consequently the ideas seem to
be completely different to me.

My statement that there's "nothing equivalent" in the MWI may be a bit
strong - but certainly one way of looking at the MWI is to consider the
idea that "wave functions never collapse".

: EPR in no way depends on the Copenhagen interpretation

Yes, sorry, I didn't mean to imply that it did.  If you look at
explanations of the EPR, you will often find it explained in terms of wave
function collapse, though.

: [...] and further it is generally considered that Aspect et al. have
: demonstrated that the EPR weirdness actually does occur [...]

Yes, of course.

: [...] so if MWI differs in that prediction then it is wrong.

The CI and MWI make /virtually/ identical predictions.

The MWI /does/ differ in that it does not include any notion of an
"observer".  While the interpretations agree on almost everything, they
differ on predictions about whether human beings (i.e. observers) can be
in quantum wave-states, and interfere with one another.

MWI says "yes" and CI says "no".  As the "effective wavelength" of a human
being is rather large, the corresponding interference patterns are not
easy to observe - but an experiment to distinguish between the theories
is at least possible.

It seems obvious to me that MWI will eventually prove to be correct - the
CI has always been the height of metaphysical nonsense ;-)

[followups to sci.physics only - any relevance to sci.crypt has been lost]
-- 
__________
 |im |yler  The Mandala Centre  http://www.mandala.co.uk/  [EMAIL PROTECTED]

Never call a man a fool; instead borrow from him.

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
