Re: Welcome to the Internet, here's your private key

2002-02-05 Thread Dean Povey

At 04:24 PM 2/4/2002 -0800, Bill Frantz wrote:
At 2:09 PM -0800 2/4/02, [EMAIL PROTECTED] wrote:
 1) A typical message would have a 20-byte nonce random number, which
 computed to a 20-byte SHA1 and then encrypted with RSA resulting in 20-byte
 signature (basic message plus 40-byte infrastructure overhead, signature
 plus nonce).

I think an RSA signature can be no larger than the key modulus, so an RSA 
sig with a 1024-bit key is going to be 128 bytes plus some overhead, no 
matter what. I think you (Lynn) meant DSA here. Or maybe you did mean RSA, 
given that you then go on to DSA... I don't know.

But the analysis still holds of course, the modulus is just an upper 
bound, and  this is just me being pedantic.
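Dean's point, that an RSA signature occupies the full width of the key modulus, is easy to see in a textbook-RSA sketch (toy 20-bit modulus, no padding; these parameters are illustrative only, and real keys use a 1024-bit-or-larger modulus with proper padding):

```python
# Textbook RSA signing sketch.  The signature s = H(m)^d mod n is a
# residue mod n, so its encoding is as long as the modulus regardless
# of the message size -- 128 bytes for a 1024-bit key.
p, q = 1009, 1013            # toy primes; real keys use ~512-bit primes each
n = p * q                    # public modulus
phi = (p - 1) * (q - 1)
e = 65537                    # common public exponent
d = pow(e, -1, phi)          # private exponent (Python 3.8+)

digest = 42                  # stand-in for a hash value reduced mod n
sig = pow(digest, d, n)      # sign:   s = H(m)^d mod n
assert pow(sig, e, n) == digest   # verify: s^e mod n == H(m)

# The signature is bounded by n, so it needs ceil(log2(n)/8) bytes.
print((n.bit_length() + 7) // 8, (sig.bit_length() + 7) // 8)
```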

-- 
Dean Povey,  |em: [EMAIL PROTECTED]|  JCSI: Java security toolkit
Senior S/W Developer |ph:  +61 7 3864 5120| uPKI: Embedded/C PKI toolkit
Wedgetail Communications |fax: +61 7 3864 1282|   uASN.1: ASN.1 Compiler
Brisbane, Australia  |www: www.wedgetail.com  | XML Security: XML Signatures 



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



[Mojonation-devel] Re: mojonation?

2002-02-05 Thread R. A. Hettinga


--- begin forwarded text


Status:  U
From: Myers W. Carpenter [EMAIL PROTECTED]
To: mojonation [EMAIL PROTECTED],
[EMAIL PROTECTED]
Subject: [Mojonation-devel] Re: mojonation?
Sender: [EMAIL PROTECTED]
List-Help: mailto:[EMAIL PROTECTED]?subject=help
List-Post: mailto:[EMAIL PROTECTED]
List-Subscribe:
https://lists.sourceforge.net/lists/listinfo/mojonation-devel,
mailto:[EMAIL PROTECTED]?subject=subscribe
List-Id: For developers hacking Mojo Nation code
mojonation-devel.lists.sourceforge.net
List-Archive: http://www.geocrawler.com/redir-sf.php3?list=mojonation-devel
Date: 04 Feb 2002 21:04:41 -0500

On Mon, 2002-02-04 at 16:29, Oscar Haeger wrote:
 Hello.
 I'm just curious about what's happened to mojonation. The webpage seems to
 be down, I can't contact any relay-servers or any of the mojonation central
 servers. There is no action on either the forum or any of the lists. It's
 like the whole mojonation devel-team along with most of the active users
 just vanished. Anyone care to tell me and/or the list what's going on?

Here's the story as much as I know:

Evil Geniuses for a Better Tomorrow, the company that created Mojo
Nation, is in hibernation after running out of funding.  All the
developers were let go.

Zooko, one of those developers, has been working on MN by himself for
the last few months.  He is currently having some net connectivity
problems and hasn't been seen on IRC for about a week.

I wrote a GUI for Mojo Nation that works under windows and linux (and
maybe Mac OS X too).

Zooko is planning on creating a fork called MNet (I think it needs a
better name).  The plan was to make use of the Mojo Nation servers that
were up.

I don't have any idea what's going on with the server hosting
mojonation.net, or the metatrackers.  They are run by EGBT, as is the
Token server. If they are down, we'll probably put new metatrackers
up, but we can't replace the Token server (we never had the source for
it).  Mojo Nation will work without the Token server.

myers



___
Mojonation-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/mojonation-devel

--- end forwarded text


-- 
-
R. A. Hettinga mailto: [EMAIL PROTECTED]
The Internet Bearer Underwriting Corporation http://www.ibuc.com/
44 Farquhar Street, Boston, MA 02131 USA
... however it may deserve respect for its usefulness and antiquity,
[predicting the end of the world] has not been found agreeable to
experience. -- Edward Gibbon, 'Decline and Fall of the Roman Empire'

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: Cringely Gives KnowNow Some Unbelievable Free Press... (fwd)

2002-02-05 Thread Eugene Leitl



-- Eugen* Leitl <a href="http://leitl.org">leitl</a>
__
ICBMTO: N48 04'14.8'' E11 36'41.2'' http://www.leitl.org
57F9CFD3: ED90 0433 EB74 E4A9 537F CFF5 86E7 629B 57F9 CFD3

-- Forwarded message --
Date: Tue,  5 Feb 2002 11:10:49 +0100 (CET)
From: Robert Harley [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject: Re: Cringely Gives KnowNow Some Unbelievable Free Press...

Eugene Leitl wrote:
If you want to see EC used you need to describe a specific algorithm
which has the following three properties:

(1) widely agreed to be unencumbered, [...]

No problem.  Use a random secure curve over a binary field with a
polynomial basis (or over a random prime field).  Do it in software
using standard text-book algorithms.  Use a protocol that is
in the clear such as plain Diffie-Hellman key exchange.

That is what we do e.g., sharing a symmetric key for
encryption/decryption using Rijndael.  The only patent that is
relevant for us is our own one (pending) on a fast method for
generating random secure curves.
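The "plain Diffie-Hellman key exchange" Harley mentions is a few lines of modular exponentiation. A minimal sketch, with an assumed toy group (the Mersenne prime 2^127 - 1 and generator 3, chosen here only for illustration; a real deployment would use a standardized group):

```python
# Plain finite-field Diffie-Hellman: both parties derive g^(ab) mod p
# from values exchanged in the clear.
import secrets

p = 2**127 - 1                       # toy prime modulus (illustration only)
g = 3                                # assumed generator for the sketch

a = secrets.randbelow(p - 2) + 1     # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1     # Bob's secret exponent

A = pow(g, a, p)                     # Alice -> Bob, sent in the clear
B = pow(g, b, p)                     # Bob -> Alice, sent in the clear

# Each side raises the other's public value to its own secret;
# both arrive at the same shared key g^(ab) mod p.
assert pow(B, a, p) == pow(A, b, p)
```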


EL:
(2) significantly better than RSA (this generally means faster)

This seems a little bit simplistic...  Whatever way you cut it, RSA
will beat ECC on public key ops (encryption, signature verification...)
whereas ECC will beat RSA on private key ones (decryption, signing...)

The speed argument can be compelling or not depending on what you want
to do.  On standard PCs doing occasional crypto operations, both ECC
and RSA are plenty fast.  The speed issue is on small devices like
hand-helds or mobile phones, and on heavily loaded servers.

For instance with signatures, people might want to sign on slow client
devices which take 1 second with ECC but many seconds with RSA; and
then verify the signatures on servers fast enough for thousands per
second.  It is easier to upgrade your servers if needed than to
upgrade the users who aren't using your service because it is too slow.

One hears of crazy stuff like network setups which time out when you
try to do DH with 1024-bit RSA on some slow hand-helds, but work fine
when doing equivalent DH with 163-bit ECC.

In our work with Mailwatcher, servers talk to each other doing equal
numbers of encryptions and decryptions for many users.  Even according
to RSA Security's own numbers, ECC wins easily in this case!



EL:
(3) has seen a significant amount of analysis so that we can have
some reasonable confidence it's secure.

The FUD campaign on this point has been too successful.

Serious study of factoring and related stuff dates from the early 19th
century with people like Gauss.  Serious study of elliptic curves and
related stuff dates from... umm... the early 19th century with people
like... umm... Gauss.

Modern study of discrete-log based cryptosystems dates from the
invention of public-key systems in 1976.  Modern study of factoring
based cryptosystems dates from the invention of RSA in 1977.  ECC is
in the former family, although it dates from 1985.

The relative paucity of results on ECC is a good thing.  There has
been no progress on breaking ECC with properly chosen curves.  On the
contrary there is a negative result that discrete logs using generic
group operations do take exponential time.  The problem in this
area is with some people sailing too close to the wind and picking
curves with tonnes of useful properties to speed up their
cryptosystems (and, unfortunately, attacks on some of them).


For RSA, it was claimed in 1977 that factoring a certain 426-bit
number would take quadrillions of years.  This in spite of the fact
that a sub-exponential algorithm for factoring was already known.
Knowledge was pretty fuzzy about its runtime and practicality.  Some
widely-deployed big-budget systems were designed using RSA as low as
320 bits.  That's easily broken.

During the 1980's, the research community perfected the early
sub-exponential methods and the 426-bit number was broken in 1994 by a
team led by Arjen Lenstra (I was in it...)

There was also a new faster algorithm, the NFS.  Knowledge was pretty
fuzzy about its runtime and practicality.  During the 1990's, the
research community perfected it.  Then recently 512 bits was broken.

The current record, as of a few days ago, is 524 bits attained by Jens
Franke et al. when completing the factorization of 2^953+1 (there were
three other factors previously found by Paul Zimmerman, Torbjorn
Granlund and myself).  It would be feasible to break 600 bits or more
with some effort.


Things have been quiet on the new algorithms front for a few years.
But at Crypto last August, Dan Bernstein announced a new design for a
machine dedicated to NFS using asymptotically fast algorithms and
optimising memory, CPU power and amount of parallelism to minimize
asymptotic cost.  His conclusion, recently detailed in a paper, should
be a bombshell, but apparently everyone is asleep:

DB:
This reduction of total cost [...] means that a [approx 3d-digit]

RE: Welcome to the Internet, here's your private key

2002-02-05 Thread Arnold G. Reinhold

I'd argue that the RSA and DSA situations can be made equivalent if 
the card has some persistent memory. Some high quality randomness is 
needed at RSA key generation.  For the DSA case, use 256 bits of 
randomness at initialization to seed a PRNG using AES, say. Output 
from the PRNG could be then used to provide the nonces for DSA.  For 
extra credit, PRNG seed could be xor'd periodically with whatever 
randomness is available on chip.

The resulting DSA system requires about the same randomness at 
initialization as RSA. The additional vulnerability introduced 
requires breaking AES to exploit, even if no further randomness is 
available.  All things considered, I'd trust an AES PRNG more than a 
smart card RNG whose long term quality I cannot assess. Better to use 
both, of course.
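Arnold's construction is a counter-mode PRNG: seed it once with 256 bits at initialization, draw DSA nonces from it, and periodically stir in whatever on-chip randomness exists. The sketch below uses SHA-256 in counter mode as a stand-in for the AES-based PRNG he describes (AES is not in the Python standard library; the structure is the same):

```python
import hashlib

class CtrPrng:
    """Counter-mode PRNG seeded once with 256 bits of randomness.

    Stand-in for the AES-based construction described above: each
    output block is PRF(seed, counter), and the seed can be XORed
    with fresh entropy whenever any is available.
    """
    def __init__(self, seed32: bytes):
        assert len(seed32) == 32          # 256 bits at initialization
        self.seed = seed32
        self.counter = 0

    def next_nonce(self, qbits: int = 160) -> int:
        # One DSA nonce k per call, truncated to the subgroup size q.
        out = b""
        while len(out) * 8 < qbits:
            block = hashlib.sha256(
                self.seed + self.counter.to_bytes(16, "big")).digest()
            self.counter += 1
            out += block
        return int.from_bytes(out, "big") >> (len(out) * 8 - qbits)

    def stir(self, entropy: bytes):
        # "Extra credit": fold in whatever on-chip randomness exists.
        self.seed = bytes(a ^ b for a, b in
                          zip(self.seed, hashlib.sha256(entropy).digest()))

prng = CtrPrng(b"\x01" * 32)    # in practice: 256 bits from the key-gen RNG
k1, k2 = prng.next_nonce(), prng.next_nonce()
assert k1 != k2                 # nonces never repeat while the counter advances
```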

Arnold Reinhold



At 3:09 PM -0700 2/4/02, [EMAIL PROTECTED] wrote:
One could claim that one of the reasons for using RSA digital signatures
with smart cards rather than DSA or EC/DSA is the DSA & EC/DSA requirement
for quality random number generation as part of the signature process.

...


Cards with quality random numbers ... can

1) do on card key-gen
2) use DSA or EC/DSA
3) remove dependency on external source to include random number in message
to be signed.

DSA & EC/DSA, because they have a random number as part of the signing
process, preclude duplicate signatures on the same message ... multiple
messages with the same content & same exact signature is a replay. DSA &
EC/DSA doing multiple signings of the same content will always result in a
different signature value.

I've heard numbers on many of the 8bit smartcards ... power-cycle the card
each time it is asked to generate a random number & do random number
generation 65,000 times and look at results. For some significant
percentage of 8bit cards it isn't unusual to find 30 percent of the random
numbers duplicated.
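A birthday-bound estimate shows how damning those duplicate counts are: a uniform RNG with even 32 bits of state should almost never collide in 65,000 draws, so 30% duplicates implies only a handful of bits of real entropy. A rough sketch (the 65,000-draw figure is from the post above; the state sizes compared are illustrative):

```python
# Expected number of colliding pairs among `draws` samples from a
# uniform RNG with 2**bits possible outputs: about draws^2 / (2 * 2**bits)
# when that quantity is small (birthday approximation).
draws = 65_000

def expected_collisions(bits: int) -> float:
    return draws * (draws - 1) / (2 * 2**bits)

print(expected_collisions(32))   # ~0.49: duplicates should be rare
print(expected_collisions(16))   # ~32000: duplicates everywhere

# For ~30% of outputs to recur, the effective state space must be
# closer to 16-18 bits than to anything cryptographically useful.
```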


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



[Mojonation-users] MojoNation public network shutting down (fwd)

2002-02-05 Thread R. A. Hettinga


--- begin forwarded text


Status:  U
Date: Tue, 5 Feb 2002 10:51:25 +0100 (MET)
From: Eugene Leitl [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
cc: [EMAIL PROTECTED], [EMAIL PROTECTED],
   forkit! [EMAIL PROTECTED], [EMAIL PROTECTED]
Subject: [Mojonation-users] MojoNation public network shutting down
  (fwd)
Sender: [EMAIL PROTECTED]

-- Eugen* Leitl <a href="http://leitl.org">leitl</a>
__
ICBMTO: N48 04'14.8'' E11 36'41.2'' http://www.leitl.org
57F9CFD3: ED90 0433 EB74 E4A9 537F CFF5 86E7 629B 57F9 CFD3

-- Forwarded message --
Date: Mon, 04 Feb 2002 14:52:06 -0800
From: Jim McCoy [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject: [Mojonation-users] MojoNation public network shutting down

After more than a year of testing the public prototype for the MojoNation
technology platform we are shutting down the public network.  The MojoNation
technology will continue to be available via the soon-to-be-announced MNnet
project, about which more information will be made available at the CodeCon
conference.

It is expected that MNnet will remove several of the remaining centralized
features of the MojoNation technology and result in a somewhat simplified
version of the current system along with native UIs and other fun features.

More info will be made available over the next couple of weeks as details
are worked out.


-Jim




___
Mojonation-users mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/mojonation-users

--- end forwarded text


-- 
-
R. A. Hettinga mailto: [EMAIL PROTECTED]
The Internet Bearer Underwriting Corporation http://www.ibuc.com/
44 Farquhar Street, Boston, MA 02131 USA
... however it may deserve respect for its usefulness and antiquity,
[predicting the end of the world] has not been found agreeable to
experience. -- Edward Gibbon, 'Decline and Fall of the Roman Empire'

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: Cringely Gives KnowNow Some Unbelievable Free Press... (fwd)

2002-02-05 Thread Eric Rescorla

Although the headers and quoting have gotten munged, this
appears to be a reply to my message.

Eugene Leitl [EMAIL PROTECTED] writes:

 -- Eugen* Leitl <a href="http://leitl.org">leitl</a>
 __
 ICBMTO: N48 04'14.8'' E11 36'41.2'' http://www.leitl.org
 57F9CFD3: ED90 0433 EB74 E4A9 537F CFF5 86E7 629B 57F9 CFD3
 
 -- Forwarded message --
 Date: Tue,  5 Feb 2002 11:10:49 +0100 (CET)
 From: Robert Harley [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]
 Subject: Re: Cringely Gives KnowNow Some Unbelievable Free Press...
 
 Eugene Leitl wrote:
 If you want to see EC used you need to describe a specific algorithm
 which has the following three properties:
 
 (1) widely agreed to be unencumbered, [...]
 
 No problem.  Use a random secure curve over a binary field with a
 polynomial basis (or over a random prime field).  Do it in software
 using standard text-book algorithms.  Use a protocol that is
 in the clear such as plain Diffie-Hellman key exchange.
And this is how fast?

 EL:
 (2) significantly better than RSA (this generally means faster)
 
 This seems a little bit simplistic...  Whatever way you cut it, RSA
 will beat ECC on public key ops (encryption, signature verification...)
 whereas ECC will beat RSA on private key ones (decryption, signing...)
 
 The speed argument can be compelling or not depending on what you want
 to do.
It's the only real argument that ECC's got going for it. RSA can be
made arbitrarily fast on the public-key side by making the private-key
side arbitrarily slow.

 EL:
 Until someone does that, the cost of information in choosing an EC
 algorithm is simply too high to justify replacing RSA in most
 applications.
 
 Personally, I no longer trust RSA for long term security.
 
 This is public-key crypto, not symmetric, so a break of your RSA key
 means that all your encrypted traffic becomes readable rather than
 just one message.  E.g., if a few years ago you used 512-bit RSA to
 encrypt a will that was not to be read by anybody until you die,
 that's tough because it could be read today.  Doesn't matter that you
 moved to 768 bits and then 1024 in the meantime.
If you care about Perfect Forward Secrecy, you shouldn't be using
RSA at all. You should be using DH with a fresh key for each
exchange. The probability that in the next 50 years your key will
be compromised in some other way than factoring is sufficiently
high to motivate this tactic. (In my view, it's vastly higher
than that of your key being broken by factoring).


This message actually makes my point quite nicely.  I frequently see
long detailed arguments by ECC proponents about how wonderful ECC is
and how it should replace RSA. This is one such argument. That's not
what's needed.

For ECC to take off, someone will have to actually write it into
protocols. This requires someone to identify a specific ECC algorithm
that meets the properties that I laid out, and document those
properties with literature citations, performance numbers, securtiy
estimates, etc. That's what's needed before the COMSEC people will
feel comfortable adding ECC to their systems.

Until someone's willing to step up to the plate on that, we're not
going to see ECC deployment in standard protocols.

-Ekr

-- 
[Eric Rescorla   [EMAIL PROTECTED]]
http://www.rtfm.com/

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



URL for fips140.c

2002-02-05 Thread Greg Rose

At 09:46 AM 2/5/2002 -0500, Arnold G. Reinhold wrote:
I couldn't find it. Give me a hint?

Sorry, I should have been more specific:

http://people.qualcomm.com/ggr/QC/fips140.c goes straight to it.

Greg.

Greg Rose   INTERNET: [EMAIL PROTECTED]
Qualcomm Australia  VOICE:  +61-2-9817 4188   FAX: +61-2-9817 5199
Level 3, 230 Victoria Road,http://people.qualcomm.com/ggr/
Gladesville NSW 2111232B EC8F 44C6 C853 D68F  E107 E6BF CD2F 1081 A37C


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



RE: Welcome to the Internet, here's your private key

2002-02-05 Thread Bill Frantz

At 6:37 AM -0800 2/5/02, Arnold G. Reinhold wrote:
I'd argue that the RSA and DSA situations can be made equivalent if
the card has some persistent memory.

I expect you could initialize the random data in that memory during
manufacture with little loss of real security.  (If you are concerned about
the card's manufacturer, then you have bigger problems.  If anyone does,
the manufacturer has the necessary equipment to extract data from secret
parts of the card, install Trojans etc.)

Note that if the card generates its own keys, it needs the same kind of
memory to store the keys as it needs to store a random seed.

Cheers - Bill


-
Bill Frantz   | The principal effect of| Periwinkle -- Consulting
(408)356-8506 | DMCA/SDMI is to prevent| 16345 Englewood Ave.
[EMAIL PROTECTED] | fair use.  | Los Gatos, CA 95032, USA



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: Cringely Gives KnowNow Some Unbelievable Free Press... (fwd)

2002-02-05 Thread Bill Frantz

At 2:25 AM -0800 2/5/02, Eugene Leitl wrote:
-- Eugen* Leitl <a href="http://leitl.org">leitl</a>
__
ICBMTO: N48 04'14.8'' E11 36'41.2'' http://www.leitl.org
57F9CFD3: ED90 0433 EB74 E4A9 537F CFF5 86E7 629B 57F9 CFD3

-- Forwarded message --
Date: Tue,  5 Feb 2002 11:10:49 +0100 (CET)
From: Robert Harley [EMAIL PROTECTED]

...

This is public-key crypto, not symmetric, so a break of your RSA key
means that all your encrypted traffic becomes readable rather than
just one message.

IMHO, interactive protocols (e.g. certain modes of SSL/TLS) which are
subject to this attack should be retired.  Non-interactive protocols (e.g.
PGP email), are much more difficult to fix.

Cheers - Bill


-
Bill Frantz   | The principal effect of| Periwinkle -- Consulting
(408)356-8506 | DMCA/SDMI is to prevent| 16345 Englewood Ave.
[EMAIL PROTECTED] | fair use.  | Los Gatos, CA 95032, USA



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



RE: Welcome to the Internet, here's your private key

2002-02-05 Thread Kim-Ee Yeoh

On Tue, 5 Feb 2002, Greg Rose wrote:

 This forced me to instrument my FIPS-140 code to measure it. It takes 1.42 
 ms to run a test on a Sun Ultra at 250MHz (I know, this is an ancient 
 machine). It's all integer arithmetic, on short integers at that, except 
 for the chi-square poker test, which (currently) does some floating point 
 but could easily be recoded to do scaled, 32-bit integer (16 x 16 - 32 bit 
 multiplies, 16 of them, would be the bottleneck).

I took a brief look at your code, and one optimization you could do is to
make a single pass for both the monobit and poker tests.  If c_0, c_1,
..., c_15 are the frequency counts of nibbles, then the monobit count is
just the sum over all i's of c_i * w(i), where w(i) is the Hamming weight
of i.

Also the chi-square for the poker test could easily be done in pure
integer arithmetic by clearing out the denominator of 5000.  (Of course,
it helps when your embedded system, such as the one I work with, has a
machine word size of 32 bits.  Your mileage on specific smartcards may
vary.)
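Both optimisations are easy to check against a direct implementation. A sketch at FIPS 140-1 sizes (2,500 bytes = 20,000 bits): the monobit count falls out of the nibble frequencies via Hamming weights, and the poker statistic stays in integers once the denominator of 5000 is cleared from the chi-square bounds:

```python
import os

data = os.urandom(2500)              # 20,000 bits, the FIPS 140-1 sample size

# Single pass: frequency count of the 16 nibble values.
c = [0] * 16
for byte in data:
    c[byte >> 4] += 1
    c[byte & 0xF] += 1

# Monobit count from nibble counts: sum of c_i * HammingWeight(i).
monobit = sum(c[i] * bin(i).count("1") for i in range(16))
assert monobit == sum(bin(b).count("1") for b in data)   # matches direct count

# Poker test in pure integer arithmetic.  FIPS 140-1 requires
#     1.03 < (16/5000) * sum(c_i^2) - 5000 < 57.4
# so, clearing the denominator of 5000:
#     25_005_150 < 16 * sum(c_i^2) < 25_287_000
s = 16 * sum(ci * ci for ci in c)
print(25_005_150 < s < 25_287_000)   # very nearly always True for random data
```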


 The scariest thing, though... at first I put in an unkeyed RC4 generator 
 for the self-test data, but accidentally ran the FIPS test on a straight 
 counter output... and it passed (even version 1)! I'd always assumed that 
 something in the regularity of a counter would trigger it. Running through 
 the buffer, XORing consecutive bytes, makes it fail quite handily, but 
 might also have the undesirable effect of hiding a bias in the data, if 
 there was one. I'm thinking of suggesting to NIST that a stronger test 
 would be to run the test on the raw data, and then on the data after XORing 
 consecutive bytes... if it's really random stuff it should pass both, and 
 in the meantime this would catch all sorts of failures. Any comments?

Could you clarify what you mean by counter output?  Are we talking about
a sequence of consecutive 8-, 16-, or 32-bit numbers?  If so, FIPS will
detect and flunk such sequences.  I'm almost as certain that a
maximal-period LFSR would be similarly detected.  A runs test, especially,
isn't fooled so easily.

As it stands, the FIPS-140 RNG testsuite is a good compromise between
speed and validation efficacy.  FIPS-140 is a standard for crypto-module
validation, and so long as your tests for the RNG component of your
crypto-module are comparable or exceed those specified in the standard,
you're doing fine.  You are quite at liberty to perform the additional
tests you've suggested.


Kim-Ee


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: biometrics

2002-02-05 Thread bear



On Tue, 29 Jan 2002, Bill Frantz wrote:

What would be really nice is to be able to have the same PIN/password for
everything.  With frequent use, forgetting it would be less of a problem,
as would the temptation to write it down.  However, such a system would
require that the PIN/password be kept secret from the verifier (including
possibly untrusted hardware/software used to enter it.


You could, I suppose, create an algorithm that takes as inputs
your single PIN/password and the name of the entity you're
dealing with, and produces a daily use PIN/password for you
to use with that entity.

It wouldn't help much in the daily use arena -- you'd still
have to carry all the daily use PINs around in your head -
but in the scenario where you forget one, it could be used to
recreate it, and it would be a bit more secure than carrying
around the sheet of paper where your 20 PINs are all written
down.
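One concrete choice for such an algorithm is a keyed PRF over the entity name; the sketch below uses HMAC-SHA256 (an assumption on my part, as any keyed PRF would do) to turn one master password into per-entity PINs that can be recreated on demand:

```python
import hashlib
import hmac

def site_pin(master: str, site: str, digits: int = 6) -> str:
    """Derive a per-entity PIN from one master secret plus the entity name.

    HMAC-SHA256 is one concrete instance of the algorithm suggested
    above; the verifier never needs to learn the master password.
    """
    mac = hmac.new(master.encode(), site.encode(), hashlib.sha256).digest()
    return str(int.from_bytes(mac, "big") % 10**digits).zfill(digits)

# Same master, different entities -> unrelated daily-use PINs,
# and a forgotten PIN can be regenerated instead of written down.
assert site_pin("horse-staple", "bank") != site_pin("horse-staple", "video")
assert site_pin("horse-staple", "bank") == site_pin("horse-staple", "bank")
```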

Bear


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



RE: Welcome to the Internet, here's your private key

2002-02-05 Thread Greg Rose

At 03:48 PM 2/5/2002 -0600, Kim-Ee Yeoh wrote:
I took a brief look at your code, and one optimization you could do is to
make a single pass for both the monobit and poker tests.  If c_0, c_1,
..., c_15 are the frequency counts of nibbles, then the monobit count is
just the sum over all i's of c_i * w(i), where w(i) is the Hamming weight
of i.

Yep, that's a small and useful optimisation.

Also the chi-square for the poker test could easily be done in pure
integer arithmetic by clearing out the denominator of 5000.  (Of course,
it helps when your embedded system, such as the one I work with, has a
machine word size of 32 bits.  Your mileage on specific smartcards may
vary.)

I recognise this too.

My goal when I wrote the code was to cleanly implement the FIPS as it 
described the tests... which I think I did. Anyone who *truly* cares about 
optimising it for a smartcard for example, won't use C code to do it. As I 
said, the only nasty things that would remain at that point are the 16 
multiplies (16x16->32 bit integers), calculating the squares of the poker 
values... these are pretty much unavoidable. Given that, I don't understand 
why the FIPS specified the tests using floating point... *they* should have 
made this optimisation (as they did for the bounds on the runs test, which 
they initially got wrong anyway...)

  The scariest thing, though... at first I put in an unkeyed RC4 generator
  for the self-test data, but accidentally ran the FIPS test on a straight
  counter output... and it passed (even version 1)! I'd always assumed that
  something in the regularity of a counter would trigger it. Running through
  the buffer, XORing consecutive bytes, makes it fail quite handily, but
  might also have the undesirable effect of hiding a bias in the data, if
  there was one. I'm thinking of suggesting to NIST that a stronger test
  would be to run the test on the raw data, and then on the data after 
 XORing
  consecutive bytes... if it's really random stuff it should pass both, and
  in the meantime this would catch all sorts of failures. Any comments?

Could you clarify what you mean by counter output?  Are we talking about
a sequence of consecutive 8-, 16-, or 32-bit numbers?  If so, FIPS will
detect and flunk such sequences.

While priming the RC4 table, I accidentally filled the data buffer instead 
(D'oh!) with consecutive byte values 0x00, 0x01, ... 0xFF, 0x00, ...

This very much passes the FIPS 140 tests for randomness, despite being 
nothing like it:
9932 ones
Poker test: 317 0x0s
Poker test: 317 0x1s
Poker test: 317 0x2s
Poker test: 317 0x3s
Poker test: 316 0x4s
Poker test: 316 0x5s
Poker test: 316 0x6s
Poker test: 316 0x7s
Poker test: 316 0x8s
Poker test: 316 0x9s
Poker test: 316 0xAs
Poker test: 316 0xBs
Poker test: 304 0xCs
Poker test: 300 0xDs
Poker test: 300 0xEs
Poker test: 300 0xFs
Poker test ChiSquared parameter = 2.304
2497 runs of 1 0s
2529 runs of 1 1s
1252 runs of 2 0s
1255 runs of 2 1s
628 runs of 3 0s
621 runs of 3 1s
317 runs of 4 0s
307 runs of 4 1s
159 runs of 5 0s
152 runs of 5 1s
160 runs of 6 0s
149 runs of 6 1s

I, like you, thought it would flunk, but it doesn't.
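Greg's accident is easy to reproduce: 2,500 bytes of 0x00, 0x01, ..., 0xFF, repeating, contain exactly 9,932 one-bits, comfortably inside the FIPS 140-1 monobit window of 9,654-10,346, while XORing consecutive bytes (his proposed second pass) fails the same test decisively:

```python
# A plain byte counter passes the FIPS 140-1 monobit test, but XORing
# consecutive bytes exposes it immediately.
data = bytes(i & 0xFF for i in range(2500))     # 0x00..0xFF repeating

ones = sum(bin(b).count("1") for b in data)
print(ones)                                     # 9932, matching the post
assert 9654 < ones < 10346                      # monobit test: passes!

# Greg's suggested second pass: XOR consecutive bytes.  For a counter,
# b ^ (b+1) is mostly 0x01/0x03/0x07..., averaging ~2 one-bits per byte
# instead of the ~4 expected of random data.
xored = bytes(a ^ b for a, b in zip(data, data[1:]))
ones_x = sum(bin(b).count("1") for b in xored)
assert ones_x < 9654                            # monobit test: fails hard
```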

I'm almost as certain that a
maximal-period LFSR would be similarly detected.  A runs test, especially,
isn't fooled so easily.

An interesting question. One of the utilities I have lying around happens 
to do LFSRs... so I can answer this.

Up to LFSR length 8, there are too few runs of length 6 or greater, so they 
must fail. LFSR Length 9 fails the low end of the chi-squared test... too 
evenly distributed. (Note that the counter above only passes this because 
it cuts off in the middle of a byte count, and so skews away from the 
one-heavy end of the spectrum). Hmmm... a table is required. (I'm using the 
first polynomial specified in Schneier of each length, impulse sequence:
$ lfsr -n 2 16 5 3 2 | binbin | fips140)

Poly Length Description of failure
--- --
9   Poker test too regular
10  Poker test too regular
11  Poker test too regular
12  Passes test! (12, 6, 4, 1, 0)
13  Passes test! (13, 4, 3, 1, 0)
14  Passes test! (14, 5, 3, 1, 0)
15  multiple runs test failures (byte alignment too good?),
 but passes poker test
16  Passes test! (16, 5, 3, 2, 0)
17  Passes test! (17, 3, 0)

At this point I am detecting a pattern... So, I'm afraid it isn't true that 
it will pick up even these simple linear sequences. (An LFSR of length 12 
only generates 4095 bits, repeated about 5 times!) I find this less 
surprising, actually. LFSR output looks random in some more fundamental 
sense.
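The LFSRs in Greg's table are a few lines to reproduce. A Fibonacci LFSR over the primitive trinomial x^17 + x^3 + 1 (his "(17, 3, 0)" entry) visits all 2^17 - 1 nonzero states before repeating, which is why 20,000 bits of its output never show a period to the FIPS tests:

```python
def lfsr_period(taps, degree, start=1):
    """Count steps until a Fibonacci LFSR returns to its start state.

    `taps` are the polynomial exponents, e.g. (17, 3) for x^17 + x^3 + 1;
    a primitive polynomial gives the maximal period 2**degree - 1.
    """
    mask = (1 << degree) - 1
    state = start
    steps = 0
    while True:
        bit = 0
        for t in taps:
            bit ^= (state >> (t - 1)) & 1   # XOR the tapped stages
        state = ((state << 1) | bit) & mask
        steps += 1
        if state == start:
            return steps

assert lfsr_period((4, 1), 4) == 2**4 - 1     # x^4 + x + 1: period 15
assert lfsr_period((17, 3), 17) == 2**17 - 1  # maximal: 131,071 bits
```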

As it stands, the FIPS-140 RNG testsuite is a good compromise between
speed and validation efficacy.  FIPS-140 is a standard for crypto-module
validation, and so long as your tests for the RNG component of your
crypto-module are comparable or exceed those specified in the