Re: Upper limit?

2008-07-05 Thread Steven M. Bellovin
On Fri, 04 Jul 2008 20:46:13 -0700
Allen <[EMAIL PROTECTED]> wrote:

> Is there an upper limit on the number of RSA Public/Private 1024 bit 
> key pairs possible? If so what is the relationship of the number of 
> 1024 bit to the number of 2048 and 4096 bit key pairs?
> 
There are limits, but they're not particularly important.

I'll oversimplify.  Roughly speaking, a 1024-bit RSA public key is the
product of two 512-bit primes.  According to the Prime Number Theorem,
the number of primes < n is approximately n/log(n).  Actually, what we
need is the number of primes >2^511 and <2^512, but that correction
doesn't make much difference -- work through the math yourself to see
that.  Call the number of such primes P.

Now, we need two such primes.  There are therefore P^2 pairs, more than
2^1000.  The numbers are very much larger for 2048- and 4096-bit keys,
but I'll leave those as an exercise for the reader.
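
A quick sanity check of the arithmetic (a Python sketch; double
precision is more than enough for an order-of-magnitude estimate):

  from math import log, log2

  # Prime Number Theorem estimate of the number of 512-bit primes,
  # i.e. primes between 2^511 and 2^512.
  P = 2.0**512 / log(2.0**512) - 2.0**511 / log(2.0**511)

  print("log2(P)   ~ %.1f" % log2(P))        # about 502.5
  print("log2(P^2) ~ %.1f" % (2 * log2(P)))  # about 1005, i.e. more than 2^1000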

--Steve Bellovin, http://www.cs.columbia.edu/~smb



Re: Upper limit?

2008-07-05 Thread Martin James Cochran


On Jul 4, 2008, at 9:46 PM, Allen wrote:

Is there an upper limit on the number of RSA Public/Private 1024 bit  
key pairs possible? If so what is the relationship of the number of  
1024 bit to the number of 2048 and 4096 bit key pairs?


Using the prime number theorem you can get an estimate of the number
of such pairs.  The prime number theorem says that there are
asymptotically 2^{512}/ln(2^{512}) primes less than 2^{512}.  So there
are roughly 2^{512}/ln(2^{512}) - 2^{511}/ln(2^{511}) ~
2^{511}/ln(2^{512}) 512-bit primes.  Squaring this and dividing by two
gives the approximate number of pairs: 2^{1021}/(ln(2^{512}))^2.


Generalizing this to an n-bit RSA modulus gives 2^{n-3}/(ln(2^{n/2}))^2
expected pairs.
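
Evaluated in log2 form (to keep the numbers manageable), a small Python
sketch of that formula gives the rough counts for the common modulus
sizes:

  from math import log, log2

  def rsa_pairs_log2(n):
      # log2 of the estimate 2^(n-3) / (ln(2^(n/2)))^2
      return (n - 3) - 2 * log2((n / 2) * log(2))

  for n in (1024, 2048, 4096):
      print(n, "%.1f" % rsa_pairs_log2(n))
  # prints roughly 1004, 2026 and 4072, i.e. about 2^1004 pairs of
  # 512-bit primes, 2^2026 pairs of 1024-bit primes, and so on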


Of course, well-chosen RSA primes usually have some special properties  
so that (p-1) and (q-1) don't have too many small factors and so that  
(p-q) isn't smaller than 2^{n/4}, so the above figures might be off by  
a bit if you consider the resulting distribution.  But they should  
still be fairly close (maybe within a factor of ln(2^{n/2})?).
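
To make that concrete, a rough filter along those lines might look like
the sketch below; the smoothness bound and the 2^{n/4} cutoff are
purely illustrative choices, not taken from any standard.

  def strip_small_factors(m, bound=1000):
      # Divide out every factor below `bound`; the cofactor that remains
      # is a crude stand-in for "the large factor part" of m.
      for d in range(2, bound):
          while m % d == 0:
              m //= d
      return m

  def plausible_rsa_prime_pair(p, q, n_bits=1024):
      # Reject primes that are too close together ...
      if abs(p - q) <= 2 ** (n_bits // 4):
          return False
      # ... or whose p-1 / q-1 are dominated by small factors.
      return (strip_small_factors(p - 1).bit_length() > n_bits // 4 and
              strip_small_factors(q - 1).bit_length() > n_bits // 4)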


Martin



Re: Upper limit?

2008-07-05 Thread Paul Hoffman

At 8:46 PM -0700 7/4/08, Allen wrote:
Is there an upper limit on the number of RSA Public/Private 1024 bit 
key pairs possible? If so what is the relationship of the number of 
1024 bit to the number of 2048 and 4096 bit key pairs?


On a related note: why did you skip 1536 bits? There is nothing 
special about key lengths being an integral power of 2 bits long.


--Paul Hoffman, Director
--VPN Consortium



Re: Strength in Complexity?

2008-07-05 Thread Paul Hoffman

At 12:48 AM +1200 7/6/08, Peter Gutmann wrote:

Florian Weimer <[EMAIL PROTECTED]> writes:

* Peter Gutmann:
 [1] Show of hands, how many people here not directly involved with X.509 work
 knew that the spec required that all extensions in CA root certificates
 ("trust anchors" in recent X.509 jargon) be ignored by an implementation?
 So if you put in name constraints, key usage constraints, a policy
 identifier, etc, then a conforming implementation is supposed to look at
 them, throw them away, and proceed as if they weren't there?


Are you sure that the constraints are not supposed to be applied when
the root certificate is actually processed, after its signature has been
verified by the trust anchor?


Yup.  The app is supposed to read the cert, parse and process the extensions,
and then throw them away.


Peter: please turn down the hyperbole a bit. You forgot the word 
"may" between "and" and "then".



The text from the spec is:

  3.3.60 trust anchor: A trust anchor is a set of the following information in
  addition to the public key: algorithm identifier, public key parameters (if
  applicable), distinguished name of the holder of the associated private key
  (i.e., the subject CA) and optionally a validity period. The trust anchor
  may be provided in the form of a self-signed certificate. A trust anchor is
  trusted by a certificate using system and used for validating certificates
  in certification paths.

Rendered into English, that says "take the public key, owner name, and
validity period, and ignore everything else in the cert".


Wrong. There is no requirement to "ignore everything else in the 
cert". There is simply no requirement to use that material.



To pre-empt the inevitable "yes, but it doesn't explicitly say you can't
process the extensions if you want to": It also doesn't say you can't reformat
the user's hard drive when you see a cert, but the intent is that you don't do
anything not explicitly listed in the text above.


No offense, but if I wanted to know the intent of a group of people 
who make hard-to-read-and-harder-to-implement crypto standards, I 
would not ask you. You don't know their intent, Peter: you only know 
the output of the sausage-making process.


If I haven't made it clear: I agree with Peter that the spec should 
have clearly stated what one was supposed to do with the extensions. 
Where I think we disagree is that I would rather that the spec said 
explicitly to throw them away. I would rather have the semantics of 
"this is what I say my name is and this is what I say my public key 
is" quite separate from "this is what I think you should trust me to 
authenticate". This adds complexity (it takes two certs), but it also 
reduces complexity (it doesn't mandate binding policy to 
identification).



Luckily no sane implementation pays any attention to this nonsense :-).


We might agree here because I don't think there are many sane 
implementations of X.509.


--Paul Hoffman, Director
--VPN Consortium



Re: Strength in Complexity?

2008-07-05 Thread Arshad Noor

Florian Weimer wrote:

* Arshad Noor:


http://www.informationweek.com/shared/printableArticle.jhtml?articleID=208800937


On a more serious note, I think the criticism probably refers to the
fact that SKSML does not cryptographically enforce proper key
management.  If a participant turns bad (for instance, by storing key
material longer than permitted by the protocol), there's nothing in the
protocol that stops them.


Thank you for your comment, Florian.

I may be a little naive, but can a protocol itself enforce proper
key-management?  I can certainly see it facilitating the required
discipline, but I can't see how a protocol alone can enforce it.
Any examples you can cite where this has been done would be very
helpful.

The design paradigm we chose for EKMI was to have:

1) the centralized server be the focal point for defining policy;
2) the protocol carry the payload with its corresponding policy;
3) and the client library enforce the policy on client devices;

In some form or another, don't all cryptographic systems follow a
similar paradigm?

Arshad Noor
StrongAuth, Inc.

P.S. Companies deploying an EKMI must have an external process in
place to ensure their applications are using "verified" libraries
on the client devices, so their policies are not subverted.



Re: Strength in Complexity?

2008-07-05 Thread Florian Weimer
* Peter Gutmann:

> Florian Weimer <[EMAIL PROTECTED]> writes:
>>* Peter Gutmann:
>>> [1] Show of hands, how many people here not directly involved with X.509 work
>>> knew that the spec required that all extensions in CA root certificates
>>> ("trust anchors" in recent X.509 jargon) be ignored by an implementation?
>>> So if you put in name constraints, key usage constraints, a policy
>>> identifier, etc, then a conforming implementation is supposed to look at
>>> them, throw them away, and proceed as if they weren't there?
>>
>>Are you sure that the constraints are not supposed to be applied when
>>the root certificate is actually processed, after its signature has been
>>verified by the trust anchor?
>
> Yup.  The app is supposed to read the cert, parse and process the extensions,
> and then throw them away.  The text from the spec is:
>
>   3.3.60 trust anchor: A trust anchor is a set of the following information in
>   addition to the public key: algorithm identifier, public key parameters (if
>   applicable), distinguished name of the holder of the associated private key
>   (i.e., the subject CA) and optionally a validity period. The trust anchor
>   may be provided in the form of a self-signed certificate. A trust anchor is
>   trusted by a certificate using system and used for validating certificates
>   in certification paths.
>
> Rendered into English, that says "take the public key, owner name, and 
> validity period, and ignore everything else in the cert".

Let me rephrase my remark: The trust anchor is conceptually separate
from a root CA certificate.  It is only used to validate the CA
certificate.  Nothing in that section gives you permission to ignore
extensions on the CA certificate (skipping the first entry in the
certification path).  In addition, the trust anchor cannot be used
directly to verify certificates issued by the CA because the subject DN
does not match.  Ergo, the extensions on the CA certificate are still in
force.

> Luckily no sane implementation pays any attention to this nonsense :-).

I think your interpretation actually leads to a non-compliant
implementation.  Obviously, the wording of that section could be less
confusing.



Re: Strength in Complexity?

2008-07-05 Thread Peter Gutmann
Florian Weimer <[EMAIL PROTECTED]> writes:
>* Peter Gutmann:
>> [1] Show of hands, how many people here not directly involved with X.509 work
>> knew that the spec required that all extensions in CA root certificates
>> ("trust anchors" in recent X.509 jargon) be ignored by an implementation?
>> So if you put in name constraints, key usage constraints, a policy
>> identifier, etc, then a conforming implementation is supposed to look at
>> them, throw them away, and proceed as if they weren't there?
>
>Are you sure that the constraints are not supposed to be applied when
>the root certificate is actually processed, after its signature has been
>verified by the trust anchor?

Yup.  The app is supposed to read the cert, parse and process the extensions,
and then throw them away.  The text from the spec is:

  3.3.60 trust anchor: A trust anchor is a set of the following information in
  addition to the public key: algorithm identifier, public key parameters (if
  applicable), distinguished name of the holder of the associated private key
  (i.e., the subject CA) and optionally a validity period. The trust anchor
  may be provided in the form of a self-signed certificate. A trust anchor is
  trusted by a certificate using system and used for validating certificates
  in certification paths.

Rendered into English, that says "take the public key, owner name, and 
validity period, and ignore everything else in the cert".
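
In code terms, that reading amounts to keeping only those fields and
nothing else.  A sketch, assuming the pyca/cryptography Python package
and a hypothetical root.pem:

  from cryptography import x509

  with open("root.pem", "rb") as f:          # hypothetical file name
      cert = x509.load_pem_x509_certificate(f.read())

  # The fields 3.3.60 lists as making up a trust anchor:
  trust_anchor = {
      "subject_dn": cert.subject.rfc4514_string(),
      "public_key": cert.public_key(),
      "validity": (cert.not_valid_before, cert.not_valid_after),
  }
  # Under the reading above, nothing in cert.extensions (name constraints,
  # key usage, policy identifiers, ...) contributes to the anchor.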

To pre-empt the inevitable "yes, but it doesn't explicitly say you can't 
process the extensions if you want to": It also doesn't say you can't reformat 
the user's hard drive when you see a cert, but the intent is that you don't do 
anything not explicitly listed in the text above.  One of the known problems 
with this is that any cert that's marked as trusted now becomes a root CA cert 
because of the requirement to ignore the fact that it isn't a root CA cert.

Luckily no sane implementation pays any attention to this nonsense :-).

Peter.



Re: ITU-T recommendations for X.509v3 certificates

2008-07-05 Thread Peter Gutmann
Florian Weimer <[EMAIL PROTECTED]> writes:
>* Peter Gutmann:
>>>Or is it unreasonable to expect that the specs match what is actually needed
>>>for interoperability with existing implementations (mostly in the TLS, S/MIME
>>>area)?
>>
>> There is very little correspondence between PKI specs and reality.
>
>I should have written that my main goal was to extract the public key
>material, and perhaps the validity period.  I want to use the
>certificates as interoperable public key containers, 

That's the best way to use them.  For one thing it doesn't create any mistaken 
impression that setting a particular extension will have any useful effect 
when the software at the other end sees it :-).

Peter.



Upper limit?

2008-07-05 Thread Allen
Is there an upper limit on the number of RSA Public/Private 1024 bit 
key pairs possible? If so what is the relationship of the number of 
1024 bit to the number of 2048 and 4096 bit key pairs?


Thanks,

Allen



Re: Strength in Complexity?

2008-07-05 Thread Florian Weimer
* Arshad Noor:

> The author of an article that appeared in InformationWeek this week
> (June 30, 2008) on Enterprise Key Management Infrastructure (EKMI):
>
> http://www.informationweek.com/shared/printableArticle.jhtml?articleID=208800937
>
> states the following:
>
> "There are, of course, obstacles that must still be overcome by EKMI
> proponents. For example, the proposed components are somewhat simple
> by design, which concerns some encryption purists who prefer more
> complex protocols, on the logic that they're more difficult to break
> into."

First of all, a simple SKSML request for a symmetric key is a whopping
77 lines of SOAP/WSS/whatever XML; the server response is 62 lines even
without the container.  If this is not enough to make every complexity
fanboy happy, I don't know what can do the trick.

On a more serious note, I think the criticism probably refers to the
fact that SKSML does not cryptographically enforce proper key
management.  If a participant turns bad (for instance, by storing key
material longer than permitted by the protocol), there's nothing in the
protocol that stops them.



Re: Strength in Complexity?

2008-07-05 Thread Florian Weimer
* Peter Gutmann:

> [1] Show of hands, how many people here not directly involved with X.509 work
> knew that the spec required that all extensions in CA root certificates
> ("trust anchors" in recent X.509 jargon) be ignored by an implementation?
> So if you put in name constraints, key usage constraints, a policy
> identifier, etc, then a conforming implementation is supposed to look at
> them, throw them away, and proceed as if they weren't there?

Are you sure that the constraints are not supposed to be applied when
the root certificate is actually processed, after its signature has been
verified by the trust anchor?



Re: ITU-T recommendations for X.509v3 certificates

2008-07-05 Thread Florian Weimer
* Peter Gutmann:

>>Or is it unreasonable to expect that the specs match what is actually needed
>>for interoperability with existing implementations (mostly in the TLS, S/MIME
>>area)?
>
> There is very little correspondence between PKI specs and reality.

I should have written that my main goal was to extract the public key
material, and perhaps the validity period.  I want to use the
certificates as interoperable public key containers, mainly in order to
be able to rely on proven TLS implementations for encryption and
authentication.
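
A minimal sketch of that kind of use, assuming a recent
pyca/cryptography Python package and a hypothetical cert.pem (any X.509
library with equivalent calls would do):

  from cryptography import x509
  from cryptography.hazmat.primitives import serialization

  # Treat the certificate purely as a public-key container: read the key
  # and the validity period, ignore everything else.
  with open("cert.pem", "rb") as f:
      cert = x509.load_pem_x509_certificate(f.read())

  key_pem = cert.public_key().public_bytes(
      serialization.Encoding.PEM,
      serialization.PublicFormat.SubjectPublicKeyInfo,
  )
  print(key_pem.decode())
  print("valid from", cert.not_valid_before, "until", cert.not_valid_after)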

> The brokenness in X.509 implementations creates a self-sustaining cycle in
> which applications that accept certs are more or less oblivious to anything
> that's in the cert (beyond basic stuff like correct formatting and encoding,
> and so on), so you can get away with almost anything (and then in turn because
> apps will accept anything, cert creators can create arbitrarily broken certs
> without being caught out by it, so the cycle is self-sustaining).

I guess parsing X.509 certs to derive further semantic content is
comparable to mail header parsing.  That is a futile exercise, too, but
sometimes unavoidable (for finding spam injection points, for instance).

But to be honest, I really don't see the point of extracting further
data from the certificates.  I can't reach OCSP servers and CRL
distribution points anyway because they are firewalled off.  I still
need to map a DN to some application-specific entity, and I need to
grant specific capabilities to it because I don't want to grant blanket
permission to the CAs involved--but this means I can directly bind this
metadata to the certificate; using the DN instead does not really
simplify set-up.  The lack of indirection makes key rollover more
difficult, granted, but you don't have to deal with broken random number
generators every other day, so I'm not sure if this is such a bad
trade-off.

> If you want to be really lazy, use an X.509v1 cert where you don't even need
> to bother with extensions.  A downside (?) of this is that some applications
> will treat it as a CA root cert.

I've got a couple of X.509v1 certs with extensions in production use,
which are a bit difficult to phase out. 8-( Turns out that this is not
so interoperable after all.



Secure voice?

2008-07-05 Thread Allen

Interesting tidbit:

http://www.epaynews.com/index.cgi?survey=&ref=browse&f=view&id=121516308313743148197&block=

"Nick Ogden, a Briton who launched one of the world's first 
e-commerce processors in 1994, has developed a system for 
voice-signed financial transactions. The Voice Transact platform was 
developed by Ogden's Voice Commerce Group in partnership with U.S. 
speech software firm Nuance Communications."


Best,

Allen



Re: German banks liable for phishing (really: keylogging) attacks

2008-07-05 Thread Florian Weimer
* Stephan Neuhaus:

> This article: http://www.spiegel.de/wirtschaft/0,1518,563606,00.html
> (sorry, German only) describes a judgment made by a German district
> court which says that banks are liable for damages due to phishing
> attacks.

"District court" may be a bit misleading, it's the entry-level court for
this particular type of dispute, at the lowest place in the hierarchy.

> In the case in question, a customer was the victim of a
> keylogger even though he had the latest anti-virus software installed,

The "latest" part is not clear.  I'm also puzzled that forensics could
not recover the actual malware.

(A keylogger alone is not quite good enough--you need to disrupt
transmission of the one-time password to the bank's server if you want
to use the password later on.  OTOH, the disruption component does
not necessarily appear in AV descriptions.)

> and lost 4000 Euro. The court ruled that the bank was liable because
> the remittance in question had demonstrably not been made by the
> customer and therefore the bank had to take the risk.

Well, the open question is not whether the bank has to take the risk
(after all, the transaction has been successfully disputed, even before
the case went to court), but if the customer was negligent and needs to
share some of the damage.

For instance, if a computer takes 15 minutes to boot, constantly
displays pop-up ads, and sporadically shows error messages during
browsing, I would hope that it's reasonable to assume that the machine
is not safe for on-line banking--no matter what the anti-virus says
about the state of the machine.



Re: WoW security: now better than most banks.

2008-07-05 Thread Anne & Lynn Wheeler

Perry E. Metzger wrote:

My bank doesn't provide any sort of authentication for logging in to
bank accounts other than passwords. However, Blizzard now allows you
to get a one time password keychain frob to log in to your World of
Warcraft account.


   


post in thread here a yr ago (1jul07) about financial institutions attempting
some (disastrous) deployments in the 99/00 time-frame ... and then instead of
taking blame for deployment problems ... there was quickly spreading opinion
that hardware tokens weren't practical in the consumer market place
http://www.garlic.com/~lynn/aadsm27.htm#34 The bank fraud blame game

as noted in another post ... the disastrous failures were somewhat a case of
institutional knowledge not permeating different parts of the organizations.

banking conferences in the mid-90s were attributing the existing online
banking migration to the internet in large part to significant customer
support problems with serial port modems (mostly with the serial port part).
http://www.garlic.com/~lynn/aadsm27.htm#38 The bank fraud blame game

the point being that even if a little bit of the experience from the earlier
online banking programs had carried over into the later hardware token
deployments ... many of the deployment problems could have been averted.

In any case, the claim could be made that the industry is still attempting
to recover from those disasters.

a couple other posts on the same subject in other threads:
http://www.garlic.com/~lynn/2007n.html#65 Poll: oldest computer thing you still use
http://www.garlic.com/~lynn/2007t.html#22 'Man in the browser' is new threat to online banking
http://www.garlic.com/~lynn/2007u.html#11 Public Computers
