Re: ComSign Root Inclusion Request

2009-02-26 Thread Kyle Hamilton
This is mostly off-topic, and relates primarily to one of my pet
peeves regarding everything cryptography-oriented on the Internet
today.  I also know that this is not the correct venue to try to make
any reforms.  However, since Mr. Ross has stated his view on the
topic, I feel that I must state mine.

My view is actually one more of administrative convenience.  One
reason that everything on the Internet has worked well up to now is
that the admins can actually look at the data streams coming in and
out, and figure out what's going on, and what piece of data is being
obtained from where.  (Look at SMTP, or ESMTP, or even the diagnostic
value of HTTP headers for an example.)

At this point, requiring additional tools (such as OpenSSL, libpkix,
NSS, or any others) to figure out what a given datastream is
simply obscures the ability to troubleshoot.  It's possible to
identify an X.509 certificate (since it encodes strings in ways that
can be simply read by standard ASCII printable-character scanning) by
looking for Subject lines that include S=, OU=, O=, C=, and so on.
It's not as easily possible to determine that a given data stream is
in actuality a CRL.
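That distinction can be sketched in a few lines of shell. This is a rough heuristic, not a parser; `classify_blob` is a hypothetical helper name, and `head -c`/`od` from common GNU/BSD userlands are assumed. PEM data announces itself with an armor header, while DER data typically begins with the ASN.1 SEQUENCE tag, byte 0x30.

```shell
# Rough heuristic (hypothetical helper, not a real parser): PEM armor
# starts with "-----BEGIN", while a DER-encoded certificate or CRL
# starts with the ASN.1 SEQUENCE tag, byte 0x30.
classify_blob() {
    case "$(head -c 10 "$1")" in
        -----BEGIN*) echo PEM ;;
        *)
            # Print the first byte of the file as two hex digits.
            byte=$(od -An -tx1 -N1 "$1" | tr -d ' \n')
            if [ "$byte" = "30" ]; then echo DER; else echo unknown; fi
            ;;
    esac
}
```

For the CRLs discussed in this thread, such a check would report PEM rather than the DER that importers expect.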

If you look at the DER and BER encodings, you will see that they are
designed to minimize the number of bits used to encode any given data
structure.  If you look at the definition for XER (X.693), it includes
the following paragraph:

[quoted from X.693 (12/2001) section A.3]
The length of this encoding in BASIC-XER is 653 octets ignoring all
white-space.  For comparison, the same PersonnelRecord value encoded
with the UNALIGNED variant of PER (see ITU-T Rec. X.691 | ISO/IEC
8825-2) is 84 octets, with the ALIGNED variant of PER it is 94 octets,
with BER (see ITU-T Rec. X.690 | ISO/IEC 8825-1) using the definite
length form it is a minimum of 136 octets, and with BER using the
indefinite length form it is a minimum of 161 octets.
[end quote]

I understand that memory-constrained devices would do well to have
less data to process; however, I don't particularly think that
obscuring the data sent by the CA is the way to go.  If necessary,
code the gateway that the memory-constrained devices use (I'm thinking
mobile phones here, primarily, though this means that I know that I am
ignoring other classes of memory-constrained devices that would not
necessarily use a gateway) to de-base64 the data, so that at the very
least the type of data can be identified without having to run it
through 'file' with its magic number structures -- especially since I
haven't seen a version of the 'magic' file that can properly identify
DER-encoded data as such.
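The de-base64 step such a gateway would perform amounts to stripping the armor lines and decoding the remaining body. A minimal sketch follows; `pem_to_der` is an illustrative name, and GNU coreutils `base64 -d` is assumed:

```shell
# Minimal sketch of the de-base64 step: drop the -----BEGIN/-----END
# armor lines and decode the remaining base64 body back to binary DER.
# pem_to_der is an illustrative name; GNU coreutils base64 is assumed.
pem_to_der() {
    grep -v -- '-----' "$1" | base64 -d > "$2"
}
```

The output is the raw DER structure, which can then be identified by its leading ASN.1 tag bytes rather than by armor headers.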

As to the current case, this CA in question is not generating improper
certificates.  It is generating proper CRLs, and it is simply encoding
and transmitting them as PEM-encoded DER-encoded CRL structures when
RFC5280 (which, by the way, I've been repeatedly told that NSS does
*NOT* comply with) states that they must be sent as DER-encoded.

I have asked why NSS insists on DER-encoded CRLs and throws error
e009 when the received data is PEM-encoded.  I have not received a
specific answer to my query: is "-----BEGIN X509 CRL-----" a valid
DER sequence?  If it is not, I would ask whether the received data could
be run through a base64 decoder and processing reattempted before
throwing the error.

(Honestly, having NSS do base64 decoding of anything it's handed if it
fails on initial import would go a LONG way to increasing the
usability of X.509 structures within common-use cryptography -- I know
certain software requires PEM-encoded certificates for import, and
this software that we're discussing now requires non-PEM-encoded
certificates for import.  This necessitates providing multiple links
to multiple formats of a root certificate, for example, and relying on
the user to do the bookkeeping that the computer is much more suited
for.  The user should be involved at the point of deciding whether to
trust and what to trust -- not how to get the data into the software
before the trust decision can be made in the first place.)

-Kyle H

2009/2/25 David E. Ross nob...@nowhere.not:
 On 2/25/2009 2:04 PM, Kyle Hamilton wrote:
 Postel's first rule of interoperability: be liberal in what you
 accept, be conservative in what you send.

 Which RFC requires which?  (I had read somewhere, for example, that
 wildcard certificates must be handled by HTTP over TLS servers in a
 particular way -- it turns out that it wasn't part of PKIX, as I had
 thought, but rather an Informational RFC regarding HTTP over TLS.)

 -Kyle H

 On Wed, Feb 25, 2009 at 1:57 PM, Nelson B Bolyard nel...@bolyard.me wrote:
 Kyle Hamilton wrote, On 2009-02-25 13:56:
 This is going to sound rather stupid of me, but I'm going to ask this 
 anyway:
 Why is Firefox insisting on a specific encoding of the data, rather
 than being flexible to alternate, unconfusable, common encodings?
 The RFCs require conforming CAs to send binary DER CRLs.

 In the case of 

Re: ComSign Root Inclusion Request

2009-02-26 Thread Kyle Hamilton
2009/2/25 Eddy Nigg eddy_n...@startcom.org:

 Or in other words - and let's put it a bit more mildly - they certainly never 
 tested their CRLs, at least not with the software this group cares about.

 But didn't Kyle say the CRLs are empty anyway (no revocations)? I couldn't 
 find any records either. This doesn't sound quite right. More investigations 
 needed here IMO. Review is due at the weekend...

There's a potentially problematic practice here: the long time
period between CRL issuance.  I'm seeing issuance dates of October 6,
2008, with the next updates expected on April 4, 2009.  I expect
this is 180 days, though I don't feel like counting through my
calendar to verify that.

Neither of the CRLs shows any currently-revoked certificates.  This is
NOT necessarily a failure of the CA's CRL mechanism, though, since
they could easily have had no unexpired certificates that were
revoked at the time the CRL was generated.

Since I'm much more a guru with openssl than with NSS, I'll just post
its output regarding the CRLs:

ComSignCA.crl:
KyleMac:comsign kyanha$ openssl crl -inform PEM -noout -text -in ComSignCA.crl
Certificate Revocation List (CRL):
Version 2 (0x1)
Signature Algorithm: sha1WithRSAEncryption
Issuer: /CN=ComSign CA/O=ComSign/C=IL
Last Update: Oct  6 13:18:54 2008 GMT
Next Update: Apr  4 13:18:54 2009 GMT
CRL extensions:
X509v3 Authority Key Identifier:

keyid:4B:01:9B:3E:56:1A:65:36:76:CB:7B:97:AA:92:05:EE:32:E7:28:31

X509v3 CRL Number:
9
No Revoked Certificates.
Signature Algorithm: sha1WithRSAEncryption
82:3f:d6:08:0c:38:ed:6f:9d:0e:86:b6:c4:b6:ef:09:7a:3b:
0a:08:00:e2:db:77:95:58:bb:8e:ad:8d:7e:78:76:0b:27:d7:
1a:9f:52:52:12:c7:c7:d8:a6:57:e7:8a:23:44:2b:3f:2d:a9:
2b:44:15:ec:c1:ba:ff:3f:93:9d:93:f2:47:bf:a2:9f:9d:8f:
5e:c6:2f:ec:1a:49:ff:94:e5:f9:80:61:2b:43:b7:66:95:f6:
a5:16:35:ff:7e:21:ee:52:2e:ce:e2:20:81:5b:b0:7a:df:ad:
31:d4:00:35:75:8a:92:3f:3f:fd:0e:8d:b0:48:3a:d2:be:82:
e7:30:22:45:92:ef:98:b0:c4:6f:17:57:d3:94:6e:83:9b:be:
f0:82:1f:b8:0a:9f:dc:ef:08:18:ef:36:50:d8:2e:1b:b5:8a:
e0:6d:4c:09:5f:29:7d:5b:b6:dc:6f:2c:8a:cd:11:f4:7d:ec:
5a:7a:12:20:f5:af:da:d8:6e:11:9d:8d:02:7e:4d:9e:9a:dd:
54:99:53:01:ac:b2:08:c8:ff:2a:66:ae:ed:53:5a:18:e6:56:
58:2d:89:5b:c5:ec:82:c8:b5:76:67:fe:64:af:5b:a6:53:87:
46:66:74:18:6b:bd:21:b2:f2:57:8a:88:9f:f9:78:17:e5:7a:
bb:a9:d1:94

ComSignSecuredCA.crl:
KyleMac:comsign kyanha$ openssl crl -inform PEM -noout -text -in
ComSignSecuredCA.crl
Certificate Revocation List (CRL):
Version 2 (0x1)
Signature Algorithm: sha1WithRSAEncryption
Issuer: /CN=ComSign Secured CA/O=ComSign/C=IL
Last Update: Oct  6 13:20:11 2008 GMT
Next Update: Apr  4 13:20:11 2009 GMT
CRL extensions:
X509v3 Authority Key Identifier:

keyid:C1:4B:ED:70:B6:F7:3E:7C:00:3B:00:8F:C7:3E:0E:45:9F:1E:5D:EC

X509v3 CRL Number:
8
No Revoked Certificates.
Signature Algorithm: sha1WithRSAEncryption
54:c3:34:37:1f:f2:2c:74:90:bb:96:ed:f0:d1:5b:ef:95:59:
c8:9d:2e:e0:b6:a4:c4:7b:93:ca:df:9a:33:4a:f8:83:77:79:
60:67:1b:8a:6c:b8:d1:7f:6d:2f:1f:c1:22:db:c3:a9:e3:17:
0f:34:9c:76:58:14:7c:b7:90:e7:fe:af:7e:98:53:5e:06:5a:
15:df:a9:92:e4:ef:e2:f4:e5:7d:75:f0:75:07:69:b9:fe:c5:
ab:f4:ca:e4:5e:7a:ab:69:8f:f2:df:53:b6:07:5c:b1:d0:99:
6f:59:51:7f:46:14:31:86:e8:4c:da:8b:07:f1:c4:0d:8b:e0:
f0:b7:c5:50:e5:35:de:62:b8:14:4d:b1:b2:3a:06:91:2d:5c:
e3:9c:83:60:e7:0f:a3:8e:7b:ea:23:35:6d:d3:5c:47:5f:75:
b7:b2:40:8e:29:48:7a:34:2d:18:5e:38:77:6c:de:56:67:21:
05:fd:97:72:3c:af:1e:09:32:f1:8e:2b:6f:32:3a:af:6d:18:
71:a2:50:19:95:9b:28:93:27:0a:d4:61:b2:4b:e8:5d:10:05:
f2:40:ab:31:39:b9:dd:5e:b3:f3:4a:38:5c:5e:61:1f:f2:2c:
22:ea:41:83:be:52:fe:00:55:1f:37:95:10:66:b4:42:ad:82:
0e:f3:32:29

-Kyle H
--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: ComSign Root Inclusion Request

2009-02-26 Thread Jean-Marc Desperrier

Kyle Hamilton wrote:

[...]  this CA in question is not generating improper
certificates.  It is generating proper CRLs, and it is simply encoding
and transmitting them as PEM-encoded DER-encoded CRL structures when
RFC5280 (which, by the way, I've been repeatedly told that NSS does
*NOT* comply with) states that they must be sent as DER-encoded.


It does not *fully* support RFC 3280, but I think everything it supports is 
RFC 3280 compatible; it also seems to me that in Firefox 3 it 
supports considerably more of RFC 3280 than OpenSSL does.

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: ComSign Root Inclusion Request

2009-02-26 Thread Jean-Marc Desperrier

Nelson B Bolyard wrote:

Kathleen Wilson wrote, On 2009-02-24 12:21:


* CRL issue: Current CRLs result in the e009 error code when
downloading into Firefox. ComSign has removed the critical flag from
the CRL, and the new CRLs will be generated in April.


Was that with FF 2?   FF 3 should not be showing hexadecimal error numbers.
  I will be very upset with PSM if it is still showing hex
error numbers in FF 3.x!!


With FF 3.2a1pre latest nightly the result of dropping the URL 
http://fedir.comsign.co.il/crl/ComSignSecuredCA.crl on a browser window is :


The application cannot import the Certificate Revocation List (CRL).
Error Importing CRL to local Database. Error Code:e009
Please ask your system administrator for assistance.

What should it show instead, where's the bug number ?
--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Return of i18n attacks with the help of wildcard certificates

2009-02-26 Thread Jean-Marc Desperrier

Eddy Nigg wrote:

On 02/25/2009 08:31 PM, Gervase Markham:

On 23/02/09 23:54, Eddy Nigg wrote:
[...]

Only CAs are relevant if at all. You don't expect that 200 domain names
were registered by going through anti-spoofing checking and measures, do
you?!

[...]


Ouch, sorry! That should have been 200 *million* domain names were
registered by going through some anti-spoofing checking and measures...


OTOH domain spoofing is dangerous *even* when there's no certificate 
involved, so it makes sense to require solving it at the registrar/DNS 
level, and not at the CA level.


But you are right to point out that the volume of domain names involved 
makes unrealistic any procedure that is not fully automated.


So I think Mozilla should require that the procedure be fully 
automated, and not accept any solution that requires human 
intervention to approve requests, even if only for a portion of them.

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: ComSign Root Inclusion Request

2009-02-26 Thread Jean-Marc Desperrier

Jean-Marc Desperrier wrote:

[...]
With FF 3.2a1pre latest nightly the result of dropping the URL
http://fedir.comsign.co.il/crl/ComSignSecuredCA.crl on a browser window
is :

The application cannot import the Certificate Revocation List (CRL).
Error Importing CRL to local Database. Error Code:e009
Please ask your system administrator for assistance.

What should it show instead, where's the bug number ?


In my opinion, the right bug is bug 379298.

bug 107491 used to be the bug about that, but it has changed into a meta bug 
since "Patch v9 - netError.dtd in dom and browser" and comment 82.


I think it would have been better to create a new meta bug, and to do 
the wording and nssFailure changes after that on separate bugs blocking 
that meta bug instead.

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Return of i18n attacks with the help of wildcard certificates

2009-02-26 Thread Jean-Marc Desperrier

Paul Hoffman wrote:

At 7:09 AM +0100 2/24/09, Kaspar Brand wrote:

Kyle Hamilton wrote:

Removal of support for wildcards can't be done without PKIX action, if
one wants to claim conformance to RFC 3280/5280.

Huh? Both these RFCs completely step out of the way when it comes to
wildcard certificates - just read the last paragraph of section
4.2.1.7/4.2.1.6. PKIX never did wildcards in its RFCs.


Which says:
Finally, the semantics of subject alternative names that include
wildcard characters (e.g., as a placeholder for a set of names) are
not addressed by this specification.  Applications with specific
requirements MAY use such names, but they must define the semantics.

At 10:50 PM -0800 2/23/09, Kyle Hamilton wrote:

RFC 2818 (HTTP Over TLS), section 3.1.


RFC 2818 is Informational, not Standards Track. Having said that, it is also 
widely implemented, and is the main reason that the paragraph above is in the 
PKIX spec.


Just one thing : The use of a wildcard certificate was a misleading red 
herring in the implementation of the attack.


What's truly broken is that the current i18n attack protection relies on 
the checking done by the registrar/IDN, and that the registrar/IDN can 
only check the second-level domain name component.


Once they have obtained their domain name, attackers can freely use the 
third-level domain name component to implement any i18n attack they want, 
even if no wildcard certificate is authorized.


This is not to say that wildcard certificates are not bad, evil, or 
anything else, but that nothing truly new has been brought about by this 
attack.


So talk about wildcard certificate all you want, but this is a separate 
discussion from the discussion about the solution for this new i18n attack.
And the solution for it will not be wildcard certificate related, will 
not be easy or obvious, and so needs to be discussed as widely as possible.
Also, there will be no crypto involved in the solution, as it's not 
acceptable to just leave ordinary DNS users out in the cold 
with regard to the attack. So it needs to be discussed in the security 
group, not crypto.

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: ComSign Root Inclusion Request

2009-02-26 Thread Ian G

On 25/2/09 23:28, Nelson B Bolyard wrote:

Kyle Hamilton wrote, On 2009-02-25 14:04:

Postel's first rule of interoperability: be liberal in what you
accept, be conservative in what you send.


Yeah.  Lots of nasty Internet vulnerabilities have resulted from applying
that to crypto protocols and formats.  I know, I've had to fix 'em!



Agreed.  We don't really like Postel's rule in security work.  For us it 
is more like: be precise about what you send, and precise about what 
you accept.


iang
--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: ComSign Root Inclusion Request

2009-02-26 Thread stefan . claesson
On Feb 26, 3:55 pm, Eddy Nigg eddy_n...@startcom.org wrote:
 On 02/26/2009 06:18 AM, Eddy Nigg:





  On 02/26/2009 05:24 AM, David E. Ross:

  In the case of secure browsing at authenticated Web sites, I want to be
  conservative in what I accept. If a CA is generating certificates that
  do not comply with accepted RFCs, what else is that CA doing wrong? In
  other words, if a CA sends CRLs that are not binary DER, that should be
  a red flag that the CA might not be trustworthy in other respects.

  Or in other words - and let's put it a bit more mildly - they certainly
  never tested their CRLs, at least not with the software this group cares
  about.

  But didn't Kyle say the CRLs are empty anyway (no revocations)? I
  couldn't find any records either. This doesn't sound quite right. More
  investigations needed here IMO. Review is due at the weekend...

 Right now I found a few CRLs apparently intended for EE certs at
 http://fedir.comsign.co.il/crl/ServerCA.crl and
 http://fedir.comsign.co.il/crl/corporate.crl

 Those are DER encoded, the other ones are apparently for their own CAs
 (e.g. suicide notes) which perhaps isn't relevant anyway. Not sure...

 --
 Regards

 Signer: Eddy Nigg, StartCom Ltd.
 Jabber: start...@startcom.org
 Blog:  https://blog.startcom.org

The CRLs that you have problems with are generated manually through
our offline CA (RSA Certificate Manager). When generating manually, you
just copy the CRL into Notepad and save it as a .crl file.

The above CRLs are from our online intermediates, which are generated
automatically (also RSA CM).
That is probably the difference.
We will generate new CRLs the proper way as soon as we can.
--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: ComSign Root Inclusion Request

2009-02-26 Thread Eddy Nigg

On 02/26/2009 06:18 AM, Eddy Nigg:

On 02/26/2009 05:24 AM, David E. Ross:


In the case of secure browsing at authenticated Web sites, I want to be
conservative in what I accept. If a CA is generating certificates that
do not comply with accepted RFCs, what else is that CA doing wrong? In
other words, if a CA sends CRLs that are not binary DER, that should be
a red flag that the CA might not be trustworthy in other respects.



Or in other words - and let's put it a bit more mildly - they certainly
never tested their CRLs, at least not with the software this group cares
about.

But didn't Kyle say the CRLs are empty anyway (no revocations)? I
couldn't find any records either. This doesn't sound quite right. More
investigations needed here IMO. Review is due at the weekend...




Right now I found a few CRLs apparently intended for EE certs at 
http://fedir.comsign.co.il/crl/ServerCA.crl and 
http://fedir.comsign.co.il/crl/corporate.crl


Those are DER encoded, the other ones are apparently for their own CAs 
(e.g. suicide notes) which perhaps isn't relevant anyway. Not sure...


--
Regards

Signer: Eddy Nigg, StartCom Ltd.
Jabber: start...@startcom.org
Blog:   https://blog.startcom.org
--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: ComSign Root Inclusion Request

2009-02-26 Thread Eddy Nigg

On 02/26/2009 04:18 PM, stefan.claes...@gmail.com:

The CRLs that you have problems with are generated manually through
our offline CA (RSA Certificate Manager). When generating manually, you
just copy
the CRL into Notepad and save it as a .crl file.



It's very easy to convert them to DER afterward. You can do it even now. 
Are you using OpenSSL or another tool?


--
Regards

Signer: Eddy Nigg, StartCom Ltd.
Jabber: start...@startcom.org
Blog:   https://blog.startcom.org
--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Return of i18n attacks with the help of wildcard certificates

2009-02-26 Thread Gervase Markham

On 26/02/09 11:05, Jean-Marc Desperrier wrote:

Eddy Nigg wrote:

On 02/25/2009 08:31 PM, Gervase Markham:

On 23/02/09 23:54, Eddy Nigg wrote:
[...]

Only CAs are relevant if at all. You don't expect that 200 domain names
were registered by going through anti-spoofing checking and
measures, do
you?!

[...]


Ouch, sorry! That should have been 200 *million* domain names were
registered by going through some anti-spoofing checking and measures...


The vast majority of those domain names are ASCII, not IDN.


So I think Mozilla should require that the procedure be fully
automated, and not accept any solution that requires human
intervention to approve requests, even if only for a portion of them.


Why? If a registrar wants to do all their checking manually, what's that 
to us? It'll raise their costs, but that's their business.


Gerv
--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: ComSign Root Inclusion Request

2009-02-26 Thread Kyle Hamilton
2009/2/26 Eddy Nigg eddy_n...@startcom.org:
 On 02/26/2009 04:18 PM, stefan.claes...@gmail.com:

 The CRLs that you have problems with are generated manually through
 our offline CA (RSA Certificate Manager). When generating manually, you
 just copy
 the CRL into Notepad and save it as a .crl file.


 It's very easy to convert them to DER afterward. You can do it even now. Are 
 you using OpenSSL or another tool?

Any recent (i.e., 0.9.7 or 0.9.8) version of openssl can do this.  The
command line to do so is:

openssl crl -inform PEM -in [PEMCRLfile] -outform DER -out [DERCRLfile]

This works at least on UNIX, and on Windows if you have a compiled
copy of openssl.  As this is a security-conscious tool, I
would recommend compiling it from source yourself -- but not on the
machine that contains the offline CA (it involves installing the
compiler and the development kit, and that's a lot of unaudited
software to be running on a critical system).

I am not sure how NSS's crlutil handles PEM, or which tool would be
used to de-PEM the target.

-Kyle H
--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: ComSign Root Inclusion Request

2009-02-26 Thread David E. Ross
On 2/26/2009 1:48 AM, Kyle Hamilton wrote [in part]:

 There's a potentially problematic practice here: the long time
 period between CRL issuance.  I'm seeing issuance dates of October 6,
 2008, with the next updates expected on April 4, 2009.  I expect
 this is 180 days, though I don't feel like counting through my
 calendar to verify that.
 
According to Excel, it's exactly 180 days.

-- 
David E. Ross
http://www.rossde.com/

Go to Mozdev at http://www.mozdev.org/ for quick access to
extensions for Firefox, Thunderbird, SeaMonkey, and other
Mozilla-related applications.  You can access Mozdev much
more quickly than you can Mozilla Add-Ons.
--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: ComSign Root Inclusion Request

2009-02-26 Thread kathleen95014
 There's a potentially problematic practice here: the long time
 period between CRL issuance.

My understanding is that the update frequency of CRLs is important
for the end-entity certificates, not necessarily at the CA
level.

These URLs are the CRLs at the CA level, and their update interval is
indeed long unless there is a revocation of a sub-CA:
http://fedir.comsign.co.il/crl/ComSignCA.crl
http://fedir.comsign.co.il/crl/ComSignSecuredCA.crl

These URLs are at the end-entity cert level:
http://fedir.comsign.co.il/crl/ServerCA.crl
http://fedir.comsign.co.il/crl/corporate.crl
You will see that their next expected update is tomorrow.

Regarding the CRL update frequency for end-entity certs, ComSign’s
CPS Section 4.4.2 says “ComSign will publish a new CRL the earliest of
not later than every 24 hours or immediately following revocation of a
certificate.”
--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: ComSign Root Inclusion Request

2009-02-26 Thread Nelson B Bolyard
Kyle Hamilton wrote, On 2009-02-26 07:49:

 I am not sure how NSS's crlutil handles PEM, 

It doesn't.  It requires DER.

 or which tool would be used to de-PEM the target.

   grep -v "X509 CRL" crl.pem | atob -o crl.der

This uses the atob utility, which is one of NSS's command-line tools.
--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Return of i18n attacks with the help of wildcard certificates

2009-02-26 Thread Paul Hoffman
At 12:49 PM +0100 2/26/09, Jean-Marc Desperrier wrote:
Just one thing : The use of a wildcard certificate was a misleading red 
herring in the implementation of the attack.

What's truly broken is that the current i18n attack protection relies on the 
checking done by the registrar/IDN, and that the registrar/IDN can only check 
the second-level domain name component.

Once they have obtained their domain name, attackers can freely use the 
third-level domain name component to implement any i18n attack they want, even 
if no wildcard certificate is authorized.

The author was showing that even looking at the lock doesn't help in a spoofing 
attack if the attacker has a wildcard certificate. In this way, it is an attack 
improvement.

This is not to say that wildcard certificates are not bad, evil, or anything 
else, but that nothing truly new has been brought about by this attack.

So talk about wildcard certificate all you want, but this is a separate 
discussion from the discussion about the solution for this new i18n attack.
And the solution for it will not be wildcard certificate related, will not be 
easy or obvious, and so needs to be discussed as widely as possible.
Also, there will be no crypto involved in the solution, as it's not acceptable 
to just leave ordinary DNS users out in the cold with regard to the 
attack. So it needs to be discussed in the security group, not crypto.

We disagree here: it should be discussed in both places. In security, it is 
what should the browser do about spoofing. In crypto-policy (or whatever that 
list will be called when it is turned on), it should be how wildcards assist in 
the attack if a user is looking at the lock.
--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Take my database of certs/ssl details from high-traffic sites, please!

2009-02-26 Thread Wan-Teh Chang
On Wed, Jan 21, 2009 at 6:50 AM, Johnathan Nightingale
john...@mozilla.com wrote:
 Hi folks,

 I just posted a blog entry here about a side project I've had running for a
 little while:

 http://blog.johnath.com/2009/01/21/ssl-information-wants-to-be-free/

 The very short version is that I crawled the top 1M sites (according to
 Alexa) to harvest some basic SSL information, including the end-entity
 certs, and dumped it all into an SQLite database.

Hi,

Here are the MD5 certificate numbers we measured using Google Chrome's
usage statistics collection service:
http://dev.chromium.org/developers/md5-certificate-statistics

Wan-Teh Chang
--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Take my database of certs/ssl details from high-traffic sites, please!

2009-02-26 Thread Paul Hoffman
Here are the MD5 certificate numbers we measured using Google Chrome's
usage statistics collection service:
http://dev.chromium.org/developers/md5-certificate-statistics

I don't see any way to edit that page, so I'll have to correct it here. The 
first sentence is deceptively wrong, as we have discussed on this mailing list 
many times. The attack is not on CAs that issue certificates signed with 
MD5-based signatures; it is on CAs that issue certificates signed with 
MD5-based signatures and whose serial numbers and dates of issue and revocation 
are predictable. There is a huge difference.

This makes the second sentence, "As a result, some browser developers are 
planning to drop support of MD5 certificates at some point", somewhat wrong as 
well. It would be much better stated as "Because a browser cannot determine 
whether or not a CA uses unpredictable serial numbers and dates of issue and 
revocation, some browser vendors..."

--Paul Hoffman
--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: ComSign Root Inclusion Request

2009-02-26 Thread Nelson B Bolyard
After quoting a passage from ITU document X.690, whose title is:
ASN.1 encoding rules:
Specification of Basic Encoding Rules (BER),
Canonical Encoding Rules (CER) and
Distinguished Encoding Rules (DER)
on 2009-02-26 01:40 PST, Kyle Hamilton wrote,

 I have not received a specific answer to my query: 
 is "-----BEGIN X509 CRL-----" a valid DER sequence?

Kyle, the answer to that question is in the document from which you
quoted.  See the section entitled Basic Encoding Rules.  All other
forms of encoding are derivatives of BER, so understanding BER is
the foundation.  I encourage you to read it.

The short answer is: No.  No data made up entirely of printable ASCII
strings is a valid DER or BER encoding.

PEM's only real value is that it allows data to be copied and pasted
into and out of text documents.  The base64 content is no more
enlightening, and IMO is significantly less informative, than the
binary DER.  PEM encoding adds a MINIMUM 33% overhead in amount of data
that must be sent. That's a 33% tax on all the bandwidth providers and
memory chips, just to allow copying into text documents.  It also requires
an additional pass over all the data to translate it to/from PEM form,
because the binary form is always the one that is valuable to the software.
Use of PEM may be appropriate in text documents, but CRLs being fetched over
the wire are fundamentally NOT text documents.
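The 33% figure follows from base64 itself: every 3 input bytes become 4 output characters, so the encoded body is at least 4/3 the size of the DER, before the armor lines and newlines are counted. A quick check (GNU coreutils `base64` assumed for the `-w0` flag):

```shell
# base64 turns every 3 input bytes into 4 output characters, so a
# 300-byte blob encodes to 400 characters: ~33% overhead, before the
# -----BEGIN/-----END armor and line breaks add even more.
raw_bytes=300
encoded_chars=$(head -c "$raw_bytes" /dev/zero | base64 -w0 | wc -c | tr -d ' ')
echo "$raw_bytes bytes -> $encoded_chars base64 characters"
```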

To make all users pay a 33% bandwidth tax, so that the occasional and rare
person who wants to actually look at the contents can avoid binary, is
silly.  Further, you can find out what kind of document it is from the
MIME content type of the http response, so the PEM header is redundant.

The main point here is that, on the Internet, interoperability is
achieved based on compliance with standards.  A standard that says
"there's one way to do this" is generally preferred in the IETF
(but not in the ITU, obviously) to a standard that says "there are
many ways to do this and you must implement them all to be interoperable".

Many of the places where software products already permit a variety of
data formats to be used exist simply because, at the time that the
specification for that software was written, it failed to adequately
specify data formats, usually because it never occurred to the authors
that more than one format was likely to ever be used.  It seemed obvious
which was the right format, so no comment about format was provided.
That is the reason for utterly absurd implementations like the one
described in
https://developer.mozilla.org/en/NSS_Certificate_Download_Specification
which accepts certificates in no fewer than 8 different data formats.

I applaud the IETF for specifying the formats for CRLs in sufficient
detail that products need not carry multiple implementations in order to
find one that works.  I lament that some products promote the use of
non-standard formats, and that some so-called authorities are unaware of
the standards' requirements.
--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto