Re: Entropy of certificate serial number

2019-04-11 Thread Hector Martin 'marcan' via dev-security-policy
On 06/04/2019 03.01, Lijun Liao via dev-security-policy wrote:
> 5. Related to how the MD5 attacks you might be right. But theoretically,
> and also in practice, if you have enough bits to play and the hash
> algorithm is not cryptographically secure, you can find a collision with
> less complexity than the claimed one.

No, not in practice. There are different levels of "not
cryptographically secure". What you are talking about is preimage
resistance - the ability to construct an input to the hash algorithm
that produces a given, fixed, arbitrary output. There are no such
practical attacks on MD5 or SHA-1.

What the serial number entropy requirement seeks to mitigate are
collision attacks, in particular chosen-prefix collision attacks. This
is the attack that was used to break MD5. This means that you can
construct two messages with the same hash, by modifying both, given a
chosen (known, not modifiable) prefix for each message. Due to the
Merkle-Damgård construction of MD5 and SHA-1, these collisions are also
inherently arbitrary-suffix (after you get two partial messages to
collide, you can append the same arbitrary data to both and they will
still collide).
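
You can check the arbitrary-suffix property directly with the public SHA-1
collision files (a quick sketch, assuming the two shattered.io PDFs are
sitting in the current directory):

import hashlib

# shattered-1.pdf and shattered-2.pdf are the publicly released SHA-1
# collision pair from shattered.io; both files have the same length.
a = open("shattered-1.pdf", "rb").read()
b = open("shattered-2.pdf", "rb").read()

assert a != b
assert hashlib.sha1(a).digest() == hashlib.sha1(b).digest()

# Merkle-Damgård: once two equal-length messages collide, appending the same
# arbitrary suffix to both preserves the collision.
suffix = b"any trailing data you like"
assert hashlib.sha1(a + suffix).digest() == hashlib.sha1(b + suffix).digest()
print("still colliding after appending a suffix")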

The serial number entropy requirement effectively mitigates collision
attacks, because the serial number is one of the first pieces of
information in the certificate, well before any attacker-controlled
data. In order to implement a chosen-prefix collision attack, you need
to predict the serial number. If the serial number has at least 64 bits
of entropy, then you would have to try to obtain around 2^63 colliding
certificates on average to match a precomputed collision. Note that the
birthday paradox does not apply here, because each certificate can only
be obtained against one specific collision attempt; it doesn't matter if
you compute 2^32 collisions and then try to get 2^32 certificates,
because *each one* of those certificates has to be obtained for a
*single* collision attempt embedded into it.
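
To put rough numbers on that (a back-of-the-envelope sketch, assuming 64 bits
of serial entropy and an attacker who manages to obtain 2^32 certificates):

import math

N = 2.0 ** 64     # size of the serial space: 64 bits of entropy
n = 2.0 ** 32     # certificates the attacker manages to have issued

# Birthday bound: chance that some *pair* of issued serials collide with each
# other. Irrelevant here, since each certificate only helps the attacker if it
# matches a specific predicted value.
birthday = 1 - math.exp(-n * (n - 1) / (2 * N))

# What actually matters: each requested certificate only "wins" if the CA
# happens to pick the exact serial embedded in that certificate's own
# precomputed collision, so the total success chance is only about n / 2^64.
targeted = n / N

print(birthday)   # ~0.39
print(targeted)   # ~2.3e-10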

Note that the practical SHA-1 attack that was demonstrated was even
weaker than this, as it wasn't a chosen-prefix attack (each message has
a different prefix), but rather an identical-prefix attack (each message
has the *same* prefix, and the messages only differ in the
collision-generating blocks). This is less powerful, but still
sufficient for practical attacks, e.g. I bet you could combine it with
the X.509 structure to yield useful conditional parsing, much like the
demonstrated SHA-1 collision combined it with the JPEG structure to
yield conditional parsing. The serial number entropy requirement also
mitigates this weaker attack, of course.

-- 
Hector Martin "marcan" (mar...@marcan.st)
Public Key: https://mrcn.st/pub
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Survey of (potentially noncompliant) Serial Number Lengths

2019-03-18 Thread Hector Martin 'marcan' via dev-security-policy
On 19/03/2019 02.17, Rob Stradling via dev-security-policy wrote:
> On 18/03/2019 17:05, Kurt Roeckx wrote:
>> On Mon, Mar 18, 2019 at 03:30:37PM +, Rob Stradling via 
>> dev-security-policy wrote:
>>>
>>> When a value in column E is 100%, this is pretty solid evidence of
>>> noncompliance with BR 7.1.
>>> When the values in column E and G are both approximately 50%, this
>>> suggests (but does not prove) that the CA is handling the output from
>>> their CSPRNG correctly.
>>
>> Should F/G say >= 64, instead of > 64?
> 
> Yes.  Fixed.  Thanks!

Perhaps it would make sense to separate out <64, ==64, >64?

100% "64-bit" serial numbers would indicate an algorithm using 63 bits
of entropy and the top bit coerced to 1.
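
A quick way to see the signature (a sketch simulating the two generation
strategies):

import secrets
from collections import Counter

def length_distribution(gen, trials=100_000):
    return Counter(gen().bit_length() for _ in range(trials))

raw     = lambda: secrets.randbits(64)                  # 64 bits of entropy
coerced = lambda: secrets.randbits(63) | (1 << 63)      # top bit forced to 1

print(length_distribution(raw))      # ~50% 64-bit, ~25% 63-bit, ~12.5% 62-bit, ...
print(length_distribution(coerced))  # 100% exactly 64-bit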


-- 
Hector Martin "marcan" (mar...@marcan.st)
Public Key: https://mrcn.st/pub
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Pre-Incident Report - GoDaddy Serial Number Entropy

2019-03-18 Thread Hector Martin 'marcan' via dev-security-policy

On 15/03/2019 13:26, Peter Gutmann via dev-security-policy wrote:

> I actually thought it was from "Chosen-prefix collisions for MD5 and
> applications" or its companion papers ("Short chosen-prefix collisions for MD5
> and the creation of a rogue CA certificate", "Chosen-Prefix Collisions for MD5
> and Colliding X.509 Certificates for Different Identities"), but it's not in
> any of those.  Even the CCC talk slides only say "We need defense in depth ->
> random serial numbers" without giving a bit count.  So none of the original
> cryptographic analysis papers seem to give any value at all.  It really does
> seem to be a value pulled entirely out of thin air.


To be honest, I see this as a proxy for competence. Complying with the 
spec strictly means you're doing the right thing. Complying with the 
spec minus a tiny margin of error for practical reasons means you're 
probably fine. Mucking things up due to misunderstood notions of entropy 
and thus dropping entropy on the floor means you probably shouldn't be 
writing CA software. The fact that the bar was pulled from thin air is 
irrelevant; nobody here is suggesting that those using 63 bits of 
entropy actually *introduced* a tangible security problem. They just 
didn't follow the BR rules, which are there to be followed.


Thus:

a) If you use >64 bits of CSPRNG output raw, you're fine (assuming any 
practical CA size).


b) If you use exactly 64 bits of CSPRNG output with duplicate and zero 
checks, such that the odds of those checks ever *actually* rejecting a 
serial number based on the number of issued certs are < (say) 1%, then 
you're fine. For all *practical* intents and purposes you have 64 bits 
of entropy.


Ideally you'd have used more bits to avoid risking a duplicate serial 
discarding entropy, but at 64 bits, and doing the math for the birthday 
problem, the threshold for 1% chance is at about 608 million 
certificates [1]. If you've issued fewer than that number, you have a 
less than 1% chance of having ever rejected a single serial number due 
to the duplicate check (the zero check is negligible in comparison). It 
can be argued that if the problematic code never ran, then it might as 
well have never been there. Of course, *proving* that the code never ran 
is unlikely to be possible. Ultimately, entropy was reduced by the 
presence of that code, even if the outcome was identical to what it 
would have been had that code not been there.
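
As a quick numeric cross-check of that 1% threshold (a sketch; [1] below gives
the same birthday bound solved in the other direction):

import math

def p_duplicate_seen(n, bits=64):
    # Birthday bound: probability that at least one duplicate 64-bit serial
    # was ever generated (and hence rejected) after issuing n certificates.
    return 1 - math.exp(-n * (n - 1) / (2.0 * 2 ** bits))

print(p_duplicate_seen(608_926_881))    # ~0.010, the threshold from [1]
print(p_duplicate_seen(100_000_000))    # ~0.0003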


That said, it's quite *reasonable* to write the code this way; no 
strange misunderstandings are required. You had 64 bits of entropy and 
you applied a required check that negligibly reduced that; it's 
certainly better to lose a tiny fraction of a bit of entropy than to 
risk a duplicate serial.


c) If you're dropping serials with e.g. the top 8 bits set to zero (as 
per Daymion's algorithm), then you very clearly have 63.994353 bits of 
entropy, for no good reason. This is problematic, because it clearly 
demonstrates a *misunderstanding* of how entropy works and what the 
whole point of using 64 bits of raw CSPRNG output is. This is a BR 
violation in any strict reading, and readily provable by looking at 
serial number distribution.


d) If you're coercing the top bit (EJBCA defaults), then that's clearly 
63 bits of entropy, not 64, and of course a BR violation; it doesn't 
take a cryptanalyst to realize this, and anyone who isn't trying to 
"creatively interpret" the rules to weasel out of admitting 
non-compliance can see that 63 < 64 and there's no way to have 64 bits 
of entropy in a number where one bit never changes.


[1] See 
https://en.wikipedia.org/wiki/Birthday_problem#Cast_as_a_collision_problem :


>>> math.sqrt(2*(2**64) * math.log(1/(1 - 0.01)))
608926881.2334852

--
Hector Martin "marcan"
Public key: https://mrcn.st/pub
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Pre-Incident Report - GoDaddy Serial Number Entropy

2019-03-18 Thread Hector Martin 'marcan' via dev-security-policy

On 15/03/2019 07:13, Jaime Hablutzel via dev-security-policy wrote:

>> 64bits_entropy = GetRandom64Bits() //This returns 64 random bits from a
>> CSPRNG with at least one bit in the highest byte set to 1
>>
>> is, strictly speaking, not true. The best possible implementation for
>> GetRandom64Bits(), as described, only returns 63.994353 bits of entropy,
>> not 64.
> 
> Can you share how did you get the previous 63.994353?.
> 
> I'm trying the following and I'm getting a different value:
> 
> a = 2^64 = 18446744073709551616
> b = 0x80 = 36028797018963968
> 
> (a - b) / a * 64 = 63.875
> 
> Maybe I'm misunderstanding something.


Entropy in bits is measured as the log2 of the possible values. So:

>>> math.log2(2**64)
64.0

Of 64-bit numbers, 255/256 have at least one bit set in the highest byte 
(only those whose highest byte is 0x00 don't), so:


>>> math.log2(2**64 * 255/256)
63.99435343685886

--
Hector Martin "marcan"
Public key: https://mrcn.st/pub
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: CAA records on a CNAME

2019-03-18 Thread Hector Martin 'marcan' via dev-security-policy

On 18/03/2019 16:42, Corey Bonnell wrote:
> Perhaps not very elegant, but you can encode an “allow all issuers” CAA
> RRSet by specifying a single iodef CAA record without any
> issue/issuewild records in the RRSet, which will probably be treated as
> permission to issue for all CAs. I say “probably” because the RFC wasn’t
> clear on the proper handling of RRSets with no issue/issuewild property
> tags, but this was clarified in the CAB Forum in Ballot 219
> (https://cabforum.org/2018/04/10/ballot-219-clarify-handling-of-caa-record-sets-with-no-issue-issuewild-property-tag/)
> to explicitly allow the above behavior (although of course some CAs may
> be more restrictive and disallow issuance in this case).


Huh, I hadn't considered that interpretation. Indeed, a strict reading 
of the RFC suggests that would work. It seems an arbitrary non-defined 
non-critical CAA tag record should work too (if using an actual iodef is 
undesirable for some reason). Maybe such a tag should be defined for 
this purpose?
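
For concreteness, the RRSet shape Corey describes would look something like 
this (the contact address is just a placeholder):

; An RRSet with only an iodef property and no issue/issuewild tags, which
; Ballot 219 clarified CAs may treat as permission for any CA to issue
; (some CAs may still choose to decline).
example.com.   IN CAA 0 iodef "mailto:security@example.com"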


Though this won't help Amazon/Google/etc, as having a higher-level CAA 
record would require tree-climbing on CNAME targets, which was removed 
by errata 5065. Sorry for the noise.


--
Hector Martin "marcan"
Public key: https://mrcn.st/pub
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: CAA records on a CNAME

2019-03-17 Thread Hector Martin 'marcan' via dev-security-policy

On 16/03/2019 10:25, Jan Schaumann via dev-security-policy wrote:

> someapp.example.com, over which I have control is a CNAME, so I can't
> set a CAA record there.  Let's say the CNAME points to
> ghs.googlehosted.com.
> 
> Your suggestion is to contact Google and ask them to please add a CAA
> record to that domain for a CA that a third-party (to them and myself)
> chooses.  My experience has been that Google, Akamai, Cloudflare,
> Amazon, and Microsoft etc. are not amenable to adding such records.


I think part of the problem here is that the CAA specification has no 
"allow all" option. Third party hosting providers probably want to allow 
their customers to use any CA they wish, so they lack CAA records. 
However, there is no way to specify this once you have already 
encountered a CAA record, so by the time you follow the CNAME, you're stuck.


The default CAA behavior can *only* be specified by default, by the 
absence of records throughout the entire tree. In my opinion this is an 
oversight in the CAA specification.


My use case is different, but somewhat related. I host services for an 
event under example.com, e.g. <service>.example.com, but I also have a 
dynamic user.example.com zone (several, actually) where users 
automatically get a dynamic record assigned at 
<username>.user.example.com. I use Let's Encrypt for all of the 
services. Currently I have a CAA record for example.com, but this also 
locks all the dynamic users into using Let's Encrypt, while I want them 
to be free to use any CA. I could instead have a CAA record for each 
<individual service>.example.com, but this is harder to manage and less 
secure, as it would allow issuance for all nonexistent names. Ideally I 
would just set a CAA record for "*" on user.example.com and that would 
solve the issue, but the current CAA specification has no mechanism for 
this.


If CAA had a way to signal an explicit "allow all", then third party 
hosting services could signal that on their overall zones in order to 
solve this problem with CNAMEs (of course, customers who wish to lock 
down their CAA records for such third-party hosted domains would have to 
get CAA records added to them, but I think that makes more sense as an 
explicit thing rather than breaking CNAMEs by default).


--
Hector Martin "marcan"
Public key: https://mrcn.st/pub
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Pre-Incident Report - GoDaddy Serial Number Entropy

2019-03-12 Thread Hector Martin 'marcan' via dev-security-policy
On 13/03/2019 05.38, Ryan Sleevi via dev-security-policy wrote:
> Note that even 7 bytes or less may still be valid - for example, if the
> randomly generated integer was 4 [1], you might only have a one-byte serial
> in encoded form ( '04'H ), and that would still be compliant. The general
> burden of proof would be to demonstrate that these certificates were
> generated with that given algorithm you described above.
> 
> [1] https://xkcd.com/221/

Not only that, but, in fact, any attempt to guarantee certain properties
of the serial (such that it doesn't encode to 7 bytes or less) *reduces*
entropy.

In particular,

64bits_entropy = GetRandom64Bits() //This returns 64 random bits from a
CSPRNG with at least one bit in the highest byte set to 1

is, strictly speaking, not true. The best possible implementation for
GetRandom64Bits(), as described, only returns 63.994353 bits of entropy,
not 64.

Now whether 0.57% of a bit worth of entropy matters for practical
purposes, and for BR compliance purposes, is another matter entirely,
but the point is that *any* subsequent filtering and rejection of
serials with certain properties only *hurts* entropy, it doesn't help it.


-- 
Hector Martin "marcan" (mar...@marcan.st)
Public Key: https://mrcn.st/pub
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: What's the meaning of "non-sequential"? (AW: EJBCA defaulting to 63 bit serial numbers)

2019-03-12 Thread Hector Martin 'marcan' via dev-security-policy
On 12/03/2019 21.10, Mike Kushner via dev-security-policy wrote:
>>> There are no, and has never been any, 63 bit serial numbers created by 
>>> EJBCA.
>>
>> ... lead me to significantly reduce my trust in those making them, and 
>> their ability to correctly interpret security-critical standards in the 
>> future. Not everyone gets things right the first time, but owning up to 
>> problems, understanding the technical issue at hand, and accepting 
>> responsibility is a basic tenet of earning community trust.
> 
> I'm sorry you feel that way, but here's the thing. EJBCA produces whatever 
> length serial numbers you request from it, restricted to an even octet and 
> within the span of 4 to 20. EJBCA set to produce 8 octet serial numbers will 
> produce exactly 64 bit serial numbers, including the MSB. Are you suggesting 
> that a logical behavior for a 8 octet serial number would be to produce a 9 
> octet serial number and pad the first 7 bits? 
> EJBCA will produce exactly the serial number you've specified, and give you 
> as much entropy as your serial length allows. 

It's clear that multiple CAs made a configuration mistake here, and in
general when multiple users make the same mistake when configuring
software, that points to a usability problem in the software. Your
statement is just shoving the entirety of the issue on CAs, while
picking the interpretation most favorable to EJBCA. While the ultimate
responsibility certainly lies with the CAs, it is not helpful for EJBCA
to be so dismissive of the subject.

We can make several accurate statements about EJBCA configured to an
8-octet serial number size (the default):

- It generates serial numbers with 63 bits of entropy (or negligibly
less, if we consider the duplicate-removal code)
- It generates serial numbers from 1 to 2**63 - 1
- It generates serial numbers with 63 bits of effective output from a
CSPRNG (the MSB having been coerced to zero, and thus effectively
eliminated; that this is done by "trying until you get lucky" is an
irrelevant implementation detail and has no bearing on the result)
- It generates 8-byte serial numbers in encoded DER form (which is 64
bits worth of DER).

In other words, only after DER encoding does the serial number become 64
bits in length. The statement "There are no, and has never been any, 63
bit serial numbers created by EJBCA." presumes that bit length is being
measured at one specific point consistent with what EJBCA is actually
doing, which may not be what users expect.

I would in fact expect that if software is taking N bits of output from
a CSPRNG (with the goal of providing N bits of entropy), that it would
then encode it as a positive integer in DER, which indeed requires
adding an extra zero octet to contain the sign bit when the MSB of the
original value is 1. In fact, I would dare say that the output size is
less likely to be relevant to users than the amount of entropy contained
within, and that if a fixed output size is desired, a solution much less
likely to result in people shooting themselves in the foot is to prepend 
a fixed constant byte in the range 0x01-0x7f to the serial before encoding.
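
A sketch of that alternative (not what EJBCA does; the prefix byte here is an 
arbitrary example):

import secrets

def make_serial(prefix_byte=0x4c):
    # Keep the full 64 bits of CSPRNG output and pin the encoded length by
    # prepending a fixed byte in the 0x01-0x7f range, so the value is always
    # positive and always occupies exactly 9 octets, with no sign padding.
    assert 0x01 <= prefix_byte <= 0x7f
    return (prefix_byte << 64) | secrets.randbits(64)

s = make_serial()
encoded = s.to_bytes(9, "big")        # constant length, MSB of first byte is 0
assert len(encoded) == 9 and encoded[0] == 0x4c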

The EJBCA configuration file defaults state, verbatim:
> # The length in octets of certificate serial numbers generated. 8 octets is a 
> 64 bit serial number.
> # It is really recommended to use at least 64 bits, so please leave as 
> default unless you are really sure, 
> # and have a really good reason to change it.
> # Possible values: between 4 and 20
> # Default: 8
> #ca.serialnumberoctetsize=8

Considering:

1) The BRs require 64 bits of output from a CSPRNG (which can only be
reasonably interpreted by anyone familiar with the subject as 64 bits
of entropy; anything else is just 'creative interpretation')
2) The configuration file *explicitly* discourages changes.
3) All references are to "64 bits", with no mention that this refers to
the *encoded* serial number and, thus, one of those bits is always zero

Then it's not surprising that multiple CAs made the same mistake; heck,
I probably would've done the same too, without reviewing the code.

> EJBCA is a general CA implementation with multiple use cases, so it's not 
> built to specifically conform to cabf requirements. As Ryan Sleevi pointed 
> out - It is up to the end customer to understand their own requirements, and 
> to understand that a 64 bit signed integer can in no way or fashion contain 
> 64 bits of entropy. 
> 
> Unless you're going under the presumption that the MSB doesn't count as a 
> part of the serial number (and I've never seen an RFC or requirement pointing 
> to that being the case, EJBCA does not produce 63 bit serial numbers. 

The MSB is part of the serial number *encoding*. It is part of the
serial number field as encoded in a DER certificate. It is not part of
the *number* itself, the integer. For example, the MSB has no meaning
when certificate serial numbers are e.g. expressed in integer decimal
notation. A 64-bit positive integer with its MSB set simply takes nine
octets (one extra leading zero octet) to encode in DER.

Re: What's the meaning of "non-sequential"? (AW: EJBCA defaulting to 63 bit serial numbers)

2019-03-12 Thread Hector Martin 'marcan' via dev-security-policy

On 12/03/2019 07:54, Ryan Sleevi via dev-security-policy wrote:

> On Mon, Mar 11, 2019 at 5:35 PM Buschart, Rufus via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
> 
>> Since choice 1 is a logical consequence of "containing 64 bits of random
>> data", I was always under the impression, that choice 2 was meant by the
>> BRGs. If choice 1 is meant, then I think the requirement of being
>> 'non-sequential' is just some lyrical sugar in the BRGs. Maybe there is a
>> third definition of "sequential" that I haven't thought of?
> 
> I had definitely seen it as lyrical sugar, trying to *really* hammer the
> point of concern (of predictable serials). This is an example where
> providing guidance in-doc can lead to more confusion, rather than less.
> 
> For example, a "confused" reading of the BR requirement would say "at least
> 64-bits of entropy" by generating a random number once [1] and including it
> in all subsequent serials, monotonically increasing +1 each time :)


Sony tried this (minus the increment) when generating random nonces for 
ECDSA signing, apparently generating the nonce once during key 
generation, and reusing it for all subsequent signing operations [1]. 
That worked wonders for them *cough* :)


I think when it comes to specifications with cryptographic relevance (as 
unpredictable serials are), less is more; the more inflexible and 
unambiguous the spec is, the less likely it will be "creatively 
interpreted" in a manner that bypasses the whole point. To someone with 
crypto experience and an understanding of the intent, the current 
language clearly means "take 64 bits from a CSPRNG once, put whatever 
you want around them (or nothing), DER encode, and stuff it into the 
serial field". But clearly some implementers interpreted this 
differently, and here we are.


That said, I do think the current exercise is, shall we say, bringing 
out some interesting opinions on what an appropriate response to the 
problem is. Statements such as:



> There are no, and has never been any, 63 bit serial numbers created by EJBCA.


... lead me to significantly reduce my trust in those making them, and 
their ability to correctly interpret security-critical standards in the 
future. Not everyone gets things right the first time, but owning up to 
problems, understanding the technical issue at hand, and accepting 
responsibility is a basic tenet of earning community trust.


[1] https://mrcn.st/t/1780_27c3_console_hacking_2010.pdf pp. 122-129

--
Hector Martin "marcan"
Public key: https://mrcn.st/pub
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Online exposed keys database

2018-12-27 Thread Hector Martin 'marcan' via dev-security-policy

On 19/12/2018 20:09, Rob Stradling via dev-security-policy wrote:

> I'm wondering how I might add a pwnedkeys check to crt.sh.  I think I'd
> prefer to have a table of SHA-256(SPKI) stored locally on the crt.sh DB.


Yes, I think the right approach for an upstream source is to provide a 
big list of hashes. People can then postprocess that into whatever 
database or filter format they want. For example, this is how Pwned 
Passwords does things, and I wrote a bloom filter implementation to 
import that for production usage (with parameters tuned for my personal 
taste of false positive rate, etc).
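
For example, a minimal sketch of the filter-on-top-of-a-hash-list idea (not 
the implementation mentioned above; the sizing parameters are just 
illustrative):

import hashlib
import math

class BloomFilter:
    # Size the bit array for a target false-positive rate and feed it
    # SHA-256(SPKI) digests (or any other fixed identifier you like).
    def __init__(self, n_items, fp_rate=0.001):
        self.m = math.ceil(-n_items * math.log(fp_rate) / (math.log(2) ** 2))
        self.k = max(1, round(self.m / n_items * math.log(2)))
        self.bits = bytearray((self.m + 7) // 8)

    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(i.to_bytes(4, "big") + item).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        return all(self.bits[pos // 8] >> (pos % 8) & 1
                   for pos in self._positions(item))

bf = BloomFilter(n_items=1_000_000, fp_rate=0.001)
spki_hash = hashlib.sha256(b"example SubjectPublicKeyInfo DER").digest()
bf.add(spki_hash)
assert spki_hash in bf   # no false negatives; ~0.1% false positives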


--
Hector Martin "marcan"
Public key: https://mrcn.st/pub
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: DNS fragmentation attack subverts DV, 5 public CAs vulnerable

2018-12-11 Thread Hector Martin 'marcan' via dev-security-policy
On 12/12/2018 01.47, Ryan Sleevi via dev-security-policy wrote:
> Is this new from the past discussion?

I think what's new is someone actually tried this, and found 5 CAs that
are vulnerable and for which this attack works in practice.

> https://groups.google.com/d/msg/mozilla.dev.security.policy/KvQc102ZTPw/iLQLKfbbAwAJ

Looking back, this attack is also documented in the paper linked in that
thread, but unfortunately it's not open access. I get the feeling this
may be why that discussion didn't really proceed further in that thread.
I certainly missed it.

The paper does list the vulnerable CAs, which are:

> • COMODO, InstantSSL, NetworkSolutions, SSL.com: these CAs
> use the same MX email server mcmail1.mcr.colo.comodo.net
> which uses the same caching DNS resolver. The results from our
> cache overwriting methods indicates that the DNS resolver software
> is New BIND 9.x with DNSSEC-validation.
> • Thawte, GeoTrust, RapidSSL: use the same MX server and
> resolution platform.
> • StartCom, StartSSL: both use the same email server and the
> same DNS resolver.
> • SwissSign: uses New BIND 9.x

> I think we're at the stage where it's less about a call to abstract action,
> and actually requires specific steps being taken, by CAs, to explore and
> document solutions. Saying "push for DNSSEC" doesn't actually lead to
> objective threat analysis' and their mitigations.

My last paragraph was intended as two separate statements; DNSSEC solves
this problem but we can't magically flip a switch and make everyone do
that (heck, my own TLD's registrar still doesn't support it, and yes,
I've been pestering them about it). Given that, I think CAs should be
required to implement practical mitigations against this particular
attack, and I'm hoping that by pointing this out we can start a
discussion about what those mitigations should look like :-)

As you've noted, Let's Encrypt seems to be leading on this front. It
would be interesting to see if any other CAs can document their approach
to mitigating this issue, if any.

-- 
Hector Martin "marcan" (mar...@marcan.st)
Public Key: https://mrcn.st/pub
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Trustico code injection

2018-03-01 Thread Hector Martin 'marcan' via dev-security-policy
On 2018-03-02 15:24, Todd Johnson wrote:
> Did *anyone* capture this information in a way that can be proven?  
> 
> While I personally would not trust any content from either hostname, the
> Twitter post referenced earlier is not sufficient proof of key compromise.

Unfortunately, the server quickly went down after the vulnerability was
publicly posted (as you might expect when throwing a root shell to
Twitter), and now that it is back up the vulnerable endpoints return
404. I'm not sure if anyone managed to capture further proof of the
extent of the breach. That Twitter thread is pretty damning, though,
even if it may not qualify as proof of key compromise.

I think the more interesting question here will be Trustico's response,
if any. Will they report the potential key compromise to Comodo and
request a revocation and reissuance? Or will they just pretend the
Twitterverse didn't have root on the server almost certainly holding
their private key for a while? Will they be transparent about their
storage of customer private keys, and exactly how it was implemented and
whether this compromise could've further compromised those keys?

And what does Comodo think of all of this?

-- 
Hector Martin "marcan" (mar...@marcan.st)
Public Key: https://mrcn.st/pub
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Trustico code injection

2018-03-01 Thread Hector Martin 'marcan' via dev-security-policy
On 2018-03-02 13:32, grandamp--- via dev-security-policy wrote:
> The web site is back up, with the same certificate being used.  That said, it 
> *is* possible that the certificate was managed by their load balancing 
> solution, and the private key for (trustico.com) was not exposed.
> 
> trustico.co.uk appears to be the same web site, yet it has a *different* 
> certificate.

The code injection occurred on an interface they had to check the
certificate of an arbitrary server. When 127.0.0.1 was used, the
trustico.com certificate was returned. That means the local web server
was handling TLS, not a remote load balancer solution (unless somehow
127.0.0.1 was forwarding to a remote host, which doesn't really make any
sense).

-- 
Hector Martin "marcan" (mar...@marcan.st)
Public Key: https://mrcn.st/pub
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Trustico code injection

2018-03-01 Thread Hector Martin 'marcan' via dev-security-policy
On 2018-03-02 02:56, Hector Martin 'marcan' via dev-security-policy wrote:
> On 2018-03-02 00:28, Hanno Böck via dev-security-policy wrote:
>> Hi,
>>
>> On twitter there are currently some people poking Trustico's web
>> interface and found trivial script injections:
>> https://twitter.com/svblxyz/status/969220402768736258
>>
>> Which seem to run as root:
>> https://twitter.com/cujanovic/status/969229397508153350
>>
>> I haven't tried to reproduce it, but it sounds legit.
> 
> Unsurprisingly, the entire server is now down. If Trustico are lucky,
> someone just `rm -rf /`ed the whole thing. If they aren't, they now have
> a bunch of persistent backdoors in their network.
> 
> Now the interesting question is whether this vector could've been used
> to recover any/all archived private keys.
> 
> As I understand it, Trustico is in the process of terminating their
> relationship with Digicert and switching to Comodo for issuance. I have
> a question for Digicert, Comodo, and other CAs: do you do any vetting of
> resellers for best practices? While clearly most of the security burden
> rests with the CA, this example shows that resellers with poor security
> practices (archiving subscriber private keys, e-mailing them to trigger
> revocation, trivial command injection vulnerabilities, running a PHP
> frontend directly as root) can have a significant impact on the security
> of the WebPKI for a large number of certificate holders. Are there any
> concerns that the reputability of a CA might be impacted if they
> willingly choose to partner with resellers which have demonstrated such
> problems?

According to this report, 127.0.0.1 returned the SSL certificate of the
Trustico server itself. This is evidence that no reverse proxy was in
use, and thus, the private key of trustico.com was directly exposed to
the code execution vector and could've been trivially exfiltrated:
https://twitter.com/ebuildy/status/969230182295982080

Therefore, it is not unreasonable to assume that this key has been
compromised.

The certificate in use is this one:
https://crt.sh/?id=206535041

At this point I would expect Comodo should revoke this certificate due
to key compromise within the next 24 hours.

-- 
Hector Martin "marcan" (mar...@marcan.st)
Public Key: https://mrcn.st/pub
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Trustico code injection

2018-03-01 Thread Hector Martin 'marcan' via dev-security-policy
On 2018-03-02 00:28, Hanno Böck via dev-security-policy wrote:
> Hi,
> 
> On twitter there are currently some people poking Trustico's web
> interface and found trivial script injections:
> https://twitter.com/svblxyz/status/969220402768736258
> 
> Which seem to run as root:
> https://twitter.com/cujanovic/status/969229397508153350
> 
> I haven't tried to reproduce it, but it sounds legit.

Unsurprisingly, the entire server is now down. If Trustico are lucky,
someone just `rm -rf /`ed the whole thing. If they aren't, they now have
a bunch of persistent backdoors in their network.

Now the interesting question is whether this vector could've been used
to recover any/all archived private keys.

As I understand it, Trustico is in the process of terminating their
relationship with Digicert and switching to Comodo for issuance. I have
a question for Digicert, Comodo, and other CAs: do you do any vetting of
resellers for best practices? While clearly most of the security burden
rests with the CA, this example shows that resellers with poor security
practices (archiving subscriber private keys, e-mailing them to trigger
revocation, trivial command injection vulnerabilities, running a PHP
frontend directly as root) can have a significant impact on the security
of the WebPKI for a large number of certificate holders. Are there any
concerns that the reputability of a CA might be impacted if they
willingly choose to partner with resellers which have demonstrated such
problems?

-- 
Hector Martin "marcan" (mar...@marcan.st)
Public Key: https://mrcn.st/pub
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: 2018.01.09 Issue with TLS-SNI-01 and Shared Hosting Infrastructure

2018-01-13 Thread Hector Martin 'marcan' via dev-security-policy
On 2018-01-13 12:38, josh--- via dev-security-policy wrote:
> Another update, the main thing being that we have deployed patches to our CA 
> that allow TLS-SNI for both renewal and whitelisted accounts, as we said we 
> would in our previous update:
> 
> https://community.letsencrypt.org/t/tls-sni-challenges-disabled-for-most-new-issuance/50316

Would it make sense to effectively allow "self-service" whitelisting by
using a DNS TXT record? This would allow a static DNS configuration (no
need for dynamic records as in DNS-01) and basically allow TLS-SNI-01
users to continue using their existing setup. The record would basically
be an assertion that yes, the domain owner allows the usage of
TLS-SNI-01 and the server it is pointed to will not allow third-party
provisioning of acme.invalid certs.

Another suggestion is to use an SRV record for TLS-SNI-01 validation.
This would serve as an assertion that the method is acceptable and also
allow choosing a different port or even a different hostname/IP
altogether. Supporting this for HTTP-01 would also make sense, e.g. that
would allow using certbot in standalone mode on a nonstandard port,
making it perhaps one of the simplest and most universal validation
configurations, working with any server software as long as you can
provision a single static DNS record.
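
To illustrate both ideas, purely hypothetical record shapes (nothing like this 
is defined in the ACME spec today; the names are made up):

; TXT-based self-service whitelisting: a static assertion that TLS-SNI-01
; is acceptable for this name and that the target server controls
; acme.invalid certificate provisioning.
example.com.                      IN TXT "acme-allow-validation=tls-sni-01"

; SRV-based variant: additionally direct validation at an explicit
; host and port.
_acme-tls-sni._tcp.example.com.   IN SRV 0 0 8443 validator.example.com.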

-- 
Hector Martin "marcan" (mar...@marcan.st)
Public Key: https://mrcn.st/pub
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Possible future re-application from WoSign (now WoTrus)

2017-11-24 Thread Hector Martin 'marcan' via dev-security-policy
On 2017-11-22 21:10, Rob Stradling via dev-security-policy wrote:
> On 22/11/17 11:45, marcan via dev-security-policy wrote:
>> On 22/11/17 20:41, Tom via dev-security-policy wrote:
 Although not listed in the Action plan in #1311824, it is noteworthy
 that Richard Wang has apparently not been relieved of his other
 responsibilities, only the CEO title
>>>
>>> Do you have a link about the relieved of the CEO title?
>>>
>>> https://www.wosign.com/english/about.htm has been updated with the new
>>> name, WoTrus, and currently says "Richard Wang, CEO"
>>>
>>
>> It was discussed here in the past (and IIRC was part of the requirements 
>> for re-inclusion, since he was a large part of the problem), but the 
>> fact that so far it seems Richard Wang has been the main person to 
>> interact on this mailing list from the WoSign (now WoTrus) side makes me 
>> wonder if that wasn't all a ruse. He certainly seems to still be very 
>> much in charge.
> 
> "Richard Wang will be relieved of his duties as CEO of WoSign and other 
> responsibilities" seems to be a forward-looking statement with no firm 
> implementation date.  I think we should at least give WoTrus an 
> opportunity to clarify Richard's position before we pass judgment on 
> whether or not this was "all a ruse".

It's worth considering the implications of him remaining on board for an
extended period of time. Presumably the reason why him leaving was made
a requirement was because he has lost trust with the community and it
was deemed that he was directly responsible for a lot of WoSign's woes.
If that is the case, then it stands to reason that removing him as soon
as possible would be the best course of action for WoSign in order to
improve their security and recover community trust.

After all, if Richard Wang has been running the ship all along, then
leaves the day before a re-inclusion request is filed, should the
community trust the system and company which were built under his watch?
Sure, this meets the letter of the requirements, but I think it's fair
to say it wouldn't meet the spirit, or at least reduce confidence and
WoSign's chances for re-inclusion.

-- 
Hector Martin "marcan" (mar...@marcan.st)
Public Key: https://mrcn.st/pub
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Efficient test for weak RSA keys generated in Infineon TPMs / smartcards

2017-10-20 Thread Hector Martin 'marcan' via dev-security-policy

On 17/10/17 20:36, Nick Lamb via dev-security-policy wrote:

> The bitmasks are effectively lists of expected remainders for each small prime,
> if your modulus has an expected remainder for all the 20+ small primes that
> distinguish Infineon, there's a very high chance it was generated using their
> hardware


Yup, that seems to be it. In fact, according to [1], those lists are 
just an optimization for the check N^r = 1 mod p for various values of 
r,p (plus some dummy entries with all bits but bit 0 set to 1, which are 
useless and apparently further obfuscation; they can be removed to speed 
up the test with no effect on the outcome). I believe further tests can 
be constructed following that same pattern to further reduce the false 
positive rate.


Here's a non-obfuscated version of the modulus check without the 
redundant entries:


https://mrcn.st/p/MOEoh2EH
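
The idea boils down to something like this rough sketch (the primes below are 
an illustrative subset, not the exact list the tool uses):

# A modulus from the affected generator satisfies N = 65537^k (mod p) for
# each small prime p, i.e. N mod p lies in the multiplicative subgroup
# generated by 65537 -- equivalently N^r = 1 mod p, with r the order of
# 65537 mod p.
SMALL_PRIMES = [11, 13, 17, 19, 37, 53, 61, 71, 73, 79, 97]

def subgroup(p, g=65537):
    s, x = set(), 1
    while x not in s:
        s.add(x)
        x = x * g % p
    return s

def looks_like_infineon(n):
    # True for every affected modulus; false positives shrink as more
    # primes are added to the list.
    return all(n % p in subgroup(p) for p in SMALL_PRIMES)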

(It's kind of sad seeing trivial obfuscation in a tool like this; come 
on guys, this isn't going to slow anyone down, it just makes you look 
silly.)


FWIW, I tested 8 keys generated by affected Yubikeys and all failed the 
test (as in were detected), so it seems this issue affects 100% of 
generated keys, not just some fraction (or at least 100% of keys 
generated on affected hardware are detected by the test tool regardless 
of how vulnerable they are).


[1] https://crypto.stackexchange.com/questions/52292/what-is-fast-prime

--
Hector Martin "marcan"
Public key: https://mrcn.st/pub
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy