Hi,

Yeah, that attack would still work; however, these attack vectors can be
mitigated using domain configuration (DNS records). For example, a DNS
TXT record for _acme.example.com could contain a configuration string
describing how the challenge can be completed. This would then also make
it possible to run the challenge on a port other than 443 or 80, and
could additionally contain a flag or a variable specifying what the
hostname suffix for the challenge should be. Since the DNS record needs
to be created by the domain owner, we are not introducing security risks
there, and it only has to be done once, so doing that manually once is
acceptable.
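To make this concrete, here is a sketch of what such a record could look
like, in zone-file syntax. To be clear, the _acme label, the key names,
and the key=value format are all my invention; nothing like this is
specified anywhere yet:

```
; Hypothetical configuration record (format entirely made up for
; illustration): allow the http challenge on port 888 and expect the
; acme.invalid hostname suffix.
_acme.example.com.  3600  IN  TXT  "port=888; suffix=acme.invalid"
```

Since each TXT record string is limited to 255 octets, a short key=value
list like this fits comfortably in a single record.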

I'm sure that protecting a lot of people is the goal of such specs.
However, if we want to be very strict, the only challenge that proves
ownership of the domain is the DNS challenge; everything other than that
is just a heuristic around it. Both the TLS and the HTTP challenges only
prove ownership of the machine that the domain points to, and not even
that, since Windows does not implement the notion of privileged ports
(so are we discriminating against, rather than protecting, Windows users
now?). Even with these caveats, we are trying to protect users of
non-compliant webserver configurations. I'm pretty sure that according
to the spec, a request made to a non-existent vhost must respond with a
404, which means that these vulnerable webservers are not complying with
the specs, the rules. If we continue with this train of thought, how do
we want to protect users of systems where the privileged ports are
misconfigured, or where the notion does not exist? Linux is not the only
operating system out there, and not all of them have this notion (like
Windows). Where do we draw the line?

With all that said I do see your points, so let me amend my suggestion to
one of these:
  1. If a correct DNS TXT entry exists, let the challenge be answered on
a port other than 443 or 80 (probably restricted to < 1024, but as I
said earlier that's just a heuristic, although it should protect a good
chunk of users, and everyone should be able to find a port below 1024)
  2. Run the check on a SPECIFIC non-443 port, always, and register that
port with IANA for this purpose; this enables the daemon to be run on it
  3. If a correct DNS TXT entry exists, let the ServerName in the http
challenge contain a suffix for all checks
  4. If a correct DNS TXT entry exists, let the ServerName be
example.com.acme.invalid, but expect the challenge to respond with the
same certificate (or perhaps with a SAN certificate containing all
iterations?)
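As an illustration of options 1 and 3 above, here is a rough Python
sketch of how a validation server could interpret such a TXT record
before running the HTTP challenge. The record format, the key names, and
the defaults are all my assumptions for illustration; none of this is
specified anywhere:

```python
# Hypothetical parser for a TXT record such as
# "port=888; suffix=acme.invalid". Record format, key names, and
# defaults are assumptions made up for this sketch.

def parse_acme_txt(record):
    """Return (port, suffix) to use for the HTTP challenge."""
    port, suffix = 80, None  # defaults: plain HTTP challenge, no suffix
    for part in record.split(";"):
        part = part.strip()
        if not part or "=" not in part:
            continue
        key, _, value = part.partition("=")
        if key == "port":
            p = int(value)
            if p < 1024:  # the "below 1024" heuristic from option 1
                port = p
        elif key == "suffix":
            suffix = value
    return port, suffix


port, suffix = parse_acme_txt("port=888; suffix=acme.invalid")
# The validator would then connect to example.com:888 and send
# Host: example.com.acme.invalid instead of example.com:80.
```

Option 4 could be handled the same way, by deriving the expected
certificate name from the configured suffix.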

In any case, please be on the lookout for automating certificate
issuance at the sysadmin level without having certbot modify the
configuration of other software. That is NOT a good idea, especially
with Chef, which checks the system state from time to time and makes
sure everything is configured as it expects it to be; that creates a
race condition around the configuration of nginx, for example. And yes,
the system should be easy and secure for the average Joe, but keep an
eye out for the advanced sysadmins, who will be using the service a lot
more, and who will hate you if it cannot be automated in an isolated,
robust manner.

Also, serving requests intended for certbot from a vhost that may have
no knowledge that its certificate is issued using ACME is probably not
good for isolation.

Akos Vandra

On 26 November 2016 at 19:43, Patrick Figel <[email protected]> wrote:

> I have two concerns about this proposal.
>
> First, there's a good chance that the vulnerability that caused
> SimpleHTTP to be deprecated[1] would work. In short, if a multi-tenant
> hosting environment does not set a default virtual host explicitly,
> commonly-used web server software would pick the first virtual host it
> encounters when parsing the config as the default vhost. An attacker on
> the same hosting infrastructure as the victim could attempt to
> forcefully get into that spot (they're typically sorted by alphabet, so
> that's fairly easy) and would then be able to solve the challenge if
> the validation server uses the example.com.acme.invalid Host header,
> which is unknown to the web server and would cause it to fall back to
> the default vhost.
>
> Second, I don't think this validation mechanism would be compatible
> with the Baseline Requirements, section 3.2.2.4.6 Agreed-Upon Change to
> Website[2]. This would limit the usefulness of such a mechanism in
> ACME, as any publicly-trusted CA would not be able to use it (at least
> once that section goes into effect).
>
> Patrick
>
> [1]: https://mailarchive.ietf.org/arch/msg/acme/B9vhPSMm9tcNoPrTE_LNhnt0d8U
> [2]: https://github.com/cabforum/documents/pull/25/files?short_path=7f6d14a#diff-7f6d14a20e7f3beb696b45e1bf8196f2
>
> On Sat, Nov 26, 2016 at 7:13 PM, Akos Vandra <[email protected]> wrote:
> > Hello,
> >
> > This has been copied over from github issues letsencrypt/acme-spec#242
> > and ietf-wg-acme/acme#215:
> >
> > As you seem to be strongly concerned over adding the possibility to
> > do the challenge over alternate ports (some of these concerns are
> > valid, but all of them can be handled in a secure way if we are
> > careful, e.g. using a DNS record and the other solutions that have
> > already been stated in here [among the GH issue list]), I propose to
> > add at least an alternate hostname for the http challenge.
> >
> > This would enable us to add a specific virtual host answering to
> > *.acme.invalid and redirect that to a specific web host, which can be
> > used by certbot to place the necessary files to complete the
> > challenge.
> >
> > It is currently not possible to add global aliases in nginx for
> > example, so this way it would be possible to issue certificates
> > without having to either have certbot modify the nginx config (yes,
> > it's possible to do that in a zero-downtime way, but it makes my
> > sysadmin inner self scream: my software reconfiguring files generated
> > by other software...), or have to add the location redirect to all
> > vhosts, which is not as easy as it sounds when they get generated by
> > different software components.
> >
> > If we'd say that the challenge has to respond to the Host header of
> > example.com.acme.invalid OR example.com (the current one as fallback),
> > that would make all our lives easier, while maintaining a) backward
> > compatibility b) ease of use for the average Joe.
> >
> > Not as easy as with a custom port, but still a lot easier to automate.
> >
> > Additional info:
> >
> > kelunik commented 4 hours ago
> > @axos88 You can redirect /.well-known/acme-challenge/* to another
> > (virtual) host at any time. The validation authority will follow any
> > redirects. You could also use includes to define a common web root
> > just for /.well-known/acme-challenge, that's what I usually do.
> >
> > axos88 commented 2 hours ago • edited
> > As stated, this is not that easy to do when the server configuration
> > is generated by different software components (chef cookbooks) that
> > one does not have control over. Unfortunately there are no global
> > aliases / redirects in nginx, only per server.
> >
> > This would also mean that in order to use letsencrypt, one has to
> > MODIFY the current configuration, rather than ADD a new virtualhost
> > declaration. Modifying something generated by some other actor is
> > always a bad idea (this is one of the reasons conf.d directories
> > exist, btw).
> >
> > axos88 commented 2 hours ago
> > For example: I have an automated installer for a web application,
> > running over, let's say, Ruby on Rails. The application is obviously
> > unaware, and should NEVER be aware, of how it's exposed to the
> > internet. Thus it is unaware of how its SSL certificate is obtained
> > and installed (normally it wouldn't even run on https, but would rely
> > on a forward proxy to terminate the ssl connection and forward it
> > using http, but that's another matter).
> >
> > Now my automated installer installs this software, and also installs
> > and configures nginx for forward proxying. It will configure the
> > nginx vhost and other things that are needed. I don't know that LE
> > exists; my installer just asks for the path to a certificate and a
> > key.
> >
> > Now I sell my software to a third party, who uses my installer (chef
> > cookbook) to install my software on THEIR infrastructure. THEY are
> > smarter than me, and know that LE exists, and want to use it to
> > create the certs.
> >
> > Current options:
> >
> > - They start hacking around the nginx configuration generated by my
> >   installer and add the alias - not good, the next update will
> >   overwrite their changes, and they won't be able to renew.
> > - They stop nginx for the duration of the verification every time
> >   they need to upgrade the certs - unacceptable.
> > - They use the dns challenge if they can - usually it cannot be
> >   automated, or it is a great effort to add dns records automatically.
> > - They use the tls challenge (although it doesn't support nginx yet),
> >   and they modify its configuration during every verification,
> >   reloading its configuration, etc. This can easily create problems
> >   if someone is maintaining the server at the same time.
> > - OR: They create a virtualhost accepting connections for
> >   *.acme.invalid once during installation, redirect it to a webroot,
> >   and have the verification client drop files into that webroot.
> >   Configure it once, and it works. No need to modify configuration
> >   files generated by other installers, no need to keep reloading the
> >   nginx configuration all the time, less possibility for failure.
> >
> > axos88 commented an hour ago
> > And let's face it, validation requests to a vhost have NOTHING to do
> > with the software that serves the content on that server. They are
> > intended for a totally different actor (certbot), thus they should be
> > routed to a different vhost, not mingled into all the other ones as
> > locations, aliases, and such.
> >
> > kelunik commented an hour ago
> > > unnecessary to keep reloading the nginx configuration all the time
> > You have to do that anyway for Nginx to use the new certificate
> > instead of the old one.
> >
> > Anyway, this is something that should be in the official repository
> > instead and on the ACME mailing list.
> >
> > axos88 commented 3 minutes ago
> > > unnecessary to keep reloading the nginx configuration all the time
> > > You have to do that anyway for Nginx to use the new certificate
> > > instead of the old one.
> > True true, but at least you are not modifying configuration.
> >
> > Let me know what you think.
> >
> > _______________________________________________
> > Acme mailing list
> > [email protected]
> > https://www.ietf.org/mailman/listinfo/acme
> >
>
