Ted Wynnychenko wrote:
Hello
For many years now I have been using a DNS black hole setup to stop http/https
connections to blocked websites (well, any connection to those sites). This has
worked well.
Connections with http are routed to an IP on the internal network which returns
a simple "blocked" web page.
Connections with https come back to the browser complaining of a certificate
error (obviously, the web server at the redirected IP cannot present a valid
certificate for any of the blocked sites).
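(For reference, the blackhole side is just resolver configuration. With
unbound, for example, a blocked zone looks roughly like this; the zone name
and internal IP below are made up:

# unbound.conf sketch: answer for a blocked zone locally, pointing
# every name under it at the internal "blocked" web server
local-zone: "blocked-site.example" redirect
local-data: "blocked-site.example. A 192.0.2.10"
)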
This really isn't a big deal, but more sites have started using https, and
tools such as relayd and squid (and others?) have developed ways to "inject"
https certificates on the fly. So I am wondering: is there a way, using an
internal CA, to create https certificates based solely on the hostname the
client asked for (the SNI name in the connection attempt), so that blocked
HTTPS connections don't produce certificate errors?
In other words, rather than a full "SSL-MITM" setup, where the proxy connects
out to the ultimate destination before responding to the client with a forged
certificate, all I want is for the "proxy" to generate a certificate for the
requested name, signed by a locally trusted CA, and then return a static
"blocked" web page.
This (to me) seems simpler than what has already been accomplished with relayd.
I have been looking at relayd, and I don't think it will do what I want (or, at
least, I can't figure it out). I also have been unable to find anything else
that will help me with this.
Are there any tools available to do what I am looking for? Or is there a way
to set up relayd to accomplish this?
Thanks
If all your internal clients trust a CA you control, just have it issue
a certificate with a common name of * and install that cert on your
web server. It's how we do MitM virus scanning at my day job.
-CA