Re: [squid-users] FW: Encrypted browser-Squid connection errors
On 2022-11-02 15:35, Grant Taylor wrote:
> On 11/1/22 6:27 PM, squid3 wrote:
>> The working ones deliver an HTTP/1.1 302 redirect to their company's homepage if the request came from outside the company LAN. If the request came from an administrator's machine it may respond with stats data about the node being probed.
>
> I suspect that Squid et al. could do similar. ;-)

Yes, they can be configured to do so if you need it. Neither outcome avoids the problem that the client was trying to interact with an entirely different resource on another server, whose info has been lost implicitly by the protocol syntax.

>> I take it from your statement you have not worked on networks like web-cafes, airports, schools, hospitals, public shopping malls who all use captive portal systems, or high-security institutions capturing traffic for personnel activity audits.
>
> I have worked in schools, and other public places, some of which had a captive portal that intercepted to a web server to process registration or flat blocked non-proxied traffic. The proxy server in those cases was explicit.

They missed a trick then. If the registration process is simple, it can be done by Squid with a session helper and two listening ports. We even ship some ERR_AGENT_* templates for captive portal use.

> The current default doesn't work on servers using NLD Active API Server.

Reference? Google is not providing me with anything HTTP-capable by that name or the obvious sub-sets.

>> And you were specifying the non-default-'http-alt' port via the "http://" scheme in yours. Either way these are two different HTTP syntaxes with different "default port" values. An agent supporting the http:// URL treats it as a request for some resource at the HTTP origin server indicated by the URL authority part or Host header. An agent supporting the http-alt:// URL treats it as a request to forward-proxy the request-target specified in the URL query segment, using the upstream proxy indicated by the URL authority part or Host header.
> If I'm understanding correctly, this is a case of someone asking Bob to connect to Bob. That's not a thing. Just talk directly to Bob.

http-alt://bob?http://alice/some/resource is instructing a client to ask proxy (Bob) to fetch /some/resource from origin (Alice). All the client "explicit configuration" is in the URL, rather than client config files or environment variables.

>> The ones I am aware of are:
>> * HTTP software testing and development
>> * IoT sensor polling
>> * printer network bootstrapping
>> * manufacturing controller management
>> * network stability monitoring systems
>
> Why is anything developed in the last two decades green fielding with HTTP/0.9?!?!?!

The IoT stuff at least. The others are getting old, but more like 10+ years rather than 20+.

Cheers
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
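The http-alt:// form described above can be decomposed mechanically. A minimal sketch in Python, assuming the draft URL layout Amos describes (the scheme is not registered anywhere, and `split_http_alt` is an illustrative name, not an existing API):

```python
from urllib.parse import urlsplit

def split_http_alt(url):
    """Decompose http-alt://<proxy-authority>?<target-URL> into the
    upstream proxy to contact and the request-target it should fetch."""
    parts = urlsplit(url)
    if parts.scheme != "http-alt":
        raise ValueError("not an http-alt URL")
    proxy = parts.netloc    # upstream proxy authority (host[:port])
    target = parts.query    # the URL the proxy is asked to forward to
    return proxy, target

proxy, target = split_http_alt("http-alt://bob?http://alice/some/resource")
# proxy is the authority "bob"; target is "http://alice/some/resource"
```

Note that all of the "explicit configuration" really is carried by the URL itself: the authority names the proxy, the query names the origin resource.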
Re: [squid-users] FW: Encrypted browser-Squid connection errors
On 11/1/22 6:27 PM, squ...@treenet.co.nz wrote:
> No, you cropped my use-case description. It specified a client which was *unaware* that it was talking to a forward-proxy.

Sorry, that was unintentional.

> Such a client will send requests that only a reverse-proxy or origin server can handle properly - because they have explicit special configuration to do so.

ACK

> In all proxying cases there is special configuration somewhere. For forward-proxy it is in the client (or its OS so-called "default"), for reverse-proxy it is in the proxy, for interception-proxy it is in both the network and the proxy.

ACK

> The working ones deliver an HTTP/1.1 302 redirect to their company's homepage if the request came from outside the company LAN. If the request came from an administrator's machine it may respond with stats data about the node being probed.

I suspect that Squid et al. could do similar. ;-)

> Almost all the installs I have worked on had interception as part of their configuration.

Fair enough.

> It is officially recommended to include interception as a backup to explicit forward-proxy for networks needing full traffic control and/or monitoring.

I've taken things one step further. I forego the interception and simply have the firewall / router hard block traffic not from the proxy server. }:-) But short of that, I see and acknowledge the value of interception.

> I take it from your statement you have not worked on networks like web-cafes, airports, schools, hospitals, public shopping malls who all use captive portal systems, or high-security institutions capturing traffic for personnel activity audits.

I have worked in schools, and other public places, some of which had a captive portal that intercepted to a web server to process registration or flat blocked non-proxied traffic. The proxy server in those cases was explicit.

> There are also at least a half dozen nation states with national firewalls doing traffic monitoring and censorship.
> At least 3 of the ones I know of use Squid for the HTTP portion.

I'm aware of a small number of such nation states. I assume that there are many more. I was not aware that Squid played in that arena.

> ACK. That is you. I am coming at this from the maintainer viewpoint where the entire community's needs have to be balanced.

I maintain that the /default/ does not have to work for /all/ use cases. I agree that the /default/ should work for /most/ use cases. The current default doesn't work on servers using NLD Active API Server. Ergo the current default doesn't work on /all/ use cases. }:-)

> And you were specifying the non-default-'http-alt' port via the "http://" scheme in yours. Either way these are two different HTTP syntaxes with different "default port" values. An agent supporting the http:// URL treats it as a request for some resource at the HTTP origin server indicated by the URL authority part or Host header. An agent supporting the http-alt:// URL treats it as a request to forward-proxy the request-target specified in the URL query segment, using the upstream proxy indicated by the URL authority part or Host header.

If I'm understanding correctly, this is a case of someone asking Bob to connect to Bob. That's not a thing. Just talk directly to Bob.

> The ones I am aware of are:
> * HTTP software testing and development
> * IoT sensor polling
> * printer network bootstrapping
> * manufacturing controller management
> * network stability monitoring systems

Why is anything developed in the last two decades green fielding with HTTP/0.9?!?!?!

> I doubt anyone can quantify it accurately. But worldwide use of HTTP/1.1 is also dropping, and at a faster rate than 0.9/1.0 right now as the more efficient HTTP/2+ expand.

Sure. There should be three categories of migrations:
HTTP/0.9 to something
HTTP/1.0 to something
HTTP/1.1 to HTTP/2
I sincerely hope that the somethings are going to HTTP/1.1 or HTTP/2.

> HTTP/1.1 specification requires semantic compatibility.
> So long as 1.1 is still a thing the older versions are likely to remain as well. Undesirable as that may be.

ACK

--
Grant. . . . unix || die
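The firewall hard-block Grant describes (forcing all web traffic through the explicit proxy) can be sketched as netfilter rules. This is an untested illustration only; the proxy address 192.0.2.10 and the rule placement are placeholders, and a real deployment would scope the rules to the LAN interface:

```
# Reject forwarded HTTP/HTTPS traffic unless it originates at the proxy.
# 192.0.2.10 stands in for the proxy server's address.
iptables -A FORWARD -p tcp -m multiport --dports 80,443 \
         ! -s 192.0.2.10 -j REJECT --reject-with tcp-reset
```

Clients then have no working path to the web except via the explicitly configured proxy, with no interception needed.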
Re: [squid-users] Does Squid support client ssl termination?
On 2022-11-02 13:58, mingheng wang wrote:
> On Wed, Nov 2, 2022 at 6:17 AM squid3 wrote:
>> SSL-Bump implies interception of TLS
>> * intercept may happen at network level (port 443 redirect or NAT)
>> * intercept may be entirely within Squid (CONNECT tunnel unwrapped)
>>
>> Decryption is independent of interception.
>> a) SSL-Bump 'bump' action performs decrypt (the others do not)
>> b) a TLS forward/explicit-proxy performs decrypt
>> c) a TLS reverse-proxy performs decrypt
>>
>> Traffic from (a) case requires re-encrypt before sending, even if its URL indicates insecure protocols.
>
> I don't understand. According to the wiki on Squid that I read, there are several steps involving "peek", "bump" or "splice" etc; we can already choose to bump or splice through SNI at step2. So why does HTTP have to be encrypted too?

Those "steps" are points along the TLS handshake sequence; the actions are things Squid can be asked to do at each step. The peek/splice/stare/terminate actions do not decrypt, so do not matter. The 'bump' action uses details from the origin TLS server certificate and maybe initiates a TLS session between client and server. That means a) there needs to be a TLS server to fetch those details from, and b) the decrypted traffic can only be sent to that TLS server.

Thus delivery of traffic to the server requires re-encryption with the security keys 'bump' negotiated with the server already (so your split-in-half idea breaks).

These limits are all specific to SSL-Bump decrypted traffic. Different details/restrictions apply to Squid operating as TLS reverse-proxy or TLS explicit forward-proxy. I assume that you have already considered those setups before settling on SSL-Bump intercepting TLS.

HTH
Amos
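For reference, the step/action pairing Amos describes is typically expressed in squid.conf along these lines. This is a sketch, not a drop-in config: the certificate paths and the `nobump` ACL contents are placeholders, and option spelling varies between Squid versions:

```
# Intercept TLS on a dedicated port; certificate details are placeholders.
https_port 3130 intercept ssl-bump \
    tls-cert=/etc/squid/ca.pem tls-key=/etc/squid/ca.key

acl step1 at_step SslBump1
acl nobump ssl::server_name .bank.example   # sites to leave encrypted

ssl_bump peek step1      # step 1: look at the client SNI, decrypt nothing
ssl_bump splice nobump   # pass listed servers through without decryption
ssl_bump bump all        # otherwise decrypt (and re-encrypt server-side)
```

The point of the thread is visible in the last line: what `bump` decrypts must be re-encrypted toward the real TLS server whose certificate was mimicked; it cannot be diverted in plaintext elsewhere.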
Re: [squid-users] Does Squid support client ssl termination?
On Wed, Nov 2, 2022 at 6:17 AM wrote:
> On 2022-11-02 07:49, Grant Taylor wrote:
> > On 11/1/22 11:33 AM, squid3 wrote:
> >> That is not true as a blanket statement.
> >
> > Please clarify which statement / who you are addressing.
> >
> > It seems as if you're addressing mingheng (copied below for convenience):
> >
> > Yes I was addressing mingheng's statement.
> >
> > On 10/31/22 7:32 PM, mingheng wang wrote:
> >> I delved into the configuration the last few days, and found that Squid doesn't officially support cache_peer when ssl_bump is in use.
> >
> > But you may be addressing my statement (...):
> >
> > On 11/1/22 10:44 AM, Grant Taylor wrote:
> >> That surprises me. I wonder if it's a technical limitation or an oversight.
> >
> > On 11/1/22 11:33 AM, squ...@treenet.co.nz wrote:
> >> What Squid officially *does not* support is decrypting traffic then sending the un-encrypted form to a HTTP-only cache_peer.

Oh sorry, I meant to say that. I tested setting a sslstrip proxy as cache_peer when ssl_bump was in use, and it didn't work. I was too focused on my setup and forgot the case of regular proxies.

> > Please elaborate. I'm trying to develop a mental model of what is and is not supported with regard to client / proxy / server communications. I'm unclear on how this applies to the two potential HTTPS streams; client-to-proxy and proxy-to-server.
>
> Okay, some info that may help with that mental model...
>
> The first thing you need to do is avoid that "HTTPS" term. It has multiple meanings and they cause confusion. Instead decompose it into its TLS and HTTP layers.
>
> * A client can use TCP or TLS to connect to a proxy.
>   - this is configured with http_port vs https_port
>
> * Independently of the connection type the client can request http:// or https:// URLs or CONNECT tunnels.
>
> * Independent of what the client is doing/requesting, a cache_peer may be connected to using TCP or TLS.
>   - this is configured with cache_peer tls options (or their absence)
>
> * Independent of anything else, a cache_peer MAY be asked to open a CONNECT tunnel for opaque uses.
>   - this is automatically decided by Squid based on various criteria.
>
> TCP is the foundation layer. On top of that can be HTTP transfer or TLS transfer. Transfer layers can be nested infinitely deep in any order.
>
> So "HTTPS" can mean any one of things like:
> 1) HTTP-over-TLS (how Browsers handle https:// URLs)
> 2) HTTP-over-TLS (sending http:// URLs over a secure connection)
> 3) HTTP-over-TLS-over-TLS (relay (1) through a secure cache_peer)
> 4) HTTP-over-TLS-over-HTTP (relay (1), (2) or (3) through an insecure cache_peer via CONNECT tunnel)
>
> Each agent along the chain can add or remove any number of transfer layers to the protocol X-over-Y stack. Although for efficiency most prefer to minimize the layering depth.
>
> A typical web request may flow across the Internet through a chain of proxies like this:
>
> client -(1)-> S1 =(4)=> S2 =(1)=> S3 -(2)-> O
>
> C = origin client
> S1 = forward-proxy
> S2 = insecure relay proxy
> S3 = TLS terminating reverse-proxy
> O = origin server
>
> > Or if this is more applicable to TLS-Bump on implicit / network transparent / intercepting proxies where the client thinks that it's talking HTTPS to the origin server and the proxy would really be downgrading security by stripping TLS.
>
> It's *more* important with SSL-Bump 'bump' due to the interception nature of that operation. But also applies to other cases.
>
> SSL-Bump implies interception of TLS
> * intercept may happen at network level (port 443 redirect or NAT)
> * intercept may be entirely within Squid (CONNECT tunnel unwrapped)
>
> Decryption is independent of interception.
> a) SSL-Bump 'bump' action performs decrypt (the others do not)
> b) a TLS forward/explicit-proxy performs decrypt
> c) a TLS reverse-proxy performs decrypt
>
> Traffic from (a) case requires re-encrypt before sending, even if its URL indicates insecure protocols.

I don't understand. According to the wiki on Squid that I read, there are several steps involving "peek", "bump" or "splice" etc; we can already choose to bump or splice through SNI at step2. So why does HTTP have to be encrypted too?

Anyway, essentially what I need is like splitting Squid into two parts: the client-side part communicates with a client over a connection with dynamically generated certificates in order to fool the client when dealing with HSTS, while forwarding traffic unencrypted to the "other part" of Squid somewhere, which in turn establishes a new connection with the original server to do the bump thing and so on. Since Squid doesn't support this, I'll stop fiddling with it. I think HTTP isn't a very complicated protocol, and most HTTP libraries can handle TLS as well. Perhaps it won't be hard to write a simple proxy for personal use and Squid even has a
Re: [squid-users] FW: Encrypted browser-Squid connection errors
On 2022-11-02 09:03, Grant Taylor wrote:
> On 11/1/22 1:24 PM, squid3 wrote:
>> No I meant W3C. Back in the before times things were a bit messy.
>
> Hum. I have more questions than answers. I'm not aware of W3C ever assigning ports. I thought it was /always/ IANA.

>> Indeed, thus we cannot register it with IEFT/IANA now. The IANA http-alt port would probably be best if we did go official.
>
> ACK

>> You see my point I hope. A gateway proxy that returns an error to *every* request is not very good.
>
> Except it's not "/every/ /request/". It's "/every/ /request/ /of/ /a/ /specific/ /type/" where type is an HTTP version.

No, you cropped my use-case description. It specified a client which was *unaware* that it was talking to a forward-proxy. Such a client will send requests that only a reverse-proxy or origin server can handle properly - because they have explicit special configuration to do so.

In all proxying cases there is special configuration somewhere. For forward-proxy it is in the client (or its OS so-called "default"), for reverse-proxy it is in the proxy, for interception-proxy it is in both the network and the proxy.

> What does CloudFlare or any of the other big proxy services or even other proxy applications do if you send them an HTTP/1.0 or even HTTP/0.9 request without the associated Host: header?

The working ones deliver an HTTP/1.1 302 redirect to their company's homepage if the request came from outside the company LAN. If the request came from an administrator's machine it may respond with stats data about the node being probed.

>> There is no "configured proxy" for this use-case. Those are the two most/extremely common instances of the problematic use-cases. All implicit use of proxy (or gateway) have the same issue.
>
> How common is the (network) transparent / intercepting / implicit use of Squid (or any proxy for that matter)? All of the installs that I've worked on (both as a user and as an administrator) have been explicit / non-transparent.
Almost all the installs I have worked on had interception as part of their configuration. It is officially recommended to include interception as a backup to explicit forward-proxy for networks needing full traffic control and/or monitoring.

I take it from your statement you have not worked on networks like web-cafes, airports, schools, hospitals, public shopping malls who all use captive portal systems, or high-security institutions capturing traffic for personnel activity audits. There are also at least a half dozen nation states with national firewalls doing traffic monitoring and censorship. At least 3 of the ones I know of use Squid for the HTTP portion.

>> I think you are getting stuck with the subtle difference between "use for case X" and "use by default". ANY port number can be used for *some* use-case(s).
>
> Sure.

>> "by default" has to work for *all* use-cases.
>
> I disagree.

ACK. That is you. I am coming at this from the maintainer viewpoint where the entire community's needs have to be balanced.

>> Note that you are now having to add a non-default port "8080" and path "/" to the URL to make it valid/accepted by the Browser.
>
> You were already specifying the non-default-http port via the "http-alt://" scheme in your example.

And you were specifying the non-default-'http-alt' port via the "http://" scheme in yours. Either way these are two different HTTP syntaxes with different "default port" values. An agent supporting the http:// URL treats it as a request for some resource at the HTTP origin server indicated by the URL authority part or Host header. An agent supporting the http-alt:// URL treats it as a request to forward-proxy the request-target specified in the URL query segment, using the upstream proxy indicated by the URL authority part or Host header.

>> Clients speaking HTTP origin-form (the http:// scheme) are not permitted to request tunnels or equivalent gateway services. They can only ask for resource representations.
>
> I question the veracity of that.
> Mostly around said client's use of an explicit proxy.

It is a clear side-effect of the fact that tunnels cannot be opened by requesting an origin-form URL (eg "/index.html"). They require an authority-form URI (eg "example.com:80").

See https://www.rfc-editor.org/rfc/rfc9110.html#name-intermediaries for definitions of intermediary and role scopes. Note that it explicitly says (requires) absolute-URI for "proxy" (aka forward-proxy) intermediaries. Clients do not speak origin-form to explicit proxies. [yes I know the first paragraph says an intermediary may switch behaviour based on just the request, that is for HTTP/2+. Squid being 1.1 is more restricted by the legacy issues].

>> Port is just a number, it can be anything *IF* it is made explicit. The scheme determines what protocol syntax is being spoken and thus what restrictions and/or requirements are. ... and so the protocol for talking to
Re: [squid-users] Does Squid support client ssl termination?
On 2022-11-02 07:49, Grant Taylor wrote:
> On 11/1/22 11:33 AM, squid3 wrote:
>> That is not true as a blanket statement.
>
> Please clarify which statement / who you are addressing. It seems as if you're addressing mingheng (copied below for convenience):

Yes I was addressing mingheng's statement.

> On 10/31/22 7:32 PM, mingheng wang wrote:
>> I delved into the configuration the last few days, and found that Squid doesn't officially support cache_peer when ssl_bump is in use.
>
> But you may be addressing my statement (...):
>
> On 11/1/22 10:44 AM, Grant Taylor wrote:
>> That surprises me. I wonder if it's a technical limitation or an oversight.
>
> On 11/1/22 11:33 AM, squ...@treenet.co.nz wrote:
>> What Squid officially *does not* support is decrypting traffic then sending the un-encrypted form to a HTTP-only cache_peer.
>
> Please elaborate. I'm trying to develop a mental model of what is and is not supported with regard to client / proxy / server communications. I'm unclear on how this applies to the two potential HTTPS streams; client-to-proxy and proxy-to-server.

Okay, some info that may help with that mental model...

The first thing you need to do is avoid that "HTTPS" term. It has multiple meanings and they cause confusion. Instead decompose it into its TLS and HTTP layers.

* A client can use TCP or TLS to connect to a proxy.
  - this is configured with http_port vs https_port

* Independently of the connection type the client can request http:// or https:// URLs or CONNECT tunnels.

* Independent of what the client is doing/requesting, a cache_peer may be connected to using TCP or TLS.
So "HTTPS" can mean any one of things like:
1) HTTP-over-TLS (how Browsers handle https:// URLs)
2) HTTP-over-TLS (sending http:// URLs over a secure connection)
3) HTTP-over-TLS-over-TLS (relay (1) through a secure cache_peer)
4) HTTP-over-TLS-over-HTTP (relay (1), (2) or (3) through an insecure cache_peer via CONNECT tunnel)

Each agent along the chain can add or remove any number of transfer layers to the protocol X-over-Y stack. Although for efficiency most prefer to minimize the layering depth.

A typical web request may flow across the Internet through a chain of proxies like this:

client -(1)-> S1 =(4)=> S2 =(1)=> S3 -(2)-> O

C = origin client
S1 = forward-proxy
S2 = insecure relay proxy
S3 = TLS terminating reverse-proxy
O = origin server

> Or if this is more applicable to TLS-Bump on implicit / network transparent / intercepting proxies where the client thinks that it's talking HTTPS to the origin server and the proxy would really be downgrading security by stripping TLS.

It's *more* important with SSL-Bump 'bump' due to the interception nature of that operation. But also applies to other cases.

SSL-Bump implies interception of TLS
* intercept may happen at network level (port 443 redirect or NAT)
* intercept may be entirely within Squid (CONNECT tunnel unwrapped)

Decryption is independent of interception.
a) SSL-Bump 'bump' action performs decrypt (the others do not)
b) a TLS forward/explicit-proxy performs decrypt
c) a TLS reverse-proxy performs decrypt

Traffic from (a) case requires re-encrypt before sending, even if its URL indicates insecure protocols. Traffic from (b) MUST be re-encrypted when it is for a secure protocol eg https://, otherwise optional. Traffic from (c) SHOULD be encrypted on sending, but always optional.

The "re-encrypt" may take the form of TLS to the secure peer, or a CONNECT tunnel through any peer with TLS to whatever is at the other end of the tunnel.

> Here is my mental model based on my current understanding.
> Is the following diagram accurate?
>
>             +-------------+-------------+
>             | P2S-HTTP    | P2S-HTTPS   |
> +-----------+-------------+-------------+
> | C2P-HTTP  | supported   | supported   |
> +-----------+-------------+-------------+
> | C2P-HTTPS | unsupported | supported   |
> +-----------+-------------+-------------+
>
> C2P = Client to Proxy communication
> P2S = Proxy to server communication

Vaguely yes. There are three dimensions to the matrix; you only have two shown here. The box showing "unsupported" has "supported" in its other dimension.

All other permutations of inbound TCP/TLS, http:// or https:// URL, and outbound TCP/TLS should currently work to some degree. The more recent your Squid version the better it is.

ACK
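The TCP-vs-TLS cache_peer choice from the layering summary above looks roughly like this in squid.conf. A sketch with placeholder hostnames; note the `tls` flag is spelled `ssl` in older Squid releases:

```
# Plain TCP to an upstream proxy (HTTP directly on the socket):
cache_peer upstream.example.com parent 3128 0 no-query no-digest

# TLS to an upstream proxy (Squid opens TLS first, speaks HTTP inside it):
cache_peer secure-upstream.example.com parent 443 0 no-query no-digest tls
```

The flag's presence or absence is the entire "cache_peer tls options (or their absence)" decision: it only changes the transfer layering toward that peer, independently of what the client requested.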
Re: [squid-users] FW: Encrypted browser-Squid connection errors
On 11/1/22 1:24 PM, squ...@treenet.co.nz wrote:
> No I meant W3C. Back in the before times things were a bit messy.

Hum. I have more questions than answers. I'm not aware of W3C ever assigning ports. I thought it was /always/ IANA.

> Indeed, thus we cannot register it with IEFT/IANA now. The IANA http-alt port would probably be best if we did go official.

ACK

> You see my point I hope. A gateway proxy that returns an error to *every* request is not very good.

Except it's not "/every/ /request/". It's "/every/ /request/ /of/ /a/ /specific/ /type/" where type is an HTTP version. What does CloudFlare or any of the other big proxy services or even other proxy applications do if you send them an HTTP/1.0 or even HTTP/0.9 request without the associated Host: header?

> There is no "configured proxy" for this use-case. Those are the two most/extremely common instances of the problematic use-cases. All implicit use of proxy (or gateway) have the same issue.

How common is the (network) transparent / intercepting / implicit use of Squid (or any proxy for that matter)? All of the installs that I've worked on (both as a user and as an administrator) have been explicit / non-transparent.

> I think you are getting stuck with the subtle difference between "use for case X" and "use by default". ANY port number can be used for *some* use-case(s).

Sure.

> "by default" has to work for *all* use-cases.

I disagree.

> Note that you are now having to add a non-default port "8080" and path "/" to the URL to make it valid/accepted by the Browser.

You were already specifying the non-default-http port via the "http-alt://" scheme in your example.

> Clients speaking HTTP origin-form (the http:// scheme) are not permitted to request tunnels or equivalent gateway services. They can only ask for resource representations.

I question the veracity of that. Mostly around said client's use of an explicit proxy.

> Port is just a number, it can be anything *IF* it is made explicit.
> The scheme determines what protocol syntax is being spoken and thus what restrictions and/or requirements are. ... and so the protocol for talking to a webcache service is http-alt://. Whose default port is not 80 nor 443 for all the same reasons why Squid default listening port is 3128. If we wanted to we could easily switch Squid default port to http-alt/8080 without causing technical issues. But it would be annoying to update all the existing documentation around the Internet, so not worth the effort changing now.

> Ditto. Though the legacy install base has a long long long tail. 26 years after HTTP/1.0 came out and HTTP/0.9 still has use-cases alive.

Where is HTTP/0.9 still being used?

> Decreasing, but still a potentially significant amount of traffic seen by Squid in general.

Can you, or anyone else, quantify what "a potentially significant amount of traffic" is? Do these cases *really* /need/ to be covered by the /default/ configuration? Or can they be addressed by a variation from the default configuration?

> Ah, if you have been treating it like an irrelevant elephant that is your confusion. The "but not always" is a critical detail in the puzzle - its side-effects are the answer to your initial question of *why* Squid defaults to X instead of 80/443.

I have no problems using non-default for the "but not always" configurations.

--
Grant. . . . unix || die
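The origin-form ambiguity running through this thread (an intercepting proxy gets only a request line plus an optional Host header) can be made concrete with a few lines of code. A sketch; `intended_origin` is an illustrative helper, not Squid code:

```python
def intended_origin(raw_request: bytes):
    """Try to recover the origin server a raw HTTP request was aimed at.

    Absolute-form targets carry the origin in the URL; origin-form
    targets need a Host header; HTTP/1.0 and 0.9 requests without one
    leave the origin unrecoverable."""
    lines = raw_request.decode("ascii").split("\r\n")
    method, target, *_ = lines[0].split(" ")
    if target.startswith("http://"):
        return target.split("/")[2]     # absolute-form names the origin
    for line in lines[1:]:
        if line.lower().startswith("host:"):
            return line.split(":", 1)[1].strip()
    return None                         # origin info lost on the wire

assert intended_origin(b"GET http://example.com/ HTTP/1.1\r\n\r\n") == "example.com"
assert intended_origin(b"GET / HTTP/1.1\r\nHost: example.net\r\n\r\n") == "example.net"
assert intended_origin(b"GET / HTTP/1.0\r\n\r\n") is None
```

The last case is exactly Amos's "GET / HTTP/1.0" example: nothing in the bytes distinguishes a request meant for example.com from one meant for example.net.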
Re: [squid-users] FW: Encrypted browser-Squid connection errors
On 2022-11-01 11:38, Grant Taylor wrote:
> On 10/30/22 6:59 AM, squ...@treenet.co.nz wrote:
>> Duane W. would be the best one to ask about the details. What I know is that some 10-12 years ago I discovered a message by Duane mentioning that W3C had (given or accepted) port 3128 for Squid use. I've checked the squid-cache archives and not seeing the message. Right now it looks like the W3C changed their systems and only track the standards documents. So I cannot reference their (outdated?) protocol registry :-{ .
>
> Also checked the squid-cache archives and not finding it in email history. Sorry. Did you by chance mean IANA?

No I meant W3C. Back in the before times things were a bit messy.

> I looked and 3128 is registered to something other than Squid.

Indeed, thus we cannot register it with IEFT/IANA now. The IANA http-alt port would probably be best if we did go official.

> Nor did their search bring anything up for Squid.
>
> I mean "authority" as used by HTTP specification, which refers to https://www.rfc-editor.org/rfc/rfc3986#section-3.2

Yes exactly. That is the source of the problem, perpetuated by the need to retain on-wire byte/octet backward compatibility until HTTP/2 changed to binary format.

>> Consider what the proxy has to do when (not if) the IP:port being connected to are that proxy's (eg localhost:80) and the URL is only a path ("/") on an origin server somewhere else. Does the "GET / HTTP/1.0" mean "http://example.com/" or "http://example.net/" ?
>
> I would hope that it would return an error page, much like Squid does when it can't resolve a domain name or the connection times out.

You see my point I hope. A gateway proxy that returns an error to *every* request is not very good.

>> The key point is that the proxy host:port and the origin host:port are two different authority and only the origin may be passed along in the URL (or URL+Host header).
>
> Agreed.
>> When the client uses port 80 and 443 thinking they are origin services it is *required* (per https://www.rfc-editor.org/rfc/rfc9112.html#name-origin-form) to omit the real origin's info. Enter problems.
>
> Why would a client (worth its disk space) ever conflate the value of its configured proxy as the origin server?

There is no "configured proxy" for this use-case.

> I can see a potential for confusion when using (network) transparent / intercepting proxies.

Those are the two most/extremely common instances of the problematic use-cases. All implicit use of proxy (or gateway) have the same issue.

>> The defaults though are tuned for origin server (or reverse-proxy) direct contact.
>
> I don't see how that precludes their use for (forward) proxy servers.

I think you are getting stuck with the subtle difference between "use for case X" and "use by default". ANY port number can be used for *some* use-case(s). "by default" has to work for *all* use-cases.

>> No Browser I know supports "http-alt://proxy.example.com?http://origin.example.net/index.html" URLs.
>
> But I bet that many browsers would support: http://proxy.example.com:8080/?http://origin.example.net/index.html

Note that you are now having to add a non-default port "8080" and path "/" to the URL to make it valid/accepted by the Browser. Clients speaking HTTP origin-form (the http:// scheme) are not permitted to request tunnels or equivalent gateway services. They can only ask for resource representations.

> Also, I'm talking about "http://" and "https://" using their default ports of 80 & 443.

Port is just a number, it can be anything *IF* it is made explicit. The scheme determines what protocol syntax is being spoken and thus what restrictions and/or requirements are. ... and so the protocol for talking to a webcache service is http-alt://. Whose default port is not 80 nor 443 for all the same reasons why Squid default listening port is 3128.
If we wanted to we could easily switch Squid default port to http-alt/8080 without causing technical issues. But it would be annoying to update all the existing documentation around the Internet, so not worth the effort changing now.

It is based on experience. Squid used to be a lot more lenient and tried for decades to do the syntax auto-detection. The path from that to separate ports is littered with CVEs. Most notably the curse that keeps on giving: CVE-2009-0801, which is just the trigger issue for a whole nest of bad side effects.

> I wonder how much of that problematic history was related to HTTP/0.9 vs HTTP/1.0 vs HTTP/1.1 clients.

Ditto. Though the legacy install base has a long long long tail. 26 years after HTTP/1.0 came out and HTTP/0.9 still has use-cases alive.

> I similarly wonder how much HTTP/1.0, or even HTTP/0.9, protocol is used these days.

Decreasing, but still a potentially significant amount of traffic seen by Squid in general.

> Also, there is the elephant in the room of we're talking about a proxy server which is
Re: [squid-users] Does Squid support client ssl termination?
On 11/1/22 13:33, squ...@treenet.co.nz wrote: On 2022-11-02 05:44, Grant Taylor wrote: On 10/31/22 7:32 PM, mingheng wang wrote: I delved into the configuration the last few days, and found that Squid doesn't officially support cache_peer when ssl_bump is in use. That surprises me. I wonder if it's a technical limitation or an oversight. That is not true as a blanket statement. Agreed. What Squid officially *does not* support is decrypting traffic then sending the un-encrypted form to an HTTP-only cache_peer. Yes, if we are still talking about Squid that does SslBump. Outside of SslBump, "decrypting traffic then sending the un-encrypted form to an HTTP-only cache_peer" should be supported: A combination of https_port forward proxy (i.e. no SslBump!) and plain text cache_peer should work. I have not tested that, but there is no technical reason to prohibit that and, arguably, there is no policy reason to prohibit that either. All other permutations of inbound TCP/TLS, http:// or https:// URL, and outbound TCP/TLS should currently work to some degree. The more recent your Squid version the better it is. The other thing that is not yet supported is "TLS inside TLS". That is, a combination of SslBump and a TLS cache_peer. That is a purely technical limitation. HTH, Alex. ___ squid-users mailing list squid-users@lists.squid-cache.org http://lists.squid-cache.org/listinfo/squid-users
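As noted above, this combination is untested; a minimal squid.conf sketch of it might look like the following (certificate paths, peer name, and ports are placeholders; option names per Squid 4+):

```
# TLS from the client to Squid: explicit forward proxy, no SslBump.
https_port 3128 tls-cert=/etc/squid/proxy-cert.pem tls-key=/etc/squid/proxy-key.pem

# Plain-text HTTP from Squid to the parent cache.
cache_peer parent.example.net parent 3128 0 default

# Force traffic through the peer rather than going direct.
never_direct allow all
```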
Re: [squid-users] Does Squid support client ssl termination?
On 11/1/22 11:33 AM, squ...@treenet.co.nz wrote: That is not true as a blanket statement. Please clarify which statement / who you are addressing. It seems as if you're addressing mingheng (copied below for convenience): On 10/31/22 7:32 PM, mingheng wang wrote: I delved into the configuration the last few days, and found that Squid doesn't officially support cache_peer when ssl_bump is in use. But you may be addressing my statement (...): On 11/1/22 10:44 AM, Grant Taylor wrote: That surprises me. I wonder if it's a technical limitation or an oversight. On 11/1/22 11:33 AM, squ...@treenet.co.nz wrote: What Squid officially *does not* support is decrypting traffic then sending the un-encrypted form to a HTTP-only cache_peer. Please elaborate. I'm trying to develop a mental model of what is and is not supported with regard to client / proxy / server communications. I'm unclear on how this applies to the two potential HTTPS streams; client-to-proxy and proxy-to-server. Or if this is more applicable to TLS-Bump on implicit / network transparent / intercepting proxies where the client thinks that it's talking HTTPS to the origin server and the proxy would really be downgrading security by stripping TLS. Here is my mental model based on my current understanding. Is the following diagram accurate?

+-----------+-------------+-----------+
|           | P2S-HTTP    | P2S-HTTPS |
+-----------+-------------+-----------+
| C2P-HTTP  | supported   | supported |
+-----------+-------------+-----------+
| C2P-HTTPS | unsupported | supported |
+-----------+-------------+-----------+

C2P = Client to Proxy communication P2S = Proxy to Server communication All other permutations of inbound TCP/TLS, http:// or https:// URL, and outbound TCP/TLS should currently work to some degree. The more recent your Squid version the better it is. ACK -- Grant. . . . unix || die
Re: [squid-users] Problem with wss protocol.
On 2022-11-02 06:59, Armando Ramos Roche wrote: Hi all. I was working with squid v3.3 on ubuntu 18.04, after migrating to ubuntu 20.04 a few months ago, squid was updated to version 3.5, currently version 3.5.27. And I have realized that nothing that uses the wss or ws protocol works for me, for example whatsapp, messenger etc... I've searched the logs, but nothing shows up. Not showing up in logs, even as a failed or rejected transaction is a sign that it is not going to Squid. From the syntax shown by Firefox it looks to me like HTTP/2 or HTTP/3. Which also means it is probably not going to Squid. Amos
[squid-users] Problem with wss protocol.
Hi all. I was working with squid v3.3 on ubuntu 18.04, after migrating to ubuntu 20.04 a few months ago, squid was updated to version 3.5, currently version 3.5.27. And I have realized that nothing that uses the wss or ws protocol works for me, for example whatsapp, messenger etc... I've searched the logs, but nothing shows up. I am not doing SSL Bump. And I've done some searching on the internet and can't find anything to help me. It does not even get a response from the server, it does not even leave a trace in the log Here the request in firefox: { "GET": { "scheme": "wss", "host": "web.whatsapp.com", "filename": "/ws/chat" } } { "Request headers (585 B)": { "headers": [ { "name": "Accept", "value": "*/*" }, { "name": "Accept-Encoding", "value": "gzip, deflate, br" }, { "name": "Accept-Language", "value": "es-ES,es;q=0.8,en-US;q=0.5,en;q=0.3" }, { "name": "Cache-Control", "value": "no-cache" }, { "name": "Connection", "value": "keep-alive, Upgrade" }, { "name": "DNT", "value": "1" }, { "name": "Host", "value": "web.whatsapp.com" }, { "name": "Origin", "value": "https://web.whatsapp.com" }, { "name": "Pragma", "value": "no-cache" }, { "name": "Sec-Fetch-Dest", "value": "websocket" }, { "name": "Sec-Fetch-Mode", "value": "websocket" }, { "name": "Sec-Fetch-Site", "value": "same-origin" }, { "name": "Sec-WebSocket-Extensions", "value": "permessage-deflate" }, { "name": "Sec-WebSocket-Key", "value": "CahSZ7V991nVOR4e+FTLIg==" }, { "name": "Sec-WebSocket-Version", "value": "13" }, { "name": "Upgrade", "value": "websocket" }, { "name": "User-Agent", "value": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:106.0) Gecko/20100101 Firefox/106.0" } ] } }
Re: [squid-users] Does Squid support client ssl termination?
On 2022-11-02 05:44, Grant Taylor wrote: On 10/31/22 7:32 PM, mingheng wang wrote: Sorry about that, don't know why it only went to you. Things happen. That's why I let people know, in case unwanted things did happen. I delved into the configuration the last few days, and found that Squid doesn't officially support cache_peer when ssl_bump is in use. That surprises me. I wonder if it's a technical limitation or an oversight. That is not true as a blanket statement. What Squid officially *does not* support is decrypting traffic then sending the un-encrypted form to a HTTP-only cache_peer. All other permutations of inbound TCP/TLS, http:// or https:// URL, and outbound TCP/TLS should currently work to some degree. The more recent your Squid version the better it is. Amos
Re: [squid-users] Does Squid support client ssl termination?
On 10/31/22 7:32 PM, mingheng wang wrote: Sorry about that, don't know why it only went to you. Things happen. That's why I let people know, in case unwanted things did happen. I delved into the configuration the last few days, and found that Squid doesn't officially support cache_peer when ssl_bump is in use. That surprises me. I wonder if it's a technical limitation or an oversight. Actually, I can't find a single tool in the market that can just encrypt any HTTP connection, "converting" it to an HTTPS connection. I'm reading RFCs and documentation to write my own proxy. That really surprises me. It's not a general proxy, but this seems like something that stunnel will do. (Either direction HTTPS <-> HTTP and HTTP <-> HTTPS.) This is what still confuses me. A reverse proxy is supposed to proxy a web site. At least that's what I learnt from Nginx and Haproxy's documentation. I'll read more on this when I have time. I think of forward and reverse proxies as doing quite similar things with the primary difference being where in the path they are and how many sites will be accessed. Forward: (C)---(P)---(Big Bad Internet)-(S) Reverse: (C)-(Big Bad Internet)---(P)---(S) Both take requests from clients and pass them to (what the proxy thinks is) the server. But the forward proxy interfaces between relatively few clients and significantly more servers; conversely, the reverse proxy interfaces with significantly more clients and relatively few servers. The reverse proxy tends to be explicitly configured with where its servers are, while the forward proxy relies on standard name resolution, usually DNS, to find them. So, on one level, what the forward and reverse proxy do is similar, but how they do it is subtly different. Then there's this: Both: (C)---(P)---(Big Bad Internet)---(P)---(S) Where in both a client side forward proxy /and/ a server side reverse proxy are in use. }:-) This really is just both technologies being independently used at each end. 
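For the stunnel suggestion above, a minimal stunnel.conf sketch of wrapping plain HTTP into HTTPS might look like this (the section name, addresses, and origin host are placeholder assumptions):

```
; Accept plain-text HTTP on localhost and re-originate each connection
; as TLS toward an HTTPS server. "client = yes" puts stunnel in client
; mode, i.e. it does the TLS wrapping on the outbound side.
[wrap-http]
client  = yes
accept  = 127.0.0.1:8080
connect = origin.example.net:443
```

The reverse direction (terminate TLS, forward plain HTTP) is the same idea with `client = no`, which is stunnel's default server mode.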
Very tough network environment. They can even somehow detect a confidential file going through the gateway, even with TLS. I'm not going to ask questions. -- Grant. . . . unix || die