Re: [squid-users] FW: Encrypted browser-Squid connection errors

2022-11-01 Thread squid3

On 2022-11-02 15:35, Grant Taylor wrote:

On 11/1/22 6:27 PM, squid3 wrote:
The working ones deliver an HTTP/1.1 302 redirect to their company's 
homepage if the request came from outside the company LAN. If the 
request came from an administrator's machine it may respond with stats 
data about the node being probed.


I suspect that Squid et al. could do similar.  ;-)



Yes, they can be configured to do so if you need it.

Neither outcome avoids the problem that the client was trying to 
interact with an entirely different resource on another server, whose 
info has been implicitly lost to the protocol syntax.




I take it from your statement you have not worked on networks like 
web-cafes, airports, schools, hospitals, or public shopping malls, which 
all use captive portal systems, or high-security institutions capturing 
traffic for personnel activity audits.


I have worked in schools, and other public places, some of which had a 
captive portal that intercepted to a web server to process registration 
or flat blocked non-proxied traffic.  The proxy server in those cases 
was explicit.




They missed a trick then. If the registration process is simple, it can 
be done by Squid with a session helper and two listening ports. We even 
ship some ERR_AGENT_* templates for captive portal use.
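
A minimal sketch of the registration part, using the bundled 
ext_session_acl helper (the helper path, database path and portal URL 
are hypothetical; real deployments vary):

 # clients without an active session get redirected to the portal
 external_acl_type session ttl=60 %SRC /usr/lib/squid/ext_session_acl -t 3600 -b /var/lib/squid/session.db
 acl existing_session external session
 http_access deny !existing_session
 # %u carries the originally requested URL to the portal
 deny_info 302:http://portal.example.com/register?url=%u existing_session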





The current default doesn't work on servers using NLD Active API 
Server.



Reference? Google is not providing me with anything HTTP-capable by that 
name or the obvious subsets.



And you were specifying the non-default 'http-alt' port via the 
"http://" scheme in yours.
Either way these are two different HTTP syntaxes with different "default 
port" values.



An agent supporting the http:// URL treats it as a request for some 
resource at the HTTP origin server indicated by the URL authority part 
or Host header.


An agent supporting the http-alt:// URL treats it as a request to 
forward-proxy the request-target specified in the URL query segment, 
using the upstream proxy indicated by the URL authority part or Host 
header.


If I'm understanding correctly, this is a case of someone asking Bob to 
connect to Bob.  That's not a thing.  Just talk directly to Bob.


  http-alt://bob?http://alice/some/resource
This instructs a client to ask the proxy (Bob) to fetch /some/resource 
from the origin (Alice). All the client "explicit configuration" is in 
the URL, rather than in client config files or environment variables.





The ones I am aware of are:
  * HTTP software testing and development
  * IoT sensor polling
  * printer network bootstrapping
  * manufacturing controller management
  * network stability monitoring systems


Why is anything developed in the last two decades green fielding with 
HTTP/0.9?!?!?!




The IoT stuff at least. The others are getting old, but more like 10+ 
years rather than 20+.



Cheers
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Does Squid support client ssl termination?

2022-11-01 Thread squid3

On 2022-11-02 13:58, mingheng wang wrote:

On Wed, Nov 2, 2022 at 6:17 AM squid3 wrote:


SSL-Bump implies interception of TLS
  * intercept may happen at network level (port 443 redirect or NAT)
  * intercept may be entirely within Squid (CONNECT tunnel unwrapped)

Decryption is independent of interception.
  a) SSL-Bump 'bump' action performs decrypt (the others do not)
  b) a TLS forward/explicit-proxy performs decrypt
  c) a TLS reverse-proxy performs decrypt

Traffic from (a) case requires re-encrypt before sending, even if its
URL indicates insecure protocols.

I don't understand. According to the wiki on Squid that I read, there 
are several steps involving "peek", "bump" or "splice" etc; we can 
already choose to bump or splice through SNI at step 2. So why does 
HTTP have to be encrypted too?


Those "steps" are points along the TLS handshake sequence, the actions 
are things Squid can be asked to do at each step.
The peek/splice/stare/terminate actions do not decrypt, so do not 
matter.


The 'bump' action uses details from the origin TLS server certificate 
when initiating the TLS session between client and Squid. That means a) 
there needs to be a TLS server to fetch those details from, and b) the 
decrypted traffic can only be sent to that TLS server. Thus delivery of 
traffic to the server requires re-encryption with the security keys 
'bump' has already negotiated with the server (so your split-in-half 
idea breaks).
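
In squid.conf terms the usual way to ask for that sequence is something 
like this sketch (illustrative only):

 acl step1 at_step SslBump1
 ssl_bump peek step1
 ssl_bump bump all

Here the peek at step 1 gathers the TLS details (eg SNI) that the later 
'bump' action needs.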



These limits are all specific to SSL-Bump decrypted traffic. Different 
details/restrictions apply to Squid operating as TLS reverse-proxy or 
TLS explicit forward-proxy. I assume that you have already considered 
those setups before settling on SSL-Bump intercepting TLS.



HTH
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] FW: Encrypted browser-Squid connection errors

2022-11-01 Thread squid3

On 2022-11-02 09:03, Grant Taylor wrote:

On 11/1/22 1:24 PM, squid3 wrote:

No I meant W3C. Back in the before times things were a bit messy.


Hum.  I have more questions than answers.  I'm not aware of W3C ever 
assigning ports.  I thought it was /always/ IANA.


Indeed, thus we cannot register it with IETF/IANA now. The IANA 
http-alt port would probably be best if we did go official.


ACK

You see my point I hope. A gateway proxy that returns an error to 
*every* request is not very good.


Except it's not "/every/ /request/".  It's "/every/ /request/ /of/ /a/ 
/specific/ /type/" where type is an HTTP version.




No, you cropped my use-case description. It specified a client which was 
*unaware* that it was talking to a forward-proxy. Such a client will 
send requests that only a reverse-proxy or origin server can handle 
properly - because they have explicit special configuration to do so.


In all proxying cases there is special configuration somewhere. For 
forward-proxy it is in the client (or its OS so-called "default"), for 
reverse-proxy it is in the proxy, for interception-proxy it is in both 
the network and the proxy.



What does CloudFlare or any of the other big proxy services or even 
other proxy applications do if you send them an HTTP/1.0 or even 
HTTP/0.9 request without the associated Host: header?




The working ones deliver an HTTP/1.1 302 redirect to their company's 
homepage if the request came from outside the company LAN. If the 
request came from an administrator's machine it may respond with stats 
data about the node being probed.




There is no "configured proxy" for this use-case.

Those are two extremely common instances of the problematic 
use-cases. All implicit uses of a proxy (or gateway) have the same issue.


How common is the (network) transparent / intercepting / implicit use 
of Squid (or any proxy for that matter)?


All of the installs that I've worked on (both as a user and as an 
administrator) have been explicit / non-transparent.




Almost all the installs I have worked on had interception as part of 
their configuration. It is officially recommended to include 
interception as a backup to explicit forward-proxy for networks needing 
full traffic control and/or monitoring.


I take it from your statement you have not worked on networks like 
web-cafes, airports, schools, hospitals, or public shopping malls, which 
all use captive portal systems, or high-security institutions capturing 
traffic for personnel activity audits.


There are also at least a half dozen nation states with national 
firewalls doing traffic monitoring and censorship. At least 3 of the 
ones I know of use Squid for the HTTP portion.



I think you are getting stuck on the subtle difference between "use 
for case X" and "use by default".


ANY port number can be used for *some* use-case(s).


Sure.


"by default" has to work for *all* use-cases.


I disagree.



ACK. That is you. I am coming at this from the maintainer viewpoint 
where the entire community's needs have to be balanced.



Note that you are now having to add a non-default port "8080" and path 
"/" to the URL to make it valid/accepted by the Browser.


You were already specifying the non-default-http port via the 
"http-alt://" scheme in your example.




And you were specifying the non-default 'http-alt' port via the 
"http://" scheme in yours.
Either way these are two different HTTP syntaxes with different "default 
port" values.



An agent supporting the http:// URL treats it as a request for some 
resource at the HTTP origin server indicated by the URL authority part 
or Host header.


An agent supporting the http-alt:// URL treats it as a request to 
forward-proxy the request-target specified in the URL query segment, 
using the upstream proxy indicated by the URL authority part or Host 
header.



Clients speaking HTTP origin-form (the http:// scheme) are not 
permitted to request tunnels or equivalent gateway services. They can 
only ask for resource representations.


I question the veracity of that.  Mostly around said client's use of an 
explicit proxy.




It is a clear side-effect of the fact that tunnels cannot be opened by 
requesting an origin-form URL (eg "/index.html"). They require an 
authority-form URI (eg "example.com:80").
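
To illustrate (hypothetical host), the two request-target forms look 
like this on the wire:

  GET /index.html HTTP/1.1            (origin-form: fetch a resource)
  CONNECT example.com:443 HTTP/1.1    (authority-form: open a tunnel)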


See https://www.rfc-editor.org/rfc/rfc9110.html#name-intermediaries for 
definitions of intermediary and role scopes.
Note that it explicitly says (requires) absolute-URI for "proxy" (aka 
forward-proxy) intermediaries. Clients do not speak origin-form to 
explicit proxies.


[yes I know the first paragraph says an intermediary may switch 
behaviour based on just the request, that is for HTTP/2+. Squid being 
1.1 is more restricted by the legacy issues].




Port is just a number, it can be anything *IF* it is made explicit.
The 

Re: [squid-users] Does Squid support client ssl termination?

2022-11-01 Thread squid3

On 2022-11-02 07:49, Grant Taylor wrote:

On 11/1/22 11:33 AM, squid3 wrote:

That is not true as a blanket statement.


Please clarify which statement / who you are addressing.

It seems as if you're addressing mingheng (copied below for 
convenience):




Yes I was addressing mingheng's statement.



On 10/31/22 7:32 PM, mingheng wang wrote:
I delved into the configuration the last few days, and found that 
Squid doesn't officially support cache_peer when ssl_bump is in use.


But you may be addressing my statement (...):

On 11/1/22 10:44 AM, Grant Taylor wrote:
That surprises me.  I wonder if it's a technical limitation or an 
oversight.



On 11/1/22 11:33 AM, squ...@treenet.co.nz wrote:
What Squid officially *does not* support is decrypting traffic then 
sending the un-encrypted form to an HTTP-only cache_peer.


Please elaborate.  I'm trying to develop a mental model of what is and 
is not supported with regard to client / proxy / server communications. 
I'm unclear on how this applies to the two potential HTTPS streams; 
client-to-proxy and proxy-to-server.


Okay, some info that may help with that mental model...

The first thing you need to do is avoid that "HTTPS" term. It has 
multiple meanings and they cause confusion. Instead decompose it into 
its TLS and HTTP layers.


* A client can use TCP or TLS to connect to a proxy.
 - this is configured with http_port vs https_port

* Independently of the connection type the client can request http:// or 
https:// URLs or CONNECT tunnels.


* Independent of what the client is doing/requesting, a cache_peer may 
be connected to using TCP or TLS.

 - this is configured with cache_peer tls options (or their absence)

* Independent of anything else, a cache_peer MAY be asked to open a 
CONNECT tunnel for opaque uses.

 - this is automatically decided by Squid based on various criteria.
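
Putting those dimensions together, a minimal squid.conf sketch (the peer 
hostnames and certificate path are hypothetical, and the tls options 
assume Squid-4 or later):

 # client-to-proxy: plain TCP on one port, TLS on another
 http_port 3128
 https_port 3129 tls-cert=/etc/squid/proxy.pem

 # proxy-to-peer: one plain-TCP peer, one TLS peer
 cache_peer peer1.example.net parent 3128 0 no-query
 cache_peer peer2.example.net parent 443 0 no-query tls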


TCP is the foundation layer. On top of that can be HTTP transfer or TLS 
transfer. Transfer layers can be nested infinitely deep in any order.


So "HTTPS" can mean any one of things like:
 1) HTTP-over-TLS (how Browsers handle https:// URLs)
 2) HTTP-over-TLS (sending http:// URLs over a secure connection)
 3) HTTP-over-TLS-over-TLS (relay (1) through a secure cache_peer)
 4) HTTP-over-TLS-over-HTTP (relay (1), (2) or (3) through an insecure 
cache_peer via CONNECT tunnel)


Each agent along the chain can add or remove any number of transfer 
layers to the protocol X-over-Y stack. Although for efficiency most 
prefer to minimize the layering depth.


A typical web request may flow across the Internet through a chain of 
proxies like this:


 C -(1)-> S1 =(4)=> S2 =(1)=> S3 -(2)-> O

 C = origin client
 S1 = forward-proxy
 S2 = insecure relay proxy
 S3 = TLS terminating reverse-proxy
 O = origin server


 Or if this is more applicable to TLS-Bump on implicit / network 
transparent / intercepting proxies where the client thinks that it's 
talking HTTPS to the origin server and the proxy would really be 
downgrading security by stripping TLS.




It's *more* important with SSL-Bump 'bump' due to the interception 
nature of that operation, but it also applies to other cases.


SSL-Bump implies interception of TLS
 * intercept may happen at network level (port 443 redirect or NAT)
 * intercept may be entirely within Squid (CONNECT tunnel unwrapped)

Decryption is independent of interception.
 a) SSL-Bump 'bump' action performs decrypt (the others do not)
 b) a TLS forward/explicit-proxy performs decrypt
 c) a TLS reverse-proxy performs decrypt

Traffic from (a) case requires re-encrypt before sending, even if its 
URL indicates insecure protocols.
Traffic from (b) MUST be re-encrypted when it is for a secure protocol 
eg https://, otherwise optional.

Traffic from (c) SHOULD be encrypted on sending, but always optional.

The "re-encrypt" may take the form of TLS to the secure peer, or a 
CONNECT tunnel through any peer with TLS to whatever is at the other end 
of the tunnel.



Here is my mental model based on my current understanding.  Is the 
following diagram accurate?


            +-------------+-----------+
            |  P2S-HTTP   | P2S-HTTPS |
+-----------+-------------+-----------+
| C2P-HTTP  |  supported  | supported |
+-----------+-------------+-----------+
| C2P-HTTPS | unsupported | supported |
+-----------+-------------+-----------+
  C2P = Client to Proxy communication
  P2S = Proxy to Server communication



Vaguely yes. There are three dimensions to the matrix; you only have two 
shown here.

The box showing "unsupported" has "supported" in its other dimension.


All other permutations of inbound TCP/TLS, http:// or https:// URL, 
and outbound TCP/TLS should currently work to some degree. The more 
recent your Squid version the better it is.


ACK



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] FW: Encrypted browser-Squid connection errors

2022-11-01 Thread squid3

On 2022-11-01 11:38, Grant Taylor wrote:

On 10/30/22 6:59 AM, squ...@treenet.co.nz wrote:

Duane W. would be the best one to ask about the details.

What I know is that some 10-12 years ago I discovered a message by 
Duane mentioning that W3C had (given or accepted) port 3128 for Squid 
use. I've checked the squid-cache archives and am not seeing the message.


Right now it looks like the W3C changed their systems and now only track 
the standards documents. So I cannot reference their (outdated?) 
protocol registry :-{ . Also checked the squid-cache archives and am not 
finding it in the email history. Sorry.


Did you by chance mean IANA?


No I meant W3C. Back in the before times things were a bit messy.



I looked and 3128 is registered to something other than Squid.



Indeed, thus we cannot register it with IETF/IANA now. The IANA http-alt 
port would probably be best if we did go official.




Nor did their search bring anything up for Squid.

I mean "authority" as used by HTTP specification, which refers to 
https://www.rfc-editor.org/rfc/rfc3986#section-3.2


Yes exactly. That is the source of the problem, perpetuated by the 
need to retain on-wire byte/octet backward compatibility until HTTP/2 
changed to binary format.


Consider what the proxy has to do when (not if) the IP:port being 
connected to is that proxy's own (eg localhost:80) and the URL is only a 
path ("/") on an origin server somewhere else. Does the "GET / 
HTTP/1.0" mean "http://example.com/" or "http://example.net/" ?


I would hope that it would return an error page, much like Squid does 
when it can't resolve a domain name or the connection times out.


You see my point I hope. A gateway proxy that returns an error to 
*every* request is not very good.





The key point is that the proxy host:port and the origin host:port are 
two different authorities, and only the origin may be passed along in the 
URL (or URL+Host header).


Agreed.

When the client uses port 80 and 443 thinking they are origin services 
it is *required* (per 
https://www.rfc-editor.org/rfc/rfc9112.html#name-origin-form) to omit 
the real origin's info. Enter problems.


Why would a client (worth its disk space) ever conflate the value of 
its configured proxy as the origin server?




There is no "configured proxy" for this use-case.


I can see a potential for confusion when using (network) transparent / 
intercepting proxies.




Those are two extremely common instances of the problematic 
use-cases. All implicit uses of a proxy (or gateway) have the same issue.





The defaults though are tuned for origin server (or reverse-proxy) 
direct contact.


I don't see how that precludes their use for (forward) proxy servers.



I think you are getting stuck on the subtle difference between "use 
for case X" and "use by default".


ANY port number can be used for *some* use-case(s). "by default" has to 
work for *all* use-cases.



No Browser I know supports 
"http-alt://proxy.example.com?http://origin.example.net/index.html" 
URLs.


But I bet that many browsers would support:

   http://proxy.example.com:8080/?http://origin.example.net/index.html



Note that you are now having to add a non-default port "8080" and path 
"/" to the URL to make it valid/accepted by the Browser.


Clients speaking HTTP origin-form (the http:// scheme) are not permitted 
to request tunnels or equivalent gateway services. They can only ask for 
resource representations.



Also, I'm talking about "http://" and "https://" using their default 
ports of 80 & 443.




Port is just a number; it can be anything *IF* it is made explicit.
The scheme determines what protocol syntax is being spoken and thus what 
the restrictions and/or requirements are.


... and so the protocol for talking to a webcache service is 
http-alt://, whose default port is neither 80 nor 443, for all the same 
reasons that Squid's default listening port is 3128.


If we wanted to we could easily switch Squid's default port to 
http-alt/8080 without causing technical issues. But it would be annoying 
to update all the existing documentation around the Internet, so it is 
not worth the effort of changing now.





It is based on experience. Squid used to be a lot more lenient and 
tried for decades to do the syntax auto-detection. The path from that 
to separate ports is littered with CVEs. Most notably the curse that 
keeps on giving: CVE-2009-0801, which is just the trigger issue for a 
whole nest of bad side effects.


I wonder how much of that problematic history was related to HTTP/0.9 
vs HTTP/1.0 vs HTTP/1.1 clients.


Ditto. Though the legacy install base has a long, long tail. 26 
years after HTTP/1.0 came out, HTTP/0.9 still has live use-cases.




I similarly wonder how much HTTP/1.0, or even HTTP/0.9, protocol is 
used these days.


Decreasing, but still a potentially significant amount of traffic seen 
by Squid in general.




Also, there is the elephant in the room of we're talking about a proxy 
server which is 

Re: [squid-users] Problem with wss protocol.

2022-11-01 Thread squid3

On 2022-11-02 06:59, Armando Ramos Roche wrote:

Hi all.
I was working with squid v3.3 on ubuntu 18.04. After migrating to 
ubuntu 20.04 a few months ago, squid was updated to version 3.5, 
currently version 3.5.27.
And I have realized that nothing that uses the wss or ws protocol works 
for me, for example whatsapp, messenger etc...
I've searched the logs, but nothing shows up.


Not showing up in the logs, even as a failed or rejected transaction, is 
a sign that the traffic is not going to Squid.


From the syntax shown by Firefox it looks to me like HTTP/2 or HTTP/3, 
which also means it is probably not going to Squid.


Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Does Squid support client ssl termination?

2022-11-01 Thread squid3

On 2022-11-02 05:44, Grant Taylor wrote:

On 10/31/22 7:32 PM, mingheng wang wrote:

Sorry about that, don't know why it only went to you.


Things happen.  That's why I let people know, in case unwanted things 
did happen.


I delved into the configuration the last few days, and found that 
Squid doesn't officially support cache_peer when ssl_bump is in use.


That surprises me.  I wonder if it's a technical limitation or an 
oversight.




That is not true as a blanket statement.

What Squid officially *does not* support is decrypting traffic then 
sending the un-encrypted form to an HTTP-only cache_peer.


All other permutations of inbound TCP/TLS, http:// or https:// URL, and 
outbound TCP/TLS should currently work to some degree. The more recent 
your Squid version the better it is.



Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] FW: Encrypted browser-Squid connection errors

2022-10-30 Thread squid3

On 2022-10-23 06:10, Grant Taylor wrote:

On 10/21/22 11:30 PM, Amos Jeffries wrote:
Not just convention. AFAICT it was formally registered with W3C, before 
everyone went to using IETF for registrations.


Please elaborate on what was formally registered.  I've only seen 3128 
/ 3129 be the default for Squid (and a few things emulating squid).  
Other proxies of the time, namely Netscape's and Microsoft's 
counterparts, tended to use 8080.


I'd genuinely like to learn more about and understand the history / 
etymology / genesis of the 3128 / 3129.


Duane W. would be the best one to ask about the details.

What I know is that some 10-12 years ago I discovered a message by 
Duane mentioning that W3C had (given or accepted) port 3128 for Squid 
use. I've checked the squid-cache archives and am not seeing the message.


Right now it looks like the W3C changed their systems and now only track 
the standards documents. So I cannot reference their (outdated?) protocol 
registry :-{ . Also checked the squid-cache archives and am not finding 
it in the email history. Sorry.






FYI, discussion started ~30 years ago.


ACK


The problem:

For bandwidth savings HTTP/1.0 defined different URL syntax for origin 
and relay/proxy requests. The form sent to an origin server lacks any 
information about the authority. That was expected to be known 
out-of-band by the origin itself.


HTTP/1.1 has attempted several different mechanisms to fix this over 
the years. None of them has been universally accepted, so the problem 
remains. The best we have is mandatory Host header which most (but 
sadly not all) clients and servers use.


HTTP/2 cements that design with the mandatory ":authority" pseudo-header 
field. So the problem is "fixed" for native HTTP/2+ traffic. But until 
HTTP/1.0 and broken HTTP/1.1 clients are all gone the issue will still 
crop up.


I'm not entirely sure what you mean by "the authority".  I'm taking it 
to mean the identity of the service that you are wanting content from. 
The Host: header comment with HTTP/1.1 is what makes me think this.




I mean "authority" as used by HTTP specification, which refers to 
https://www.rfc-editor.org/rfc/rfc3986#section-3.2



My understanding is that neither HTTP/0.9 nor HTTP/1.0 had a Host: 
header and that it was assumed that the IP address you were connecting 
to conveyed the server that you were wanting to connect to.


Yes exactly. That is the source of the problem, perpetuated by the need 
to retain on-wire byte/octet backward compatibility until HTTP/2 changed 
to binary format.


Consider what the proxy has to do when (not if) the IP:port being 
connected to is that proxy's own (eg localhost:80) and the URL is only a 
path ("/") on an origin server somewhere else. Does the "GET / HTTP/1.0" 
mean "http://example.com/" or "http://example.net/" ?





More importantly the proxy hostname:port the client is opening TCP 
connections to may be different from the authority-info specified in 
the HTTP request message (or lack thereof).


My working understanding of what the authority is seems to still work 
with this.




The key point is that the proxy host:port and the origin host:port are 
two different authorities, and only the origin may be passed along in the 
URL (or URL+Host header). When the client uses port 80 and 443 thinking 
they are origin services it is *required* (per 
https://www.rfc-editor.org/rfc/rfc9112.html#name-origin-form) to omit 
the real origin's info. Enter problems.
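
To illustrate with a hypothetical origin, the same fetch in the two 
syntax forms:

  GET http://origin.example.com/ HTTP/1.1    (absolute-form, to a 
forward-proxy: the origin authority travels in the URL)

  GET / HTTP/1.1                             (origin-form, to the origin)
  Host: origin.example.com

In origin-form only the Host header carries the authority - and HTTP/1.0 
clients may not send one at all.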



This crosses security boundaries and involves out-of-band information 
sources at all three endpoints involved in the transaction for the 
message semantics and protocol negotiations to work properly.


I feel like the nature of web traffic tends to frequently, but not 
always, cross security / administrative boundaries.  As such, I don't 
think that existence of proxies in the communications path alters 
things much.


Please elaborate on what out-of-band information you are describing. 
The most predominant thing that comes to mind, particularly with 
HTTP/1.1 and HTTP/2 is name resolution -- ostensibly DNS -- to identify 
the IP address to connect to.




I refer to all the many ways the clients may be explicitly or implicitly 
configured to be aware that it is talking to a proxy - such that it 
explicitly avoids sending the problematic origin-form URLs.



What that text does not say is that when they are omitted by the 
**user** they are taken from configuration settings in the OS:


  * the environment variable name provides:
     - the protocol name ("http" or "HTTPS", aka plain-text or encrypted)
     - the expected protocol syntax/semantics ("proxy" aka forward-proxy)


  * the machine /etc/services configuration provides the default port 
for the named protocol.


Ergo the use of /default/ values when values are not specified.


The defaults though are tuned for origin server (or reverse-proxy) 
direct contact.
No Browser I know supports 

Re: [squid-users] Empty transfer-encoding header causes 502 response

2022-10-25 Thread squid3

On 2022-10-24 13:36, Matthew H wrote:

Hi,

I'm using Squid to proxy HTTP requests to another proxy. I can see 
squid sending the request to the parent and getting a response, but it 
sends the client that initiated the request a 502 Bad Gateway response.


That is correct behaviour. Squid does not know how to decode the content 
for delivery.




On closer inspection it appears the parent proxy is sending an empty 
transfer-encoding header, and this is causing Squid to send a 502. Is 
there any way to ignore this?



This MUST NOT be ignored. The server has explicitly indicated that the 
response content area is encoded, but not how. Squid cannot tell where 
the boundaries of the message content are, nor how to transform it for 
delivery to the client.


Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] where to put my own rules

2021-07-27 Thread squid3

On 2021-07-28 00:25, robert k Wild wrote:

is it best to put my "ssl bump" and "no ssl interception" rules under

# Recommended minimum Access Permission configuration:

or

# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS



Both of the above comments are about the ordering of http_access lines.
It is just a matter of convenience to put other custom config directives 
there as well.


The rules you are asking about do not (currently) matter where they go 
in regard to *placement*. What matters for them is that their *order* is 
correct for what needs to be achieved.


Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Parent Proxy and direct traffic

2021-07-27 Thread squid3

On 2021-07-26 23:05, jens.altrock wrote:

Hi!

I got a little Problem:

We have a proxy server that should route special requests to a parent
proxy and forward the rest to the standard gateway. I haven't found
any suitable and working configurations, so I'm asking here for help.


You appear to misunderstand some of the directives.

As a result your config currently forces Squid to ignore all cache_peer 
lines.





My configuration so far:






acl alwayspeer dstdomain EXAMPLE.COM:777



":777" is not part of any domain name.

This ACL can never produce a match result.

To check two different properties (domain and port) you need two 
different ACLs.


For example;
 acl example dstdomain example.com
 acl port777 port 777

 cache_peer_access PARENT_PROXY_SRV allow example port777
 never_direct allow example port777




cache deny all

cache_peer PARENT_PROXY_SRV parent 8080 7 proxy-only no-query

cache_peer_access PARENT_PROXY_SRV allow alwayspeer



Since "alwayspeer" is always false this line means the default for 
traffic going to this peer is "deny all".


With the ACL adjustments from above this would be:

 cache_peer_access PARENT_PROXY_SRV allow example port777




#http_access deny !Safe_ports

#http_access deny CONNECT !SSL_ports



Please restore those rules. They are protecting your proxy against being 
abused as a relay for DoS attacks against your network. They have 
nothing to do with routing of valid HTTP messages.





http_access allow localhost manager




http_access allow all Safe_ports

http_access allow all SSL_ports



Remove those two lines **urgently**.



never_direct deny alwayspeer

always_direct allow all



From the actions chosen I see you misunderstand these two directives.

"DIRECT" means using DNS (or equivalent) to locate and connect to origin 
server(s) from the URL domain name.


always_direct has precedence. So "allow all" means servers will *always* 
be found using URL domain and DNS instead of your config file and 
cache_peer lines.


  -> you need to remove the always_direct line.

never_direct means the URL domain / DNS lookup mechanism is *never* 
used. Only cache_peer entries have any possibility, and only when 
cache_peer_access rules also say allow.


  -> the 'action' field needs to be "allow" in order to force cache_peer 
to be used.


In both of these directives "deny" is simply a way to stop processing 
the directive lines before any more checks happen. eg, a way to put 
"except" or "unless" clauses into the logic.






http_access deny all



No http_access rules placed below this will be checked. You should 
remove this line.


FYI; the whole point of the include directive on the next line is so you 
can put your custom cache_peer and related rules into a file there and 
not worry about the OS Squid package fiddling with it.




include /etc/squid/conf.d/*

http_access allow localhost

http_access deny all






Problem is that direct traffic is working, but it doesn't redirect
EXAMPLE.COM:777 to the correct proxy server.

In the access.log I only see:

1627297417.299  31535 CLIENT_IP NONE/503 0 CONNECT EXAMPLE.COM:777 -
HIER_NONE/- -




Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] ICAP latency information, Bench-marking

2021-07-27 Thread squid3

On 2021-07-27 21:27, Jason Spashett wrote:

If you look at the squid logformat page you can find various
additional logging options available to start with, such as ICAP
processing time. This is a good place to start if you are not using a
custom format already:
http://www.squid-cache.org/Doc/config/logformat/

e.g.
squid_status=NONE_NONE:HIER_NONE time_response=7
time_icap_processing=6  tls_max=TLS/1.3 tls=TLS/1.3



Indeed. The access_log directive's format gives access to the metrics 
about the HTTP transactions.


There is also a specific icap_log directive for recording metrics of the 
ICAP service transactions.


Between those two directives and choice of the metrics you want to see 
you can discover most things about Squid vs external latency.



Amos



On Mon, 26 Jul 2021 at 21:15, roie rachamim  
wrote:


Hi,

Can i get information regarding latency of each ICAP, or even latency 
added by processing in squid ?


In addition which tool/method can we use in order to benchmark squid ?

Many Thanks,
Roie
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] refresh_pattern and "?"

2021-07-14 Thread squid3

On 2021-07-15 07:48, Vincent Tamet wrote:

You are totally right !
The problem was on my side with the acl regular expression used to
choose extensions to be cached:
acl images url_regex -i
\.(bmp|gif|ico|jpeg|jpg|png|svg|tif|tiff|webp)$
$ was not matching for '?query-string'



For this usage my advice is to have (|\?.*)$ as the end of the pattern.
So it looks like this:

  refresh_pattern -i \.(bmp|gif|ico|jpe?g|png|tiff?|webp)(|\?.*)$
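
The same tail applies to the url_regex ACL from your first message; a 
sketch using your original extension list:

  acl images url_regex -i \.(bmp|gif|ico|jpeg|jpg|png|svg|tif|tiff|webp)(|\?.*)$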



* Your answer helps me to find my error.
* And I am now thinking about changing the way of doing the cache
  (a special refresh for my acls or/and a default to 0 0% 0 to use the
TCP_REFRESH_UNMODIFIED).



FYI: the latest Squid versions have store_miss and send_hit that may 
help you out with the caching ACLs redesign. see 





Thank you very much :)
And sorry for the sound ! :(



Welcome. No worries. Helping is what this mailing list is for.

Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Sharing info from eCAP adapter with other adapters

2021-07-14 Thread squid3

On 2021-07-15 09:11, Moti Berger wrote:


Meaning, it indeed added the X-My-Header as ICAP header for the
benefit of the ICAP server on the chain but it seems the value is just
a dot.
What am I doing wrong?


This is best asked via the libecap help channels (see below).



BTW, I'm struggling to find a decent eCAP interface documentation.


AFAICT, that is still apparently an item on the libecap official TODO 
list.


For now the best documentation I am aware of is Alex's responses in the 
Launchpad Answers () and FAQ 
().


The clearest (huh!) I've been able to find is at 
.




Can
you please help me understand what is the difference between 'option'
and 'visitEachOption' methods?


To quote Alex from the LP Answers discussion:
 "One is of requesting a single known option (i.e., meta header). The 
other is for iterating all options".




Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] refresh_pattern and "?"

2021-07-13 Thread squid3

On 2021-07-13 05:59, Vincent Tamet wrote:

Hi,

I would like to know how to deactivate the "?" refresh_pattern filter
?


There is no such filter, so "deactivate" has no meaning.

refresh_pattern is a directive that provides default values for the 
caching freshness heuristics defined by RFC 7234, for messages without 
the necessary Cache-Control or related header values.




(As most web pages nowaday should use cache-control or expire, I guess
the correct usage of headers should be enough to permit us to cache
requests with "?" !?


Yes. URLs containing '?query-string' are cached by Squid with the 
default squid.conf refresh_pattern settings.


The refresh_pattern line you noticed is to cope with servers that are 
very old and/or broken. You can remove it, but any of your clients 
visiting such a server will see the brokenness and probably blame Squid 
because "it works fine with just Browser X".



Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Refrain from Cache Manager API requests to reach ICAPs

2021-07-04 Thread squid3

On 2021-07-05 04:42, Alex Rousskov wrote:

On 7/4/21 8:44 AM, Moti Berger wrote:


I established an environment with Squid and Datadog.
It periodically calls the endpoint:

/squid-internal-mgr/counters


Those requests are also sent to the ICAPs.
Is there a way to make Squid not to pass those requests to the ICAPs?


Yes, see the adaptation_access directive. Depending on how these cache
manager requests are accepted/authenticated/etc., you may use
urlpath_regex or other ACLs to identify them.


Use the built-in "manager" ACL in current Squid versions.
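
For example, a minimal sketch (the service name 'icap1' is a placeholder 
for whatever your icap_service line defines):

  # do not send cache manager requests to the ICAP service
  adaptation_access icap1 deny manager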

Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] How to execute external helpers for each request ?

2021-06-25 Thread squid3

On 2021-06-26 01:16, hoper wrote:

Hi again,


If Squid trusts stale user credentials (i.e. allows new requests with
stale cached credentials without revalidating them with your
authentication helper), then this is a Squid bug.


No, I don't think there is a bug here.
Because each time my helper is used by squid, it writes a line in a
dedicated log file.

A Squid bug, if any, would likely be later on.

Before we go any further: which version of Squid are you using?



And it seems to work well. In detail:

Let's say I have an account in my DB with: user1,password1,proxy1
As a client, I start my browser and connect myself with user1/password1.

In my helper log file, all is good and I can see that squid used the 
helper, and its answer was "OK proxychoice=proxy1".

Now I switch from proxy1 to proxy2 for user1 in the database.

On my browser, I'm still authenticated as user1, and I'm still using 
proxy1. (Ok, that's normal). Later, when the TTL is reached (2 minutes 
in the configuration I sent), I can see in my helper's log file that 
squid used it again. This time, the answer was: "OK proxychoice=proxy2". 
So, all seems good here too.

But the routing didn't change. The parent proxy used after 2 minutes is 
still proxy1, and it never changes until I restart squid.

I hope to have better explained the problem. So, do you think there is a 
bug somewhere, or do we have a configuration problem? How can we obtain 
the result we are looking for? (Squid should change the parent proxy if 
needed after the authentication TTL period.)



You seem to think that user credentials are thrown away when they reach 
TTL. That is not true.


What actually happens is that shortly *before* TTL is reached they enter 
a grace period during which they will be refreshed using the helper. The 
info the helper provides is then used to *update* the existing 
credentials.


Also, the foo= annotations are additive by default. On more detailed 
inspection you will find the user has become allowed as "proxy1" *OR* 
"proxy2".





Insufficient demand for that feature does not allow me to provide a
reliable ETA at this time.


Do you have a vague idea of the cost of the development of this 
feature?




I'm not sure why Alex is offering a feature. A change to helper 
annotations was already implemented in Squid-5 to avoid this exact 
behaviour you are seeing.




Thanks again.




FYI. The Squid-5 code already has the feature implemented. It is only 
the Squid-4 code which behaves like above.


Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] How to use request headers in external_acl_type

2021-06-25 Thread squid3

On 2021-06-26 07:18, Yosi Greenfield wrote:

Hello all,

I'm trying to use request headers in an external acl, and I'm probably
doing it incorrectly, and it's not working.



Looks like it's working fine.



Here's my acl definiton:

external_acl_type ext_acl_program  %SRC %>{Connection} %>{Accept}
%>{Custom_header} %>{Host} /etc/squid/ext_acl_program.pl

The program ext_acl_program.pl simply prints out the input

   #!/usr/bin/perl
   $| = 1;   # Squid helpers must not buffer their replies
   open(LOGFILE, '>>', '/tmp/ext_acl_program.log') or die $!;  # adjust path

   while (my $line = <STDIN>) {
       chomp($line);
       my @fields = split(' ', $line);
       my $ip         = $fields[0];
       my $connection = $fields[1];
       my $accept     = $fields[2];
       my $custom     = $fields[3];
       my $host       = $fields[4];

       print LOGFILE "IP: $ip\n Conn: $connection\n Accept: $accept\n" .
                     " Custom: $custom\n Host: $host\n";
       print "OK\n";   # answer Squid so the ACL matches
   }

The output looks like this:

IP: 10.200.10.2
Conn: keep-alive
Accept: -
Custom: -
Host: www.wsws.com:443

As you see, it has values for %SRC, %>{Connection} and %>{Host}.  It
does not have values for %>{Accept} and %>{Custom_header}.

So the question is, are these %>{} substitutions coming from
request_headers (as I thought)?


The Host header only exists in request messages so I would say they are.
It may not be the request message you are thinking about though. Request 
headers can come from clients, but they could also be generated by Squid 
or ICAP/eCAP services.




If yes, why does it only have Connection and Host, and not Accept or
my custom header?



Because those are the headers the message being printed contains.
You do not provide enough details about where the request came from, eg 
how it was created and/or changed between creation and the helper being 
called.




If they are not coming from request headers, where are they coming
from?



You can use "debug_options 11,2" to see the HTTP messages Squid is 
processing.




And mostly, how can I pass my custom header into the program?


Exactly as you configured above. Assuming that the header is actually 
"Custom_header: ..." with that underscore included.



Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Newbie question, How to fully disable/disallow https?

2021-06-22 Thread squid3

On 2021-06-23 11:20, Arctic5824 wrote:
hey sorry i accidentally sent it directly again, instead of to the email 
list:



On Tuesday, June 22nd, 2021 at 3:50 PM, Antony Stone wrote:

You might want to be aware that this is illegal in many countries, and 
a number of Internet Service Providers have been sued and/or fined for 
manipulating the content of websites as they pass through their 
systems.


Thanks for the warning, I don't think this will really be a problem for
me though.


 1.  What makes you believe that sites have an HTTP version?

I don't see why they wouldn't, like sure they would prefer https but why
would http not work if forced



Because this idea you have about changing advert content is not a
new thing.

It has been done and tried so many times in the past by others for
http:// traffic that the major content providers whose income depended
on those ads got together and started a project to get rid of http://
completely. They have had much success with the support of privacy
and security advocate groups.



2.  What do you think should happen when sites do have an HTTP
version,  and that consists solely of a 301 Permanent Redirect to the
HTTPS version

I didn't think of this, this would be a problem i guess, but I don't
think it would be too common.


Reality is that today the vast majority of websites still offering
http:// versions at all do exactly that.


Maybe squid isn't the right software for this?


Squid is fine for the content adaptation part of what you are wanting.

What is not going to work is the HTTPS->HTTP conversion part. That is
because of protocol and Browser features. No intermediary software can
get around those without the SSL-Bump (or similar) mechanism - and as
others already mentioned, that too has its limits. TLS is specifically
designed to prevent intermediaries touching the content - the only
reliable action a proxy can do is terminate unwanted TLS connections.

Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Internet is Slow Thru squid proxy server

2021-06-08 Thread squid3

On 2021-06-08 23:34, Avinash . wrote:

Dear team, I am using a squid proxy server for 100+ users, but
Internet speed is very slow. I have tried many methods/options but am
still not able to resolve the issue.

Please find the attached config file & squidclient mgr: info file for
reference.



The mgr:info log says Squid started less than a minute ago and served 
149 requests total. That is not sufficient time nor traffic to tell how 
fast the proxy is.



Perhaps you should tell us what you have tried, and what results that 
produced (no matter how small a change).


Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid modification to only read client SNI without bumping.

2021-06-08 Thread squid3

On 2021-06-08 22:51, His Shadow wrote:

Greetings. I've been trying to make a patch for squid,


Code changes should be discussed on the squid-dev mailing list.

FWIW, we (Squid devs) have already discussed this functionality change 
and I have a TODO list entry (far down, sadly) for supporting your 
use-case. The way I think to approach it though is to start with the 
configuration parser. A simple peek-splice/terminate TLS traffic flow 
should not need certificates set up by the admin.


If you want to pick up that TODO item please contact squid-dev to plan 
out the actual best approach with the other devs working on Squid crypto 
code.


Patch submission should be done by submitting a github PR targeted at 
our repository 'master' branch.




so that it
could read client hello on connect requests and set the SNI without
using ssl_bump, as that requires generating certificates and is too
complicated for my needs.


Should not be too complicated. We have test scripts available that can 
generate a fake cert and CA for the *_port config settings. Or snakeoil 
certs can be used.


Apart from the port settings what your patch does is just this:


 acl blocklist dstdomain ...

 ssl_bump peek all
 ssl_bump splice blocklist
 ssl_bump terminate all



Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] about Kerberos Auth and LDAP Auth

2021-06-08 Thread squid3

On 2021-06-08 16:05, m k wrote:

hi all,

Thank you for always helping me with my difficulties.
With your help I am able to complete the proxy. Please help me again
this time.

I want to configure my squid authentication as follows.

Try single sign-on for squid with Kerberos authentication.

Squid will try authentication with LDAP.



Please be aware these are three very different *types* of thing.

 * "Single Sign-On" just means that the client re-sends the *same 
credentials* to all types of service. Any auth type can be "single 
sign-on" if the client supports it, and this has nothing to do with the 
service(s).


 * Kerberos is an authentication mechanism.

 * LDAP is a database management protocol (like SQL).



Unfortunately, when Kerberos authentication fails, it retries Kerberos
authentication.
I want squid to work so that if Kerberos authentication fails, it will
try LDAP authentication next.


"LDAP authentication" does not mean what you think.

What squid.conf settings do you have?


Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Limiting Connections & MySQL through SSH Tunnel

2021-06-08 Thread squid3

On 2021-06-08 00:04, Grails UK wrote:

Hello,
I hope you are well. I have two questions:

1. Is there any easy way to limit concurrent connections by a single
squid user or the local IP the client connected to.


What are you trying to achieve that make you think of doing that?




2. Our MySQL database is currently only accessible from our local
server on PythonAnywhere and any external access has to be done via an
SSH Tunnel, is there any way to SSH tunnel when using the
basic_db_auth or log_db_daemon?


Getting TCP/IP data to travel over SSH protocol tunnels is an OS routing 
detail.




Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] manual proxy configuration ...

2021-05-28 Thread squid3

On 2021-05-29 05:42, Albretch Mueller wrote:

On 5/27/21, Matus UHLAR - fantomas wrote:

On 5/25/21, Amos Jeffries wrote:

You enter the IP address or hostname of the squid machine into the
browser
"proxy settings" for manual configuration.


On 27.05.21 03:50, Albretch Mueller wrote:

Yeah, exactly! and how do you know the IP address or hostname?


it's your or your organization's proxy - you are supposed to know that, 
not us.


 I thought I was going to get droned for "not doing what I was
supposed to do" (tm)



Well. These are part of the basics needed to get a networked computer 
running.
You said you already had Squid running and in use. That makes us assume 
you know these basics already, but haven't mentioned them.




What line in the conf file specifies that?

each browser has its own way of proxy settings.
Many of them support "system proxy settings".


 I am talking here about the squid conf file not the browser's and I



Squid is normally just configured with an "http_port 3128" or similar to 
receive traffic.

There *might* be an IP address there as well, or a hostname. But usually 
not. It is the network DHCP or the machine's network interface which 
determines the IP address Squid gets to use.
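
If you do want Squid listening on one specific address, that too goes on 
the port line. A sketch (address hypothetical):

  http_port 192.0.2.10:3128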





am asking because I have reasons to believe something fishy is
going on with the computer I expose to the Internet.



Aha. Finally an indication of what the real situation is. It's REALLY 
hard to help people who leave out important details all the time.



Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] blocking mime types works for adobe, not for teams

2021-05-26 Thread squid3

On 2021-05-27 06:58, robert k Wild wrote:

found a really good website to check http headers and i found the mime
type

https://gf.dev/http-headers-test

On Wed, 26 May 2021 at 15:11, robert k Wild wrote:


hi all,

i have in my squid config this

#deny MIME types
acl mimetype rep_mime_type "/usr/local/squid/etc/mimedeny.txt"
http_reply_access deny mimetype

mimedeny.txt

application/octet-stream
application/x-msi
application/zip
application/x-7z-compressed
application/vnd.ms-cab-compressed

it works as it blocks adobe reader download, but the url has an exe
at the end so maybe this is why


No. Mime type is unrelated to any characters in the URL.




https://admdownload.adobe.com/bin/live/readerdc_uk_d_crd_install.exe



This response has "Content-Type: application/octet-stream" which is 
listed in your blocklist.




but it doesn't block ms teams from downloading



https://go.microsoft.com/fwlink/p/?LinkID=869426=0x809=en-gb=GB=deeplink=groupChatMarketingPageWeb=directDownloadWin64


it just doesn't intercept the download at all and gives me the option
to "save file". it's an exe

do you think this is because its a direct download link?


No. It is because the mime type is still not in your blocklist.

The tool at <https://gf.dev/http-headers-test> tells me the download is 
hidden behind a number of redirections; eventually the actual resource 
comes up with a "Content-Type: application/x-msdownload" header.



HTH
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Probable release date for squid V5

2021-05-14 Thread squid3

On 2021-05-14 21:45, nikhil deshpande wrote:

Hi Guys,

I am Nikhil from India. We use squid in our project. Currently, we can
see squid version 5 release is in beta as per this link
http://www.squid-cache.org/Versions/.
I wanted to ask what is the probable timeline for Squid version 5
stable release?



Normally I would be able to estimate a rough month. However the past 
year has seen the core developers all have workload commitments that 
delay things. So right now I am reluctant to even guess at a date.


What I can say is that there are 4 bugs that need fixing. One is nearly 
done now, but the rest still need investigating to find the actual 
problem.



Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users




Re: [squid-users] SSL BUMP

2021-05-12 Thread squid3

On 2021-05-10 22:26, Stephane Simon wrote:

Hello,

I try to configure https  with ssl bump.
I use redhat 8.

i follow https://blog.microlinux.fr/squid-https-centos-7/
when i restart squid, it doesn't cooperate and says:

"FATAL: The usr/lib64/squid/security_file_certgen -s
/var/lib/squid/ssl_db -M 64MB helpers are crashing too rapidly, need
help!"

i don't know how to fix this error... i don't know why i have this error ^^

Does someone have an idea please ?


That crashing helper is required by Squid to generate the certificates 
used for bumping.
Without it working perfectly Squid cannot handle any HTTPS traffic.




http_port 3130
http_port 3128 intercept
https_port 3129 intercept ssl-bump \
  cert=/etc/squid/ssl_cert/certificat.pem \
  generate-host-certificates=on \
  dynamic_cert_mem_cache_size=64MB

#SSL certificate generation
sslcrtd_program usr/lib64/squid/security_file_certgen -s


The path should begin with '/usr/', not just 'usr/'.


/var/lib/squid/ssl_db -M 64MB


Check that this /var path actually exists, and that the low-privilege 
account the proxy uses has both read and write access to it.


Run the helper command to initialize the database before starting Squid. 
Do so using the low-privilege account Squid uses to ensure the database 
files have correct ownership.
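
For example, something like this (paths from your config, run as the 
low-privilege squid user; -c creates the database):

  /usr/lib64/squid/security_file_certgen -c -s /var/lib/squid/ssl_db -M 64MB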





sslcrtd_children 32 startup=5 idle=1

# SSL-Bump
acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump bump all



Please be aware that this configuration is trying to forge server 
certificates without having any details about the real server 
certificate. When you are past the helper problem it is likely that this 
basic configuration will cause a number of TLS problems.


For bumping as much as possible this is a better config:

 acl step1 at_step SslBump1
 ssl_bump peek step1
 ssl_bump stare all
 ssl_bump bump all


Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] https_port not correctly sending ssl cert information?

2021-05-11 Thread squid3
Oh, I see. With that simple config the issue has to be lack of cert 
chain support in GnuTLS. Simply rebuilding using --with-openssl should 
resolve it.


Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] [squid-announce] [ADVISORY] SQUID-2021:5 Denial of Service in HTTP Response Processing

2021-05-10 Thread squid3

__

Squid Proxy Cache Security Update Advisory SQUID-2021:5
__

Advisory ID:       | SQUID-2021:5
Date:              | May 10, 2021
Summary:           | Denial of Service in HTTP Response Processing
Affected versions: | Squid 2.x -> 2.7.STABLE9
                   | Squid 3.x -> 3.5.28
                   | Squid 4.x -> 4.14
                   | Squid 5.x -> 5.0.5
Fixed in version:  | Squid 4.15, 5.0.6
__


__

Problem Description:

 Due to an input validation bug Squid is vulnerable to a Denial
 of Service against all clients using the proxy.

__

Severity:

 This problem allows a remote server to perform Denial of Service
 when delivering HTTP Response messages. The issue trigger is a
 header which can be expected to exist in HTTP traffic without
 any malicious intent by the server.

CVSS Score of 8.8


__

Updated Packages:

This bug is fixed by Squid versions 4.15 and 5.0.6.

 In addition, patches addressing this problem for the stable
 releases can be found in our patch archives:

Squid 4:
 



Squid 5:
 



 If you are using a prepackaged version of Squid then please refer
 to the package vendor for availability information on updated
 packages.

__

Determining if your version is vulnerable:

 Squid older than 4.15 have not been tested and should be
 assumed to be vulnerable.

 All Squid-5.x up to and including 5.0.5 are vulnerable.

__

Workaround:

 There are no known workarounds to this issue.

__

Contact details for the Squid project:

 For installation / upgrade support on binary packaged versions
 of Squid: Your first point of contact should be your binary
 package vendor.

 If you install and build Squid from the original Squid sources
 then the  mailing list is your
 primary support point. For subscription details see
 .

 For reporting of non-security bugs in the latest STABLE release
 the squid bugzilla database should be used
 .

 For reporting of security sensitive bugs send an email to the
  mailing list. It's a closed
 list (though anyone can post) and security related bug reports
 are treated in confidence until the impact has been established.

__

Credits:

 This vulnerability was discovered by Joshua Rogers of Opera
 Software.

 Fixed by Alex Rousskov of The Measurement Factory.

__

Revision history:

 2021-03-05 22:11:43 UTC Initial Report
__
END
___
squid-announce mailing list
squid-annou...@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-announce
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] [squid-announce] [ADVISORY] SQUID-2021:3 Denial of Service issue in Cache Manager

2021-05-10 Thread squid3

__

Squid Proxy Cache Security Update Advisory SQUID-2021:3
__

Advisory ID:   | SQUID-2021:3
Date:  | May 10, 2021
Summary:   | Denial of Service issue in Cache Manager
Affected versions: | Squid 1.x -> 3.5.28
   | Squid 4.x -> 4.14
   | Squid 5.x -> 5.0.4
Fixed in version:  | Squid 4.15 and 5.0.5
__

  
__

Problem Description:

 Due to an incorrect parser validation bug Squid is vulnerable to
 a Denial of Service attack against the Cache Manager API.

__

Severity:

 This problem allows a trusted client to trigger memory leaks
 which over time lead to a Denial of Service against Squid and
 the machine it is operating on.

 This attack is limited to clients with Cache Manager API access
 privilege.

CVSS Score of 7.8
https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector=AV:N/AC:H/PR:H/UI:N/S:C/C:N/I:N/A:H/E:F/RL:O/RC:C/CR:X/IR:X/AR:H/MAV:N/MAC:H/MPR:H/MUI:N/MS:C/MC:X/MI:X/MA:H&version=3.1

__

Updated Packages:

This bug is fixed by Squid versions 4.15 and 5.0.5.

 In addition, patches addressing this problem for the stable
 releases can be found in our patch archives:

Squid 4:
 



Squid 5:
 



 If you are using a prepackaged version of Squid then please refer
 to the package vendor for availability information on updated
 packages.

__

Determining if your version is vulnerable:

 Squid older than 3.5.28 have not been tested and should be
 assumed to be vulnerable.

 All Squid-4.x up to and including 4.14 are vulnerable.

 All Squid-5.x up to and including 5.0.4 are vulnerable.

__

Workaround:

Either,

 Disable Cache Manager access entirely if not needed.

 Place the following line in squid.conf before lines containing
 "allow" :

   http_access deny manager

Or,

 Harden Cache Manager access privileges.

 For example; require authentication or other access controls in
 http_access beyond the default IP address restriction.
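
 A sketch of such hardening, assuming an authentication scheme is
 already configured in squid.conf (the 'admins' ACL name is
 illustrative):

   acl admins proxy_auth REQUIRED
   http_access allow manager localhost admins
   http_access deny manager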

__

Contact details for the Squid project:

 For installation / upgrade support on binary packaged versions
 of Squid: Your first point of contact should be your binary
 package vendor.

 If you install and build Squid from the original Squid sources
 then the  mailing list is your
 primary support point. For subscription details see
 .

 For reporting of non-security bugs in the latest STABLE release
 the squid bugzilla database should be used
 .

 For reporting of security sensitive bugs send an email to the
  mailing list. It's a closed
 list (though anyone can post) and security related bug reports
 are treated in confidence until the impact has been established.

__

Credits:

 This vulnerability was discovered by Joshua Rogers of Opera
 Software.

 Fixed by Amos Jeffries of Treehouse Networks Ltd.

__

Revision history:

 2021-03-03 17:02:25 UTC Initial Report
 2021-03-16 01:59:45 UTC Patch Released
 2021-03-17 06:19:09 UTC CVE Assignment
__
END
___
squid-announce mailing list
squid-annou...@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-announce
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] [squid-announce] [ADVISORY] SQUID-2021:4 Multiple issues in HTTP Range header

2021-05-10 Thread squid3

__

Squid Proxy Cache Security Update Advisory SQUID-2021:4
__

Advisory ID:   | SQUID-2021:4
Date:  | May 10, 2021
Summary:   | Multiple issues in HTTP Range header
Affected versions: | Squid 2.5 -> 2.7.STABLE9
   | Squid 3.x -> 3.5.28
   | Squid 4.x -> 4.14
   | Squid 5.x -> 5.0.5
Fixed in version:  | Squid 4.15, 5.0.6
__

  
  
  
__

Problem Description:

 Due to an incorrect input validation bug Squid is vulnerable to
 a Denial of Service attack against all clients using the proxy.

 Due to an incorrect memory management bug Squid is vulnerable to
 a Denial of Service attack against all clients using the proxy.

 Due to an integer overflow bug Squid is vulnerable to a Denial
 of Service attack against all clients using the proxy.

__

Severity:

 These problems all allow a trusted client to perform Denial of
 Service when making HTTP Range requests.

 The integer overflow problem allows a remote server to perform
 Denial of Service when delivering responses to HTTP Range
 requests. The issue trigger is a header which can be expected to
 exist in HTTP traffic without any malicious intent.

CVSS Score of 8.0


__

Updated Packages:

These bugs are fixed by Squid versions 4.15 and 5.0.6.

 In addition, patches addressing this problem for the stable
 releases can be found in our patch archives:

Squid 4:
 



Squid 5:
 



 If you are using a prepackaged version of Squid then please refer
 to the package vendor for availability information on updated
 packages.

__

Determining if your version is vulnerable:

 Squid older than 3.5.28 have not been tested and should be
 assumed to be vulnerable.

 All Squid-4.x up to and including 4.14 are vulnerable.

 All Squid-5.x up to and including 5.0.5 are vulnerable.

__

Workaround:

 There are no workarounds known for these problems.

__

Contact details for the Squid project:

 For installation / upgrade support on binary packaged versions
 of Squid: Your first point of contact should be your binary
 package vendor.

 If you install and build Squid from the original Squid sources
 then the  mailing list is your
 primary support point. For subscription details see
 .

 For reporting of non-security bugs in the latest STABLE release
 the squid bugzilla database should be used
 .

 For reporting of security sensitive bugs send an email to the
  mailing list. It's a closed
 list (though anyone can post) and security related bug reports
 are treated in confidence until the impact has been established.

__

Credits:

 This vulnerability was discovered by Joshua Rogers of Opera
 Software.

 Fixed by Alex Rousskov of The Measurement Factory.

__

Revision history:

 2021-03-19 06:49:52 UTC Initial Report of Denial of Service
 2021-03-24 08:51:08 UTC Additional Report of Use-After-Free
 2021-03-25 21:57:07 UTC Additional Report of integer-overflow
__
END
___
squid-announce mailing list
squid-annou...@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-announce
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] [squid-announce] [ADVISORY] SQUID-2021:2 Denial of Service in HTTP Response Processing

2021-05-10 Thread squid3

__

Squid Proxy Cache Security Update Advisory SQUID-2021:2
__

Advisory ID:   | SQUID-2021:2
Date:  | May 10, 2021
Summary:   | Denial of Service in HTTP Response Processing
Affected versions: | Squid 4.x -> 4.14
   | Squid 5.x -> 5.0.5
Fixed in version:  | Squid 4.15, 5.0.6
__

  
__

Problem Description:

 Due to an input validation bug Squid is vulnerable to a Denial
 of Service against all clients using the proxy.

__

Severity:

 This problem allows a remote server to perform Denial of Service
 when delivering HTTP Response messages. The issue trigger is a
 header which can be expected to exist in HTTP traffic without any
 malicious intent by the server.

CVSS Score of 7.9


__

Updated Packages:

 This bug is fixed by Squid versions 4.15 and 5.0.6.

 In addition, patches addressing this problem for the stable
 releases can be found in our patch archives:

Squid 4:
 



 If you are using a prepackaged version of Squid then please refer
 to the package vendor for availability information on updated
 packages.

__

Determining if your version is vulnerable:

 All Squid older than 4.0 are not vulnerable.

 All Squid-4.x up to and including 4.14 are vulnerable.

 All Squid-5.x up to and including 5.0.5 are vulnerable.

__

Workaround:

 There are no known workarounds for this vulnerability.

__

Contact details for the Squid project:

 For installation / upgrade support on binary packaged versions
 of Squid: Your first point of contact should be your binary
 package vendor.

 If you install and build Squid from the original Squid sources
 then the  mailing list is your
 primary support point. For subscription details see
 .

 For reporting of non-security bugs in the latest STABLE release
 the squid bugzilla database should be used
 .

 For reporting of security sensitive bugs send an email to the
  mailing list. It's a closed
 list (though anyone can post) and security related bug reports
 are treated in confidence until the impact has been established.

__

Credits:

 This vulnerability was discovered by Joshua Rogers of Opera
 Software.

 Fixed by Alex Rousskov of The Measurement Factory.

__

Revision history:

 2021-03-08 19:45:14 UTC Initial Report
 2021-03-16 15:45:11 UTC Patch Released
 2021-03-18 01:33:50 UTC CVE Allocation
__
END
___
squid-announce mailing list
squid-annou...@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-announce
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] [squid-announce] [ADVISORY] SQUID-2021:1 Denial of Service in URN processing

2021-05-10 Thread squid3

__

Squid Proxy Cache Security Update Advisory SQUID-2021:1
__

Advisory ID:   | SQUID-2021:1
Date:  | May 10, 2021
Summary:   | Denial of Service in URN processing
Affected versions: | Squid 2.0 -> 4.14
   | Squid 5.x -> 5.0.5
Fixed in version:  | Squid 4.15 and 5.0.6
__

  
__

Problem Description:

 Due to a buffer management bug Squid is vulnerable to a
 Denial of service attack against the server it is operating on.

 This attack is limited to proxies which attempt to resolve a
 "urn:" resource identifier. Support for this resolving is enabled
 by default in all Squid.

__

Severity:

 This problem allows a malicious server in collaboration with a
 trusted client to consume arbitrarily large amounts of memory
 on the server running Squid.

 Lack of available memory resources impacts all services on the
 machine running Squid. Once initiated the DoS situation will
 persist until Squid is shutdown.

CVSS Score of 8.5


__

Updated Packages:

This bug is fixed by Squid versions 4.15 and 5.0.6.

 In addition, patches addressing this problem for the stable
 releases can be found in our patch archives:

Squid 4:
 



 If you are using a prepackaged version of Squid then please refer
 to the package vendor for availability information on updated
 packages.

__

Determining if your version is vulnerable:

 Squid older than 3.5.28 have not been tested and should be
 assumed to be vulnerable.

 All Squid-4.x up to and including 4.14 are vulnerable.

 All Squid-5.x up to and including 5.0.5 are vulnerable.

__

Workaround:

 Disable URN processing by the proxy. Add these lines to
 squid.conf:

   acl URN proto URN
   http_access deny URN

__

Contact details for the Squid project:

 For installation / upgrade support on binary packaged versions
 of Squid: Your first point of contact should be your binary
 package vendor.

 If you install and build Squid from the original Squid sources
 then the  mailing list is your
 primary support point. For subscription details see
 .

 For reporting of non-security bugs in the latest STABLE release
 the squid bugzilla database should be used
 .

 For reporting of security sensitive bugs send an email to the
  mailing list. It's a closed
 list (though anyone can post) and security related bug reports
 are treated in confidence until the impact has been established.

__

Credits:

 This vulnerability was discovered by Joshua Rogers of Opera
 Software.

 Fixed by Amos Jeffries of Treehouse Networks Ltd.

__

Revision history:

 2021-02-22 06:55:38 UTC Initial Report
 2021-02-24 00:53:21 UTC Patch Released
 2021-03-17 06:19:09 UTC CVE Assignment
__
END
___
squid-announce mailing list
squid-annou...@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-announce
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] [squid-announce] Squid 5.0.6 beta is available

2021-05-10 Thread squid3

The Squid HTTP Proxy team is very pleased to announce the availability
of the Squid-5.0.6 beta release!


This release is a security release resolving several issues found in
the prior Squid releases.


The major changes to be aware of since 5.0.4:

 * SQUID-2020:11 HTTP Request Smuggling
   (CVE-2020-25097)

This problem allows a trusted client to perform HTTP Request
Smuggling and access services otherwise forbidden by Squid
security controls.

See the advisory for patches:
 




 * SQUID-2021:1 Denial of Service in URN processing
   (CVE-2021-28651)

This problem allows a malicious server in collaboration with a
trusted client to consume arbitrarily large amounts of memory
on the server running Squid.

Lack of available memory resources impacts all services on the
machine running Squid. Once initiated the DoS situation will
persist until Squid is shutdown.

See the advisory for patches:
 




 * SQUID-2021:2 Denial of Service in HTTP Response Processing
   (CVE-2021-28662)

This problem allows a remote server to perform Denial of Service
when delivering HTTP Response messages. The issue trigger is a
header which can be expected to exist in HTTP traffic without any
malicious intent by the server.

See the advisory for patches:
 




 * SQUID-2021:3 Denial of Service issue in Cache Manager
   (CVE-2021-28652)

This problem allows a trusted client to trigger memory leaks
which over time lead to a Denial of Service against Squid and
the machine it is operating on.

This attack is limited to clients with Cache Manager API access
privilege.

See the advisory for patches:
 




 * SQUID-2021:4 Multiple issues in HTTP Range header
   (CVE-2021-31806, CVE-2021-31807, CVE-2021-31808)

These problems all allow a trusted client to perform Denial of
Service when making HTTP Range requests.

The CVE-2021-31808 problem allows a remote server to perform
Denial of Service when delivering responses to HTTP Range
requests. The issue trigger is a header which can be expected
to exist in HTTP traffic without any malicious intent.

See the advisory for patches:
 




 * SQUID-2021:5 Denial of Service in HTTP Response Processing
   (CVE pending allocation)

This problem allows a remote server to perform Denial of Service
when delivering HTTP Response messages. The issue trigger is a
header which can be expected to exist in HTTP traffic without
any malicious intent by the server.

See the advisory for patches:
 




 * TLS/1.3 support improvements

Prior to TLS v1.3 Squid could detect and fetch missing intermediate
server certificates by parsing TLS ServerHello. TLS v1.3 encrypts the
relevant part of the handshake, making such "prefetch" impossible.

This release contains a workaround that should be able to identify
the missing certificates on most (but maybe not all) TLS connections.

This release enhances existing error detailing code so that more
information is logged via the existing %err_code, %err_detail,
and %ssl::negotiated_version logformat codes.

Fix certificate validation error handling. This has an immediate
positive effect on the existing reporting of the client
certificate validation errors.


 * Regression in CONNECT URI syntax

Since peering support for the SSL-Bump feature was added, CONNECT
request URIs have not always contained a port. Squid-5.0.5
and later now correctly send a port number on all requests.


  All users of Squid are urged to upgrade as soon as possible.


See the ChangeLog for the full list of changes in this and earlier
releases.

Please refer to the release notes at
http://www.squid-cache.org/Versions/v5/RELEASENOTES.html
when you are ready to make the switch to Squid-5

This new release can be downloaded from our HTTP or FTP servers

  http://www.squid-cache.org/Versions/v5/
  ftp://ftp.squid-cache.org/pub/squid/
  ftp://ftp.squid-cache.org/pub/archive/5/

or the mirrors. For a list of mirror sites see

  http://www.squid-cache.org/Download/http-mirrors.html
  http://www.squid-cache.org/Download/mirrors.html

If you encounter any issues with this release please file a bug report.
  http://bugs.squid-cache.org/


Amos Jeffries
___
squid-announce mailing list
squid-annou...@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-announce
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] [squid-announce] Squid 4.15 is available

2021-05-10 Thread squid3

The Squid HTTP Proxy team is very pleased to announce the availability
of the Squid-4.15 release!


This release is a security release resolving several issues found in
the prior Squid releases.


The major changes to be aware of since 4.13:

 * SQUID-2020:11 HTTP Request Smuggling
   (CVE-2020-25097)

This problem allows a trusted client to perform HTTP Request
Smuggling and access services otherwise forbidden by Squid
security controls.

See the advisory for patches:
 




 * SQUID-2021:1 Denial of Service in URN processing
   (CVE-2021-28651)

This problem allows a malicious server in collaboration with a
trusted client to consume arbitrarily large amounts of memory
on the server running Squid.

Lack of available memory resources impacts all services on the
machine running Squid. Once initiated the DoS situation will
persist until Squid is shutdown.

See the advisory for patches:
 




 * SQUID-2021:2 Denial of Service in HTTP Response Processing
   (CVE-2021-28662)

This problem allows a remote server to perform Denial of Service
when delivering HTTP Response messages. The issue trigger is a
header which can be expected to exist in HTTP traffic without any
malicious intent by the server.

See the advisory for patches:
 




 * SQUID-2021:3 Denial of Service issue in Cache Manager
   (CVE-2021-28652)

This problem allows a trusted client to trigger memory leaks
which over time lead to a Denial of Service against Squid and
the machine it is operating on.

This attack is limited to clients with Cache Manager API access
privilege.

See the advisory for patches:
 




 * SQUID-2021:4 Multiple issues in HTTP Range header
   (CVE-2021-31806, CVE-2021-31807, CVE-2021-31808)

These problems all allow a trusted client to perform Denial of
Service when making HTTP Range requests.

The CVE-2021-31808 problem allows a remote server to perform
Denial of Service when delivering responses to HTTP Range
requests. The issue trigger is a header which can be expected
to exist in HTTP traffic without any malicious intent.

See the advisory for patches:
 




 * SQUID-2021:5 Denial of Service in HTTP Response Processing
   (CVE pending allocation)

This problem allows a remote server to perform Denial of Service
when delivering HTTP Response messages. The issue trigger is a
header which can be expected to exist in HTTP traffic without
any malicious intent by the server.

See the advisory for patches:
 




  All users of Squid are urged to upgrade as soon as possible.


See the ChangeLog for the full list of changes in this and earlier
releases.

Please refer to the release notes at
http://www.squid-cache.org/Versions/v4/RELEASENOTES.html
when you are ready to make the switch to Squid-4

This new release can be downloaded from our HTTP or FTP servers

  http://www.squid-cache.org/Versions/v4/
  ftp://ftp.squid-cache.org/pub/squid/
  ftp://ftp.squid-cache.org/pub/archive/4/

or the mirrors. For a list of mirror sites see

  http://www.squid-cache.org/Download/http-mirrors.html
  http://www.squid-cache.org/Download/mirrors.html

If you encounter any issues with this release please file a bug report.
  http://bugs.squid-cache.org/


Amos Jeffries
___
squid-announce mailing list
squid-annou...@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-announce
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] FW: Re[2]: squid with tproxy

2009-05-27 Thread squid3
On Wed, 27 May 2009 12:06:25 -0500, Ritter, Nicholas
nicholas.rit...@americantv.com wrote:
 
 From: Ritter, Nicholas 
 Sent: Wednesday, May 27, 2009 12:04 PM
 To: 'Manish P. Govindji'
 Subject: RE: Re[2]: squid with tproxy
 
 I remember something important...if you are using a more recent
 version of TPROXY than what is stated in the squid wiki article...I
 think the method by which TPROXY is configured in iptables changed a
 bit, to make it more to the liking of the netfilter and kernel
 developers in an effort to get the TPROXY code included into the
 netfilter and kernel release code.
 
 My setup and the wiki article I wrote are from before these changes,
 and I have not worked with TPROXY since, so that could be the issue
 here. I have not downloaded the latest TPROXY code to be sure though.
 And I think I might have actually seen TPROXY included in the most
 recent (ie: 2.6.29) kernel as experimental.

Yes, TPROXYv4 is now available in public releases of all the software
involved. The kernel code changed somewhat during the formal merge, and
squid code had to change a lot to accommodate the fixes. So Squid may
not work properly with the Balabit patches for older kernels.

The TPROXYv4 features page contains the minimum versions of kernel,
iptables, libcap, and Squid needed for this to work.
http://wiki.squid-cache.org/Features/Tproxy4

Amos

 
 I have been meaning to set up a new squid/tproxy system and update the
 wiki article...just have not gotten to it yet. I suggest taking a look
 at the readme with the latest tproxy source code, or even looking at
 your kernel config to see which tproxy version is being used. If you do
 a dmesg command and look for the TProxy module loading, I think it
 tells you what version it is.
 
 Nick
 
 
 From: Manish P. Govindji [mailto:man...@mcc.co.tz] 
 Sent: Wednesday, May 27, 2009 11:43 AM
 To: Ritter, Nicholas
 Cc: squid-users
 Subject: Re[2]: squid with tproxy
 
 
 Thanks a lot for the reply, I am already tired of pulling my hair out
 over this one.
 
 Sorry, typo its 3128.
 
 I do not have the file /etc/sysconfig/iptables; I use iptables rules
 in rc.local:
 
 #
 
 #Increase Squid file Descriptors
 ulimit -HSn 30720
 
 #Start caches
 /usr/sbin/squid
 
 #Enable Forwarding
 echo '1' > /proc/sys/net/ipv4/ip_forward
 
 #disable rp_filter
 echo 0 > /proc/sys/net/ipv4/conf/lo/rp_filter
 
 iptables -t mangle -N DIVERT
 iptables -t mangle -A DIVERT -j MARK --set-mark 1
 iptables -t mangle -A DIVERT -j ACCEPT
 
 iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
 iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY \
   --tproxy-mark 0x1/0x1 --on-port 3129
 
 ip rule add fwmark 1 lookup 100
 ip route add local 0.0.0.0/0 dev lo table 100
 
 # defences
 iptables -A FORWARD -p tcp --syn -m limit -j ACCEPT
 iptables -A FORWARD -p tcp --tcp-flags SYN,ACK,FIN,RST RST -m limit
 iptables -A FORWARD -p icmp --icmp-type echo-request -m limit
 
 #Allow established sessions to continue
 iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
 
 
 
 
 I am using squid as a gateway; all the PCs are on public IPs and Squid
 is also on a public IP as the gateway PC. (It was working as a
 transparent cache, but I wanted to use Tproxy.)
 
 Rgds,
 
 
 -Original Message-
 From: Ritter, Nicholas nicholas.rit...@americantv.com
 To: Manish govindji man...@mcc.co.tz
 Cc: squid-users squid-users@squid-cache.org
 Date: 27-05-2009 18:47
 Subject: RE: squid with tproxy
 Port 3128, or 1328? The default port is 3128, but is configurable.
  
  
 Your rules are not right...you are marking, as you should, but not
 redirecting to the squid port. In addition to sending the output of
 the raw iptables command, send the contents of /etc/sysconfig/iptables.
  
 I think the problem is partly in the rules setup. Are you using wccp
 as well, and/or a gre interface?
  
 Also, make sure you have Full NAT enabled in the kernel. Looks like
 that is ok though.
  
  
 
 From: Manish govindji [mailto:man...@mcc.co.tz]
 Sent: Wednesday, May 27, 2009 6:06 AM 
 To: nicholas.rit...@americantv.com 
 Subject: squid with tproxy
  
 Hello Nicholas,
  
 I have been trying to compile squid with tproxy but am failing; I have
 searched all over Google but found nothing of help.
  
 I have centos 5.3, installed custom kernel 2.6.28, and iptables 1.4.3,
 squid 3.1
  
 In compiling the custom kernel, I copied the old config and added the
 below:
  
 NF_CONNTRACK
 NETFILTER_TPROXY
 NETFILTER_XT_MATCH_SOCKET
 NETFILTER_XT_TARGET_TPROXY
  
 When I do iptables stat:
  
 [r...@gateway ~]# iptables -t mangle -L -v -n
 Chain PREROUTING (policy ACCEPT 5768K packets, 1538M bytes)
  pkts bytes target prot opt in out
 source   destination
  
 Chain INPUT (policy ACCEPT 1494K packets, 892M bytes)
  pkts bytes target  

Re: [squid-users] Drop semi-dead peer upon zero sized replies

2007-08-02 Thread squid3
 (Replying to list because I think that's what you intended to do.)

 On Wed 01.Aug.07 09:53, Benno Blumenthal wrote:
 Angel Olivera wrote:

  But I don't know about the second part: detecting when it's down. It
  is sort of down, since it will reply to pings et al, but no HTTP
  packets will come back from it until it's back into normal operation.

 Any tips will be appreciated. Thanks in advance.
  There is a monitorurl option in cache_peer, along with minimum length
  of response -- if your flaky peer returns zero length for all
  requests, you could pick one as a monitorurl and it might pick up on
  the dead cache...

 Perfect! That worked like a charm, thanks a lot, Benno.

 I don't know how I could miss it from the sample squid.conf. I guess it
 was because of taking The Definitive Guide as a reference but not
 checking the new options.

 Thanks again.


We now provide the Authoritative Configuration Manual for each version
of squid. These manuals are built daily and directly from the squid source
code to provide the most up to date information on squid options. Recent
releases of squid now come packaged with a copy of their Manual built
during the release process.

For Squid-2.6 the Manual is at
http://www.squid-cache.org/Versions/v2/2.6/cfgman/

For Squid-3.0 the Manual is at
http://www.squid-cache.org/Versions/v3/3.0/cfgman/


(PS. I've just updated the wiki ConfigExamples to say this now too)
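
For reference, the options mentioned above attach to cache_peer like
this (a sketch only; the peer name and probe URL are hypothetical):

  cache_peer peer.example.com parent 3128 3130 monitorurl=http://peer.example.com/ping.html monitorsize=1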

Amos




Re: [squid-users] DoS Vulnerabilities involving Squid /or ICP?

2007-08-02 Thread squid3
 Hello.  I was trying to check whether there is some security hole or
 issue with our squid /or ICP that I should know about.  I looked around
 the www.squid-cache.org  the web, but didn't find anything relevant to
 the case below.  I'd appreciate any pointers.


The major security problems we are aware of are listed at
http://www.squid-cache.org/Advisories/

Any of the 8 from SQUID-2004:2 and later may apply to your 2.5s5 squid.
It is also an unsupported version. I would highly recommend upgrading to
the current 2.6 stable release.


 BACKGROUND:


 Someone from web site X claimed that someone from our site was launching
 a DoS against them.  The IP he gave was our proxy.  It turns out someone
 from our site *was* repeatedly trying to download a certain audio URL
 (perhaps non maliciously).


Most likely you have a number of wireless clients wanting to fetch the
file and the source isn't providing proper caching headers for it. That
would make your squid (or anyone's really) download it multiple times.

snip

Amos




Re: [squid-users] Routing different addresses through different gateways

2007-08-02 Thread squid3

 Ok. So here at the office we have a T1 line and a backup DSL line.

 Basically we have NO CONTROL over the policies passed to us over the T1
 line, which means we can't have proxies set at login automatically.

 What I would like to do is connect two outside interfaces, one for the DSL
 and T1 and two inside interfaces, one for the internal network, and one
 for
 the Cisco PIX 506E were using for VPN traffic.

 However, everyone is configured to go through 'Proxy A' on the T1, so
 I was wondering if some transparent action could be taken so squid
 sends only the requests that need to go to the T1 (Citrix and some
 apps) and sends the rest through the DSL, but does this transparently?

 Also, with the Cisco PIX 506E we can't set up a VPN because the
 machines can't 'route' back to the PIX, because it's a different
 gateway on a different internet connection. Basically the route would
 have to send the VPN pool subnet requests back to the PIX and not the
 T1 router.

 Some of this may sound confusing and I apologize I find it hard to explain
 problems when the AC breaks in the office.

 Thank you!


If you have a machine on or outside the DSL that can act as a peer for
squid, it's simple.

The areas to look at are cache_peer, with various ACLs to control
things based on app User-Agent headers.
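
A rough sketch of that approach (the peer address and agent pattern are
hypothetical; adjust them to the apps that must use the T1):

  # force Citrix-like agents through Proxy A on the T1 line
  acl t1_apps browser -i citrix
  cache_peer proxy-a.example.net parent 8080 0 no-query
  cache_peer_access proxy-a.example.net allow t1_apps
  cache_peer_access proxy-a.example.net deny all
  never_direct allow t1_apps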

Amos




Re: [squid-users] Deleting UFS cache files directly

2007-07-22 Thread squid3
 With UFS, is it possible to manually delete cache files without causing
 problems? If so, is there any action that should be taken to have
 squid pick-up the changes?

 What I was thinking of doing is periodically running something like:

   find /usr/local/squid/cache/?? -type f -atime +2d -size +2M -delete

 to remove large files.

 Alternately is there a utility that can do this kind of thing?


cachemgr.cgi and squidclient are provided along with the squid
installation to assist with things like this.
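
For example, a sketch of removing a single object via squidclient (the
URL is illustrative; the PURGE method must first be allowed in
squid.conf):

  # in squid.conf:
  acl PURGE method PURGE
  http_access allow PURGE localhost
  http_access deny PURGE

  # then, on the cache host:
  squidclient -m PURGE http://example.com/big/file.iso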

Amos



Re: [squid-users] Windows Update not working

2007-07-18 Thread squid3
 Also, I've tried the recommendation found below which I thought may
 solve the problem as I am using NTLM auth for my squid setup but it did
 not work.

 http://www.mail-archive.com/squid-users@squid-cache.org/msg32828.html

 Elvar wrote:
 Hello list,

 I have two identical FreeBSD firewalls running squid-2.6.5 at two
 different school systems and roughly about two months ago  the windows
 update site stopped working at both sites. Any time a user tries to
 run windows update it eventually times out. Everyone's web browser is
 set up to point directly to the firewall running squid on port 8080
 which is dansguardian-2.9.8.0. Has anyone else had this happen? Is
 anyone else having problems getting windows update to work through
 Squid / Dansguardian? If so and you have found a resolution I would
 greatly appreciate it if you could share the fix details.


I have seen this happen when experimenting with transparency. Though the
cause can also occur with other proxy setups.

It seems WindowsUpdate starts nicely on HTTP and loads the M$ pages;
then, to do the actual system scan, it needs a *direct* HTTPS connection
to call home with. The solution for me was to allow SSL outbound through
the firewall to the IP of www.update.microsoft.com.

The successful https link lasts for an entire 1-2 seconds then disappears
from the process. But if it fails WU goes to its 'error timed out/unable
to connect/check your http settings' screen.
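
A sketch of such a firewall exception, assuming an iptables-based
firewall (the hostname is resolved to its addresses when the rule is
added):

  iptables -A FORWARD -p tcp -d www.update.microsoft.com --dport 443 -j ACCEPT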

Amos




RE: [squid-users] Allowing accidentally blocked sites

2007-07-15 Thread squid3
text re-sequenced

 Fri 2007-07-13 at 23:49 -0400, [EMAIL PROTECTED] wrote:
 what i have in the squid.conf is something like this:

 acl porn url_regex -i /etc/squid/porn
 acl allowed_site url_regex -i /etc/squid/allowed

 http_access deny porn

 replace the above with

 http_access deny !allowed_site porn

 Regards
 Henrik


 Fri 2007-07-13 at 23:49 -0400, [EMAIL PROTECTED] wrote:
 If i do that, all clients will be allowed to Access these allowed sites
 without password or IPAddress verification. Considering that squid tries
 to
 find the first matching rule and doesn´t read any further. It will only
 reach that rule and ignore the rest of the access rules.

You are not entirely correct there.

Firstly, Squid will only allow sites if the allow permission is granted.
The ACL you were given was a DENY, it will at worst prevent access to a
site it shouldn't (the case you have right now).

Secondly, the ACL had two parts; BOTH must match before the rule is
considered. Thus it will fail to deny on any allowed_sites even if they
are porn.

Also, please don't top-post.

Amos


 Administrator of the C.Habana Node
 telephone: 863-1648
 web: www.ciudad.jovenclub.cu
 e-mail: [EMAIL PROTECTED]

 -Original Message-
 From: Henrik Nordstrom [mailto:[EMAIL PROTECTED]
 Sent: Saturday, 14 July 2007 3:46
 To: [EMAIL PROTECTED]
 CC: squid-users@squid-cache.org
 Subject: Re: [squid-users] Allowing accidentally blocked sites

 Fri 2007-07-13 at 23:49 -0400, [EMAIL PROTECTED] wrote:
 I am filtering some porn sites using url_regex. I have some gneral
 patterns in the file that block most of the bad sites but it also
 blocks good sites which URLs contain those key words that are
 registered in the url_regex file. I want to allow those sites
 including safe internet navigation






Re: [squid-users] IPv6 Support

2007-07-15 Thread squid3
 Which version of Squid support IPv6? And How can I Enable it?


Squid version 3.0 is the one officially supported with an IPv6 branch
see http://devel.squid-cache.org/squid3-ipv6/ for details.

Download from CVS. Instructions in http://devel.squid-cache.org/CVS.html
using BRANCHNAME of 'squid3-ipv6'.
run the following:
  ./bootstrap.sh
  ./configure --enable-ipv6 (plus any other options you want)
  make check
  make install

Any problems, even the smallest, let me know.


There is also a third-party patch to 2.6.STABLE13 which appears to be
maintained.

Amos Jeffries
Squid Development Team (IPv6)




Re: [squid-users] ACL and http_access Confusion

2007-07-05 Thread squid3
From: Emilio Casbas [EMAIL PROTECTED]

Vadim Pushkin wrote:
Hello;

I have an ACL which contains IP addresses that I want to allow outbound
requests to.

acl allowed_IPs dstdomain /net/squid/allowed-IP-Dests

I have another ACL which is intended to capture all destinations which
 use
an IP address versus FQDN, which one of these two is correct for this
purpose?

acl numeric_IPs url_regex ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+
or
acl numeric_IPs urlpath_regex ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+

Finally, I want to deny all outbound requests to ACL numeric IP's (IP
addresses only), *unless* the requested IP address is contained in my
 ACL
allowed_IPs.

Would the below work for this?

http_access deny CONNECT numeric_IPs !allowed_IPs


If you are going to use it with CONNECT you have to use dstdom_regex.
CONNECT requests only have a hostname and port.

Emilio C.

 So, replace

 acl numeric_IPs urlpath_regex ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+

 with

 acl numeric_IPs dstdom_regex ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+

 and

 will this work?

 http_access deny CONNECT numeric_IPs !allowed_IPs


Um, I'm starting to get a little confused here myself after that reply.

When you are wanting to test the actual destination IP you can use the
'dst' type ACL (squid will do any DNS lookup needed to find it before
testing).

When you are wanting to test for people sending CONNECT 1.2.3.4 HTTP/1.1
etc., then dstdomain (for pre-known IPA) or dstdom_regex (to catch all
IPA) is needed.
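
Putting that together, a sketch of both tests (assuming the default
'acl CONNECT method CONNECT' definition; the regex is anchored to match
whole raw-IP hosts):

  # matches a raw IP given as the host in the request
  acl numeric_IPs dstdom_regex ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$
  # matches the resolved destination IP against the allowed list
  acl allowed_IPs dst "/net/squid/allowed-IP-Dests"
  http_access deny CONNECT numeric_IPs !allowed_IPs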


Amos



Re: [squid-users] squid and content filtering

2007-07-05 Thread squid3
 Hello,
 I'm running squid 2.6 with icap support on FreeBSD 6.2. I've done some
 searching on this, but was wondering what the current trend was with squid
 and content filtering? The last time i tried it it was with squid 2.5 and
 adding in the content filter slowed browsing and the internet down
 extremely
 noticeably. I'm using bannerfilter as a banner redirector and am happy
 with
 it, and am looking in to items for av, but i wanted to look at content
 filtering as well. Particularly i'd like to block flash and javascript,
 but
 only on certain clients.
 Thanks.
 Dave.


ICAP has been through a major improvement in Squid recently; 2.6 has
'okay' support. The very latest Squid 3.0.PRE6 or daily snapshot is
much, much better again with ICAP. Alex is also currently working
through the last known ICAP bugs, so report any problems you encounter.

I think these days any slowing would be due to slow ICAP servers rather
than Squid.

Amos




Re: [squid-users] Different scenarios in a reverse proxy environment

2007-07-05 Thread squid3
 2007/7/5, Emilio Casbas [EMAIL PROTECTED]:

 I think this example would be redundant, We could achive the same
 objective with:

 cache_peer 12.34.56.78   parent80  0  no-query originserver
 name=CCTV

 acl service_cctv dstdomain .cctv.com
 cache_peer_access CCTV allow service_cctv


 No. This cctv.com site has lots of virtual hosts, and those virtual
 hosts are located on different origin servers. So for us the statement
 of 'dstdomain .cctv.com' is not useful at all.

You should still be able to condense the *_access lines down to a single
instance. Like so:

acl service_cctv dstdomain d1.cctv.com
acl service_cctv dstdomain d2.cctv.com
acl service_cctv dstdomain d3.cctv.com
cache_peer_access CCTV allow service_cctv

With a different acl name for each distinct-origin set of virtual hosts.


Amos



Re: [squid-users] Squid ACL

2007-07-05 Thread squid3
 Hello,

 i need to solve following problem.
 I have an ldap-server, which i use to authenticate the user.
 If the user is in the group, he has access to the group A. If the
 authentications fails, he has access to the group B.

 Can anyone tell me, how i can solve this problem.

 I have already have an authentication, but the problem is, that if the
 user tries to authenticate, but he has no rights, the
 authentication-window
 comes again and again. But the user has to be in the group
 to_domains_without_auth and the other domains should be blocked.

 So, the relevant code looks like:

 auth_param basic program /etc/squid/ldapauth.pl
 acl for_inetusers proxy_auth REQUIRED

 acl to_domains_without_auth dstdomain
 /var/ipcop/proxy/advanced/acls/dst_noauth
  .acl


 Can anyone help me?


Check the order of http_access * lines in your squid.conf.
They are processed in order, and for_inetusers needs to be preceeded by
any ACL that allow people through without Auth.

For example:

http_access allow anybody_without_auth
http_access allow for_inetusers
http_access deny all

Amos



Re: [squid-users] Is it possible to allow access to a single site and allow all embedded images to show as well?

2007-07-05 Thread squid3
 Hi,

 Is it possible to allow access to a single site, eg. a forum site, and
 allow all the images to display as well even if they are not hosted on
 the forum site itself?

Squid by itself doesn't scan the data portion of HTTP objects. It has no
way of knowing where the images are going to come from. You will need an
external checker (ie ICAP) to do this on its behalf, AND ALSO some way
for it to get the images past squid on the fly.


 For example, if a forum user makes a posts and embeds and image from
 Flickr or any other image hosting site, I would still want that image
 to display. However, if the user tried to leave the forum site, the
 user would be blocked.


Do you have control of the forum web server?
I would suggest for now that it's an easier solution to configure squid
to allow the webserver access to the general net, and make an internal
redirector page that pulls those images into the webserver and sends
them out to the users as if they were on the webserver itself.
Just be very careful that the redirector will only work for internal
clients, and for items placed in the forum.

Amos



Re: [squid-users] Youtube cache ?

2007-07-03 Thread squid3
 Excellent hack! Thank you for that. I have added it to the wiki config
 FAQ as the reference for all these YouTube questions.

 http://wiki.squid-cache.org/ConfigExamples/DynamicContent

 Any other sites this would work well on?

 Matt


Theoretically it should work on any site that uses ? (php, perl, asp,
ash, shtml - based.)
With the default config and '?' blocking turned off, IIRC the only
things that prevent caching are the HTTP headers themselves (explicit
no-cache) and the cache file-size policy.
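
For context, these are the default-config lines usually meant by '?'
blocking; the hack above depends on removing or relaxing them:

  acl QUERY urlpath_regex cgi-bin \?
  cache deny QUERY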

Amos



Re: [squid-users] Cache Manager FAQ wiki - updated info

2007-07-03 Thread squid3
 Hi,

 I was trying to set up my squid manager client by following the
 instructions
 at http://wiki.squid-cache.org/SquidFaq/CacheManager

 Unfortunately these instructions do not appear to correspond to my set-up.

 I have squid 2.6.STABLE9, Apache/2.0.55 running on Ubuntu 6.10 (kernel
 2.6.17-10-generic) 64-bit.


Ah, thank you for locating this oversight. I have corrected it by
splitting the Apache section for 1.x and 2.x specific configs.

Amos



Re: [squid-users] unregister

2007-07-03 Thread squid3
 unregister


Please read instructions that accompany each posting (In the mail header).

Amos



Re: [squid-users] FreeBSD Squid timeout issue

2007-06-28 Thread squid3
 Hello,
  Thanks for your suggestions. I checked my squid.conf and the acls for
  chat and spyware were of type dstdomain; porn was url_regexp. I
  changed that to dstdomain and now when I do a squid -k reconfigure I
  am getting syntax errors. As for the file sizes: chat has 2 lines,
  spyware has 1440 lines, and of course the big one, the porn rejection
  file, has 15025 lines.

Oh, aye, that is way too huge for regexp to handle.

  The error I'm repeatedly getting now (and I didn't get it when the
  file was url_regexp) is that I have subdomains of parent domains and
  they are ignored.

Hmm, are you sure this is an error? Not a warning? It sounds to me like
a little maintenance needs doing on that file.
 - Duplicates can be removed.
 - 'example.com' can be removed if you have '.example.com' elsewhere.
 - 'www.example.com' can be removed if you have '.example.com' elsewhere.
Sounds like the last two of these are what you are being warned about.

If you're still having trouble you can email me the file and I'll check
it myself.

  Does anyone use spyware, porn, and chat rejections, and if so, where
  did you obtain them?
  Also, I'm wondering why my cache isn't clearing out the oldest items;
  is my cache replacement policy bad?

Quite possibly; my squid expertise doesn't extend into the replacement
policies, yet. You will have to look to one of the others for help.


 Thanks.
 Dave.

 - Original Message -
 From: [EMAIL PROTECTED]
 To: squid-users@squid-cache.org
 Sent: Tuesday, June 26, 2007 9:27 PM
 Subject: Re: [squid-users] FreeBSD Squid timeout issue


 Hello,
 Thanks for all replies.
  I've got a good hard disk, I've been checking that and haven't found
  any problems or seen any error msgs in my logs.
  I've adjusted my high cache size from 100% to 95%, but what I'm
  starting to look at is: is squid purging oldest items from my cache?
  It seems like when the cache gets full or nearly so, I start having
  this issue.
  As for my pornography and spyware rejection files, they are each a
  considerable size; they are lists of sites I don't want visited,
  downloaded, or to have anything to do with. If there's a way to speed
  this up I'm all for it.
 Thanks.
 Dave.


  Make sure that you are using dst or dstdomain as the ACL types on the
  large lists instead of regex.
  The regex is quite slow and large lists often become a drag. After
  splitting the lists into 'need regex' and dstdomain, the speed
  increase is still often worth the extra time spent maintaining two
  lists.

 Make sure there is extra space on the cache disk. All the tutorials
 mention making the cache 60%-80% of drive size. I can't recall what the
 exact reasons were but it had something to do with OS-level handling on
 the drive.

 Amos







Re: [squid-users] Cannot connect to Squid's default port

2007-06-28 Thread squid3
 Running iptables-save | grep 3128 return nothings (No output, no error,
 just another prompt).
 I'm not aware of any other firewall. Baffled

 And if you execute iptables-save | grep 3128 from the command line (as
 root)? It's possible there is iptables rules Webmin doesn't know about..
 Or could there be another firewall between the client and the Squid
 server?

  Thank you for your reply. I'm using webmin, which reports that No
 IPtables firewall has been
  setup yet on your system.. Frustrated,


Are you doing transparent stuff anywhere?
What does your squid.conf have for that port? And for the others that
work?
By 'error log' earlier did you mean to say 'cache.log'?
I have seen this effect when squid encountered an error it can't recover
from and never sends the browser an error.

Amos



Re: [squid-users] How can I avoid of that httpReadReply: Excess data from

2007-06-27 Thread squid3

 Hi,

 I am receiving more Excess data errors in my cache.log file

 2007/06/27 06:46:44| httpReadReply: Excess data from GET
 http://www.dailyink.com/fv.php/Comics/Rhymes%20with%20Orange/log_iaovt8.gif;
 2007/06/27 06:46:44| httpReadReply: Excess data from GET
 http://www.dailyink.com/fv.php/Comics/Sally%20Forth/sal_hwhlty.gif;
 2007/06/27 06:46:48| httpReadReply: Excess data from GET
 http://www.dailyink.com/fv.php/Comics/Zits/zit_hwhm7a.gif;

 I am using Squid Cache: Version 2.5.STABLE13

 Kindly let me know how I can avoid this...

 Regards,
 Sathyan Arjunan
 Unix Support | +1 408-962-2500 Extn : 22824
 Kindly copy [EMAIL PROTECTED] or reach us @ 22818 for any
 correspondence alike to ensure your email are being replied in timely
 manner


This is not an error, but squid being overly verbose. The remote server
is broken, but squid can cope with it.

First, try an upgrade to Squid 2.6. It's much nicer, and is supported.

Alternatively, look at the fix in
http://www.squid-cache.org/bugs/show_bug.cgi?id=1265

Amos




Re: [squid-users] --enable-fd-config

2007-06-27 Thread squid3
 Wed 2007-06-27 at 10:37 -0500, Fiero, Paul wrote:
 We are having problems with our web browsing coming to a crawl.  This
 appears to be a file descriptors issue.  I am gathering that squid uses
 --enable-fd-config to deal with file descriptors now?  How do I
 configure my squid server to use more than 1024 descriptors?

 I used to set 'ulimit -HSn 32768' and then included that in my squid
 init file and all was happy.  But on this installation it appears that
 the old method doesn't work and I see reference to --enable-fd-config
 when I do squid -v.

 No idea what that --enable-fd-config thing is. It's not a standard Squid
 configure option. But it I am to guess it's probably a vendor patch to
 allow runtime configuration of the number of filedescriptors via
 squid.conf or the init script..

 Regards
 Henrik


You neglect to mention exactly which version of squid you are running...

But, just as a time waster, I thought I'd see where that option came from.
It appears to be a RedHat/FedoraCore addition to their release of
2.5STABLE3, and carried into the 2.6 by FC as recently as:

Fedora Core 6 Update: squid-2.6.STABLE6-1.fc6
Date:   Tue, 12 Dec 2006 11:16:32 -0500

If you are in fact using the FC/RH package you will need to contact them
first for support.

The bugs on this are:
  https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=72896
  http://www.squid-cache.org/bugs/show_bug.cgi?id=435

The patch they seem to be using is at:
  https://bugzilla.redhat.com/bugzilla/attachment.cgi?id=121905

Henrik, their discussions about killing select match some that I recall
reading about in dev once. I'm not sure myself if they were ever applied
to 2.6/3.0, but it may be time to check #435 and see if it's still valid
given the patch RH have come up with.

Amos




Re: [squid-users] provide html pages embedded in a standard page using squid. Is this possible?

2007-06-26 Thread squid3
 Tue 2007-06-26 at 23:59 +0200, js wrote:
 Hi list,

 Can I configure Squid this way that it always provides a client the html
 page below?

 Not easily. To do this you need to rewrite the content of any downloaded
 HTML object to add javascript code checking that it's loaded from within
 the frame, and reload the frameset otherwise..

 My recommendation in how to do this would be to write an ICAP server
 doing the needed HTML rewrites. Squid-3 has ICAP support, and there is
 unofficial patches for Squid-2.


In which case, you would be much better off taking a good look at the
OBJECT tag in HTML/XML and coding the re-writing to simply place an
OBJECT at the top of any served pages with a link to your banner content
inside.

Not that performing a man-in-the-middle attack on people's HTML traffic
is ever considered a good thing. If it's your own pages there are better
options based at the webserver.

Amos




Re: [squid-users] FreeBSD Squid timeout issue

2007-06-26 Thread squid3
 Hello,
 Thanks for all replies.
 I've got a good hard disk, I've been checking that and haven't found
 any problems or seen any error msgs in my logs.
 I've adjusted my high cache size from 100% to 95%, but what I'm
 starting to look at is: is squid purging oldest items from my cache?
 It seems like when the cache gets full or nearly so, I start having
 this issue.
 As for my pornography and spyware rejection files, they are each a
 considerable size; they are lists of sites I don't want visited,
 downloaded, or to have anything to do with. If there's a way to speed
 this up I'm all for it.
 Thanks.
 Dave.


Make sure that you are using dst or dstdomain as the ACL types on the
large lists instead of regex.
The regex is quite slow and large lists often become a drag. After
splitting the lists into 'need regex' and dstdomain, the speed increase
is still often worth the extra time spent maintaining two lists.

Make sure there is extra space on the cache disk. All the tutorials
mention making the cache 60%-80% of drive size. I can't recall what the
exact reasons were but it had something to do with OS-level handling on
the drive.

Amos




Re: [squid-users] Google Safe Browsing API - Integration with squid?

2007-06-24 Thread squid3
 Henrik Nordstrom wrote:
 Sun 2007-06-24 at 23:49 +0200, Andreas Pettersson wrote:
 I'm not sure I follow you here..
  If a phisher has control of evil.com he could send out unique urls in
  each and every spam, all pointing to the same physical host.
 Sure, MD5 hashes is efficient, but the number of possible urls is
 nearly
 unlimited. It would be much easier to list the host instead.

 And the Google SafeBrowsing lookup algorithm allows just that.. It's not
 just an MD5 of the complete URL. The URL is processed in many steps of
 varying granularity, each producing an MD5 to look up in the blacklist.

 http://code.google.com/apis/safebrowsing/developers_guide.html#PerformingLookups

  Note: In the worst case there are 5 * 6 = 30 different lookups per
  URL. Normally fewer than 10, however.

 [walks away and stands in the corner]
 Believe it or not, I actually read that guide before making my initial
 post, but apparently it completely vanished from my memory...
 Perhaps It happened when Phishtank was brought up.


If you're going to take that route, the most efficient way to do it is
to allow an admin-configured RHSBL or RBL with an ACL on the dst or
dstdomain (look up the SURBL query algorithm), rather than any single
custom API. rbldnsd can be set up and used easily by anyone in
conjunction with a squid.

I currently use external helpers to check against ~30 RBL and ~3 RHSBL.
Making a built-in ACL is on my wishlist, but way down since external
helpers do it okay for now.
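
As a sketch, wiring one such helper in (the helper path is hypothetical;
it reads one destination per line and answers OK or ERR):

  external_acl_type rbl_check ttl=300 %DST /usr/local/bin/rbl-check.sh
  acl rbl_listed external rbl_check
  http_access deny rbl_listed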

Amos




Re: [squid-users] Squid and Windows Update

2007-06-21 Thread squid3
 Henrik Nordstrom wrote:
 Thu 2007-06-21 at 14:22 +0100, Julian Pilfold-Bagwell wrote:

 If I am to guess you might need to allow access to the windows update
 servers without using authentication.

 Is it possible to do that while retaining authentication for users?

 Yes.

 Just allow access to the windows update servers before where you
 normally require authentication.

 Regards
 Henrik

 That's what we do and it works very well. We do the same for common
 antivirus update sites too. :-)

  Just a thought on WindowsUpdate via squid though: it's very, very
  slow through squid. It seems to take many minutes to check for
  updates, but when bypassing the proxy this is not the case. I wonder
  if this is normal for squid?


It is a side effect of WindowsUpdate that has been seen before on occasion
under some squid configs.

WindowsUpdate apparently pulls its data from the main servers using
partial Ranges. Squid does not to my knowledge fully support storage of
partial ranges (we have plans to improve this but no sponsor yet I think).
Also some configurations are set to always pull the entire file when a
range is requested.
The cachability settings of the WU servers may also be a factor.

If your config has been set to always pull the entire file and cache it,
you could try allowing squid to pull ranges and not cache them.
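
The directive involved is range_offset_limit; a sketch of the two
behaviours described above:

  # forward Range requests as-is; the partial reply is not cached
  range_offset_limit 0 KB

  # versus: always fetch the whole object when a range is requested
  #range_offset_limit -1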


Amos




Re: [squid-users] Delay pools (throttling) for specified websites?

2007-06-20 Thread squid3
 No takers? Thanks in advance. :-)
 D.Radel.


 D  E Radel wrote:
 Hi all,

 I am currently using a delay pool to throttle users a certain ldap_group
 to a slow speed.

 I wish to throttle access to certain bandwidth heavy websites. I added
 the ACL slow_sites to my existing delay pool as follows:

 acl slow_sites url_regex -i /squid_stuff/slow_sites.txt

 # Delay pools
 #-
 delay_pools 1
 delay_class 1 3
 delay_access 1 allow slow_group slow_sites
 delay_access 1 deny all
 delay_parameters 1 25/25 -1/-1 8000/25

 It didn't slow down access to the sites listed in the textfile. Whereas,
 the delay pool was already working correctly on the users in slow_group
 before my adjustments.

 Any help greatly appreciated. Many thanks in advance.
 D.Radel.


I read that config as meaning: only slow slow_group when accessing slow_sites.

Seems to me you want a config like:

 delay_access 1 allow slow_group
 delay_access 1 allow slow_sites

Which would slow BOTH lists independent of each other.

Amos




Re: [squid-users] Start page for Squid

2007-06-20 Thread squid3
 Dear all,

 I am looking for a way of redirecting my users to a start page when
 they first start their browser through my transparent squid proxy. I
 have found one (old) example on the web that explains that one could use
 a perl script as this:

 #!/usr/bin/perl
 $|=1;
 my %logged_in;

  while (<>) {
    if (!defined($logged_in{$_})) {
      $logged_in{$_} = 1;
      print "ERR\n";
    } else {
      print "OK\n";
    }
  }

  So I put this in a file called sessions in /usr/local/bin and give it
  execution permissions.

 Further, the example explains that squid.conf should have something
 similar:

 external_acl_type session ...
 acl session external session

 So I add:

 external_acl_type session /usr/local/bin/session
 acl session external session

 But it needs a FORMAT but I am not sure which...so I add %LOGIN.

 Further more the following needs to be added:

 http_access deny !session
 deny_info http://some_url/ session

 But it doesn't work and I am not redirected. Could someone please direct
 me?

 Thanks in advance!

 best regards,
 Joakim Lagerqvist


FORMAT is a list of the details you want passed to the helper.
I have a helper that does a similar job for me to what you are trying to
set up. The config for that is:

external_acl_type prepay_test ttl=0 negative_ttl=0 %SRC %DST /etc/squid3/squid-check-users.sh
acl prepaid_users external prepay_test
deny_info http://wifi.treenet.co.nz/ prepaid_users
http_access deny localnet !prepaid_users
http_access allow localnet prepaid_users


Amos




Re: [squid-users] Squid as web page cache for dynamic content

2007-06-20 Thread squid3
 Hello,

 I'm totally new to Squid and working for the largest IT news portal in
 Scandinavia. We're looking to replacing our current server side page
 cache environment with a hardware solution (such as Netapp NetCache
 for example). Now would this be possible to do maybe with
 Squid instead?

 So I don't want to use Squid for client proxy, only website content
 cache/acceleration and if possible also distribution for streaming
 media (web tv)..

 // kimmo


Very probably. NetCache and Squid are alternatives to each other in most
areas.
Squid also has an excellent record as a reverse proxy (accelerator),
caching web traffic produced by backend servers so that duplicate requests
save processing.
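
A minimal accelerator sketch for squid 2.6 (the hostname and backend
address below are placeholders, not anything from your setup):

http_port 80 accel defaultsite=www.example.com vhost
cache_peer 192.0.2.10 parent 80 0 no-query originserver name=backend
cache_peer_access backend allow all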

Amos




Re: [squid-users] POST is never cached?

2007-06-20 Thread squid3
 Hello members,

 I want to be sure that squid would not cache any POST pages.
 For example, given this page:
 http://www.example.com/login.php
 It receives the user's POST contents, like username and password.
 Would squid cache this .php page?

 thanks!


No, squid does not cache POST requests or their replies.

Try testing it at http://www.ircache.net/cgi-bin/cacheability.py

Amos



Re: [squid-users] Solaris 10, Squid-2.6.stable13

2007-06-13 Thread squid3
 That very well could be DNS related.  When I don't set the visible_hostname
 and I attempt to create the cache, squid complains about needing to set
 the visible_hostname parameter.  The squid server sits behind a cisco
 router, so it doesn't have an FQDN, only a relative and private name.

 Do you have any recommendations, my dns servers ns1 and ns2.linuxlouis.net
 resolve correctly for all of the external Internet known domains I serve,
 but again the inside system bandaboo1 doesn't have a FQDN.

 Any ideas?


DNS and rDNS are broken.
All Internet connected hosts should have an FQDN that their IP's rDNS points
to. It should be constructed from the unique machine name and the
company's official domain name (would that be linuxlouis.net?).
A machine with no rDNS on the modern internet, particularly a proxy,
is just asking for connection problems.
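
If the box itself cannot be given a public FQDN, you can at least tell
squid what name to present, e.g. (a sketch built from the machine and
domain names in your mail):

visible_hostname bandaboo1.linuxlouis.net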

Amos


 On Wed, 13 Jun 2007, Henrik Nordstrom wrote:

 ons 2007-06-13 klockan 10:18 -0400 skrev [EMAIL PROTECTED]:
 Hello dist,
 I've currently got OpenLDAP, PostgreSQL, and Squid all working
 together.
 Squid for some reason is taking forever to load any URLs requested, even
 when I disable the auth_param integration with OpenLDAP.

 Sounds like you might have a DNS problem or similar. Probably not
 authentication related at all.

 Regards
 Henrik


 --
 Louis Gonzales
 [EMAIL PROTECTED]
 http://www.linuxlouis.net






Re: [squid-users] Poor performance

2007-06-12 Thread squid3
 Hi list.
 - I've a network with 1600 PCs. I'm using to fws (Fedora 3 with
 IPTABLES), and transparent proxy with squid, installed on both FWs.
 - I splited the net to 800 PCs for each fw.
 - I have a very poor network performance. I have a link of 6MBs.
 - I intend also that squid force the cache to some files (EXE, GIF,
 JPG...), as it would supposed to do.
 - How to set squid as FQDN?

Don't quite understand this last question.
How to set a program as a domain name? Please explain.


 Thanks in advance;
 Andrey


 - Interesting things on squid.conf

 cache_swap_low 80
 cache_swap_high 85

 ipcache_size 2048
 fqdncache_size 5000


 httpd_accel_host virtual
 httpd_accel_port 80
 httpd_accel_with_proxy on
 httpd_accel_uses_host_header on


You appear to be running squid 2.5. For transparent proxying, try upgrading
to 2.6, which is much better at it.


 # cache_dir ufs /var/cache/squid max_size directories subdirs
 cache_dir ufs /var/cache/squid 1000 200 10


Try using aufs as the filesystem type; it's quite a bit faster than ufs
under high traffic. Also, for the size of your network you may want to
increase the available cache size.
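
For example (a sketch only; assumes your build includes the aufs store
module, and the size numbers are purely illustrative):

cache_dir aufs /var/cache/squid 20000 64 256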


 cache_mem 500 MB
 maximum_object_size 1000 KB
 minimum_object_size 1 KB
 maximum_object_size_in_memory 300 KB

 fqdncache_size 1

 cache_replacement_policy lru
 memory_replacement_policy lru

 cache_store_log none

 #
 # Uses dnsmasq itself as the server  :-)
 #
 dns_nameservers 192.168.0.252
 dns_testnames www.ufmg.br






Re: [squid-users] pac and dat woes

2007-05-10 Thread squid3
 On Thu, May 10, 2007, Pitti, Raul wrote:
 I am having problem with Firefox 1.5, FF2.XX and IE 6 and 7 and proxy
 autoconfiguration.  After a few days of searching and trying, i am
 unable to use autoconfiguration for my proxy.

 I've made WPAD work but I've not made it work with a DHCP configuration.
 I've done mine with DNS.

 Does anyone here have an example of a WPAD+DHCP configuration? If so I'd
 like to talk to you and document it on the Wiki.


grr ... top-poster.

Yes, I have WPAD+DHCP going. I encountered very similar problems.
There were two workarounds I had to use:

First was discarding all the common online instructions; they only seem
to work for one or the other, not both Ffx and IE.

DO NOT rename option 252 inside the dhcp config. Each time you need it,
send it explicitly by number. There is something about the way most DHCP
agents do name aliasing that IE hates.

Secondly, the DNS wpad.* record MUST have * equal to at least one of the
'domain' settings in resolv.conf on Linux, or 'default-domain' in dhcpd.conf
for Windows clients (there is probably a machine domain config for Windows
but I don't use it).
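
To illustrate with the names from the original post (a sketch only): if
clients receive 'rimith.local' as their search domain, the zone needs an
entry like

wpad.rimith.local.   IN   A   192.168.1.17

and the web server there must serve /wpad.dat with the MIME type
application/x-ns-proxy-autoconfig.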

Aside from that, each Ffx has to be set explicitly to 'Automatically
Detect Network Settings'. The default is a forced DIRECT connection.

I will be back at the machines that do this in a few hours and can give
you exact examples then.

Amos


 1. I have the following dhcpd.conf file:

 max-lease-time 28800;
 default-lease-time 14400;
 option subnet-mask 255.255.255.0;
 ddns-update-style interim;
 option WPAD code 252 = text;
 #
 # rimith.local
 subnet 192.168.1.0 netmask 255.255.225.0 {
 option WPAD "http://192.168.1.17/proxy.pac";
 option routers 192.168.1.20;
 dynamic-bootp-lease-length 10;
 ignore client-updates;
 option domain-name-servers 192.168.1.17, 200.75.200.2;
 max-lease-time 14400;
 ddns-updates off;
 default-lease-time 4000;
 range 192.168.1.126 192.168.1.239;
 }

 2.  the MIME type has been set on the webserver.

 3. also, i have my internal dns set to point wpad.rimith.local to the
 server 192.168.1.17, and also i have a link for wpad.dat pointing to
 proxy.pac on the root of the webserver.

 None of my clients are able to set the proxy automatically.  But if I
 set the address for the pac file manually, everything works o.k.

 can someone shed some light on my problem?

 thanks!
 RP





 --
 
 Raúl Pittí Palma, Eng.

 Global Engineering and Technology S.A.
 mobile (507)-6616-0194
 office (507)-390-4338
 Republic of Panama
 www.globaltecsa.com

 --
 - Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid
 Support -
 - $25/pm entry-level bandwidth-capped VPSes available in WA -





Re: [squid-users] Bandwidth Requirements

2007-05-10 Thread squid3
 I am looking at implementing squid for one of my clients and have a
 question regarding bandwidth usage. In the scenario I will have multiple
 locations with very few PCs, approximately 2-3 machines per location.

 If I setup a main squid server in one of my main locations with a
 standard DSL connection (3.0Mbps down and 512K up) and VPN the stores
 into that main server, will I notice a large delay when waiting for
 pages to load?

 My second question: if I use that scenario, will the internet traffic
 all flow through the proxy, or will it just check the URL and then
 use the default route, which will be the local internet connection?

 Thanks in advance.

 Dustin


Um, the best use of Squid is to prevent usage of slow links like your 512K
uplink. If the clients are on the other end of that link from squid then
you really need a good reason to force them to use it.

On the information you have given, the answers are "definitely" and
"maybe". But some info on what you are trying to achieve may change that.

Amos



Re: [squid-users] new website: final beta

2007-05-09 Thread squid3
 On Wed, May 09, 2007 at 02:14:33PM +0200, Ralf Hildebrandt wrote:
 
  Nice work Adrian!

 Definitely.


 Struth Bruce! Nice one mate!

 Sort of quoting one of Yahweh's olde proverbs:
 ...squidmaster, cache thy self

 Will the final site be cache-able?

 I don't have the web skills that you do, but I found the easiest way to
 make php's cache-able was to lynx dump the php to a .html, and have
 apache serve index.html in preference to index.phtml. Naturally, all
 links to pages must be to the .html and not the .php:


Whereas I have had a completely different experience with cachability.
PHP can easily prepend headers that specify cachability and duration.
Alternatively, Apache can do that itself with VirtualHost or .htaccess
configs.
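
For instance, a minimal sketch of the PHP side (the one-hour lifetime is an
arbitrary example):

<?php
// mark the page as cacheable by shared caches for one hour
header('Cache-Control: public, max-age=3600');
header('Expires: ' . gmdate('D, d M Y H:i:s', time() + 3600) . ' GMT');
?>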

Amos




Re: [squid-users] new website: final beta

2007-05-09 Thread squid3
 On Wed, May 09, 2007, Craig Skinner wrote:
 On Wed, May 09, 2007 at 02:14:33PM +0200, Ralf Hildebrandt wrote:
  
   Nice work Adrian!
 
  Definitely.
 

 Struth Bruce! Nice one mate!

 Sort of quoting one of Yahweh's olde proverbs:
 ...squidmaster, cache thy self

 Will the final site be cache-able?

 I don't have the web skills that you do, but I found the easiest way to
 make php's cache-able was to lynx dump the php to a .html, and have
 apache serve index.html in preference to index.phtml. Naturally, all
 links to pages must be to the .html and not the .php:

 It will be. I just haven't yet added ETag and Expiry generation to the
 PHP code. I'll see what I can do. I haven't found an example of a really
 good dynamic site that actually sets appropriate cachability tags
 (and does so with minimal load to the server - there's no point in having
 to do the whole database query set and parse the database replies
 just to generate etags, for example!) so I figure this can double as
 that.

 Now, where's that spare time..




 Adrian


Sitting here in the country next door.

I've never done ETag, but the way you have set up the templating is not too
dissimilar to the way my sites work. I have some ideas for base metrics
I'll talk about when I see you on irc tonight.

Amos





RE: [squid-users] Squid + Policy-Based Routing +LoadBalancing/Clustering???

2007-04-29 Thread squid3
 Aaa, I see your point. I wasn't thinking before I spoke.  To bypass
 the normal route to the outside world would be in violation of our
 security policy and would set a precedent that I don't think our CIO is
 ready to defend


That sounds ... to me as a security consultant ... like you have a very
troubling security setup there. The load balancer _outside_ the FW,
inaccessible to squid directly??

You should be considering the load balancer, squid, and any other servers
as valuable company resources to protect from both the internet and some
clients. FW outbound and inbound, but not between them (unless you're
_very_ paranoid and have a FW on each machine ... which is a story for
later...).

But that is all besides Henriks point. Which was...

Squid should be able to go out via the FW directly, not through a load
balancer which may easily loop the traffic back to squid again, and
again, ...

Thus the paths should look like this:

User -> FW/Router -> Balancer -> Squid -> FW -> Internet
and
Internet -> FW -> Squid -> FW/Router -> User

FW and Router should be considered fast, like a switch: something that can
be traversed easily more than once, but only as an invisible hop to
elsewhere.

There is no need for squid to go through the balancer twice. The
squid-to-internet part _cannot_ be balanced at your end, by the nature of
the protocols.
Doing so merely doubles the traffic going through your hardware. Not
exactly something you want to do under any circumstances.

Amos

PS. Oh, and PLEASE do not claim confidentiality on writings which are
published for the entire world to see in perpetuity.


 ===

 The information contained in this ELECTRONIC MAIL transmission is
 confidential.  It may also be a privileged work product or proprietary
 information. This information is intended for the exclusive use of the
 addressee(s). If you are not the intended recipient, you are hereby
 notified that any use, disclosure, dissemination, distribution [other than
 to the addressee(s)], copying or taking of any action because of this
 information is strictly prohibited.

 ===

 -Original Message-
 From: Henrik Nordstrom [mailto:[EMAIL PROTECTED]
 Sent: Sunday, April 29, 2007 12:26 PM
 To: Fiero, Paul
 Cc: squid-users@squid-cache.org
 Subject: RE: [squid-users] Squid + Policy-Based Routing
 +LoadBalancing/Clustering???

 lör 2007-04-28 klockan 22:10 -0500 skrev Fiero, Paul:
 Ack, that isn't the answer I was looking for.  We do have a load balancer
 that we could use but, unfortunately, it means traffic would go from
 the router, through the firewall, through the load balancer, to squid,
 back through the load balancer, back through the firewall then out to
 the internet and then it would return through that path.

 Why? The load balancer path is only for traffic Clients -> Squid; how Squid
 then fetches the content is irrelevant.
 Regards
 Henrik







Re: [squid-users] reverse proxy for squid2.5.10

2007-03-04 Thread squid3
 Hi,

 we're using squid2.5.10 (basically because we're using debian apt
 package).

Did you mean to say you are stuck with the Debian stable release?

squid 2.6 and squid 3.0 are both available through Debian apt at the
unstable level.
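
For example (a sketch, assuming an 'unstable' entry already exists in your
apt sources; package names as I recall them in Debian at the time):

apt-get install -t unstable squid     # squid 2.6
apt-get install -t unstable squid3    # squid 3.0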


Amos




Re: [squid-users] Errors when Starting Squid

2007-03-02 Thread squid3
 Folks,
 When I start squid, i get the following errors:

 helperOpenServers: Starting 10 'sqred.plx' processes
 2007/03/01 21:45:50| ipcCreate: CHILD: c:/sqred/sqred.plx: (8) Exec
 format error

[snip many duplicates]

 I have noticed that it is due to this command in the .conf file

 url_rewrite_program c:/sqred/sqred.plx

 When this line is commented, the proxy works fine.

 Does anyone have an idea as to what the Exec Format error is?

 Thanks

 Alan


This is squid attempting to start its child processes. As it does so,
Windows returns the 'Exec format error' and squid abandons the startup
procedure.

The Windows Server documentation indicates this error is given out by
Windows when a binary file cannot be executed, usually on corrupt
binaries.

Check that c:/sqred/sqred.plx is actually in a win32-acceptable executable
format. The Windows command line should be able to execute it, or give you
a better description of the problem.
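
If sqred.plx is a Perl script rather than a compiled executable, Windows
cannot exec it directly. A likely fix (the perl.exe path below is an
assumption; adjust to your install) is to run it via the interpreter:

url_rewrite_program c:/perl/bin/perl.exe c:/sqred/sqred.plx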

Amos