[squid-users] Performance (RPS) on 2.7
Hi, I'm looking for some ballpark requests/second expectations for a Squid 2.7 machine (modern quad core, 8+ GB RAM, RHEL 5.4 x64) with all caching disabled. I will not be caching any requests coming into the Squid server (no cache_dirs, etc). All of the baseline stats that I can find seem to assume that the servers are caching. I can provide more data if needed. Any ideas? Thanks, Josh
RE: [squid-users] Performance (RPS) on 2.7
Could you point me to the published stats? I was expecting to at least do 500-600 requests/sec. ISA can do this on mediocre hardware with no problem. Thanks, Josh -Original Message- From: Amos Jeffries [mailto:squ...@treenet.co.nz] Sent: Tuesday, February 02, 2010 2:16 PM To: squid-users@squid-cache.org Subject: Re: [squid-users] Performance (RPS) on 2.7 Baird, Josh wrote: snip Take the published stats and divide by ten. Network fetches are about 10x-15x slower than local disk fetches. Amos -- Please be using Current Stable Squid 2.7.STABLE7 or 3.0.STABLE23 Current Beta Squid 3.1.0.16
RE: [squid-users] Performance (RPS) on 2.7
FWIW, I'm talking about 20-30Mbit of traffic. Josh -Original Message- From: Amos Jeffries [mailto:squ...@treenet.co.nz] Sent: Tuesday, February 02, 2010 2:16 PM To: squid-users@squid-cache.org Subject: Re: [squid-users] Performance (RPS) on 2.7 snip
[squid-users] Requests per sec from squidclient?
Does squid keep an internal counter of requests (HTTP, etc) per second? All I see from 'squidclient mgr:info' is a requests per minute counter for HTTP requests: Number of HTTP requests received: 92 Average HTTP requests per minute since start: 26.7 Thanks, Josh
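Squid's cache manager only exposes a cumulative request counter and a per-minute average, so a requests-per-second figure has to be derived by sampling the counter twice and dividing by the interval. A minimal sketch (the sample strings below stand in for hypothetical `squidclient mgr:info` output):

```python
import re

def parse_http_requests(mgr_info_text):
    """Extract the cumulative HTTP request counter from mgr:info output."""
    m = re.search(r"Number of HTTP requests received:\s*(\d+)", mgr_info_text)
    if m is None:
        raise ValueError("counter not found in mgr:info output")
    return int(m.group(1))

def requests_per_second(sample_a, sample_b, interval_seconds):
    """Approximate RPS from two counter samples taken interval_seconds apart."""
    return (parse_http_requests(sample_b) - parse_http_requests(sample_a)) / interval_seconds

# Two hypothetical samples taken 60 seconds apart:
t0 = "Number of HTTP requests received:  92"
t1 = "Number of HTTP requests received:  392"
print(requests_per_second(t0, t1, 60))  # -> 5.0
```

In practice the two samples would come from running `squidclient mgr:info` twice and capturing its output.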
[squid-users] Ignore requests from certain hosts in access_log
I am trying to ignore requests from two IP addresses in my access_log. These two hosts connect every second and do health checks of the proxy service and I would like to eliminate the access_log spam that they create. Here is what I am trying:

acl loadbalancers src 172.26.100.136/255.255.255.255
acl loadbalancers src 172.26.100.137/255.255.255.255
access_log /var/log/squid/access.log squid !loadbalancers

This does not seem to have any effect. Requests from 172.26.100.136 and .137 are still appearing in the access_log. Any ideas? Running Squid 2.6 from EL5: squid-2.6.STABLE21-3.el5. Thanks, Josh
Re: [squid-users] Ignore requests from certain hosts in access_log
Ok, that sort of worked. I have a pair of load balancers sitting in front of my Squid proxy farm. The load balancers insert the X-Forwarded-For header into each HTTP request, which allows Squid to log their connections using the real client source IP (extracted from X-Forwarded-For). In reality, the connections to the squid servers are being made directly from the load balancers. When I use log_access to deny logging for the load balancers' IP addresses, -nothing- gets logged to access_log. I am attempting to not log the HTTP health checks from 10.26.100.130/10.26.100.131 but still log the other traffic. It doesn't seem that log_access is X-Forwarded-For aware? Any ideas?

acl loadbalancers src 10.26.100.130/255.255.255.255
acl loadbalancers src 10.26.100.131/255.255.255.255
log_access deny !loadbalancers

Thanks, Josh From: Baird, Josh jba...@follett.com snip What about 'log_access' ? JD
RE: [squid-users] Ignore requests from certain hosts in access_log
Hi Amos, Same results. Nothing coming from the load balancers is being logged (not even requests using X-Forwarded-For). Here is my configuration:

acl loadbalancers src x.x.x.y/255.255.255.255
acl loadbalancers src x.x.x.z/255.255.255.255
follow_x_forwarded_for allow loadbalancers
log_uses_indirect_client on
acl_uses_indirect_client on
# Define Logging (do not log loadbalancer health checks)
access_log /var/log/squid/access.log squid
log_access deny !loadbalancers

Without the log_access directive enabled, all requests are logged using their X-Forwarded-For source address:

1268749629.423 354 172.26.100.23 TCP_MISS/200 1475 GET http://webmail.blah.net/? - DIRECT/72.29.72.189 text/plain

These are the types of requests that I am trying to prevent from being logged:

1268749630.481 0 x.x.x.y TCP_DENIED/400 2570 GET error:invalid-request - NONE/- text/html

(where x.x.x.y is the load balancer, and the request is a health check of the web proxy service) Thanks, Josh -Original Message- From: Amos Jeffries [mailto:squ...@treenet.co.nz] Sent: Monday, March 15, 2010 6:52 PM To: squid-users@squid-cache.org Subject: Re: [squid-users] Ignore requests from certain hosts in access_log On Mon, 15 Mar 2010 12:15:49 -0500, Baird, Josh jba...@follett.com wrote: snip
acl loadbalancers src 10.26.100.130/255.255.255.255
acl loadbalancers src 10.26.100.131/255.255.255.255
log_access deny !loadbalancers

Ah, you will require these as well:

# to trust what the load balancers report for XFF
follow_x_forwarded_for allow loadbalancers
# to use the XFF details in the logs
log_uses_indirect_client on
# to use the XFF details in ACL tests
# (telling loadbalancer-generated requests from relayed ones)
acl_uses_indirect_client on

Amos
RE: [squid-users] Ignore requests from certain hosts in access_log
Amos, Do you think that what I am trying to achieve is possible? Thanks, Josh -Original Message- From: Baird, Josh Sent: Tuesday, March 16, 2010 9:25 AM To: Amos Jeffries; squid-users@squid-cache.org Subject: RE: [squid-users] Ignore requests from certain hosts in access_log snip
RE: [squid-users] Ignore requests from certain hosts in access_log
And, you still see the non-healthcheck, normal traffic logged using the X-Forwarded-For information? Here is my entire config, maybe this will help:

# What port do we want to listen on?
http_port 80

# Define refresh patterns for content types
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern . 0 20% 4320

# Define network ACL's
acl all src 0.0.0.0/0.0.0.0
acl localhost src 127.0.0.1/255.255.255.255
acl localnet src 10.0.0.0/8      # RFC 1918 possible internal network
acl localnet src 172.16.0.0/12   # RFC 1918 possible internal network
acl localnet src 192.168.0.0/16  # RFC 1918 possible internal network
acl loadbalancers src 10.26.100.136/255.255.255.255
acl loadbalancers src 10.26.100.137/255.255.255.255

# Define access ACL's. To allow SSL tunneling to a new port, add that port
# to the ssl_ports ACL. To allow HTTP access over new ports, add that port
# to the safe_ports ACL, and so on.
acl manager proto cache_object
acl ssl_ports port /etc/squid/acl-ssl_ports
acl safe_ports port /etc/squid/acl-safe_ports
acl deny_sites dstdomain /etc/squid/acl-deny_sites
acl deny_browsers browser /etc/squid/acl-deny_browsers
acl CONNECT method CONNECT

# Define HTTP access rules
http_access deny manager !localhost
http_access deny !safe_ports
http_access deny CONNECT !ssl_ports
http_access deny deny_sites
http_access deny deny_browsers
http_access allow localhost
http_access allow localnet
http_access deny all

# Allow icp_access to allowed_src_hosts
# icp_access allow allowed_src_hosts
# icp_access deny all_src

# We want to append the X-Forwarded-For header for Websense
follow_x_forwarded_for allow loadbalancers
log_uses_indirect_client on
acl_uses_indirect_client on

# Define Logging (do not log loadbalancer health checks)
access_log /var/log/squid/access.log squid
log_access deny !loadbalancers

coredump_dir /var/spool/squid
pid_filename /var/run/squid.pid
httpd_suppress_version_string on
shutdown_lifetime 5 seconds

# We don't cache, so there is no need to waste disk I/O on cache logging
cache_store_log none

# Define SNMP properties
# We will proxy requests to Squid's internal agent from net-snmp
acl snmpprivate snmp_community fcsnmp1ro
snmp_port 3401
snmp_access allow snmpprivate localhost
snmp_access deny all

# Allow non-FQDN hostnames, even though they are bad bad bad!
dns_defnames on

# Disable all caching
cache deny all
cache_dir null /tmp

# Misc Configuration
negative_ttl 0

-Original Message- From: Amos Jeffries [mailto:squ...@treenet.co.nz] Sent: Friday, March 19, 2010 6:55 PM To: squid-users@squid-cache.org Subject: Re: [squid-users] Ignore requests from certain hosts in access_log Baird, Josh wrote: Amos, Do you think that what I am trying to achieve is possible? Yes. Do exactly the same myself with a simple !aclname at the end of access_log directives. I can't figure out why neither that nor the longer log_access is working for you. Amos -Original Message- From: Baird, Josh Sent: Tuesday, March 16, 2010 9:25 AM To: Amos Jeffries; squid-users@squid-cache.org Subject: RE: [squid-users] Ignore requests from certain hosts in access_log snip
RE: [squid-users] Ignore requests from certain hosts in access_log
Wow, I still can't seem to get this working! I can't figure out what I am doing wrong:

# Put the load balancers in an ACL so we can ignore requests (health checks) from them
acl loadbalancers src 172.26.100.136/255.255.255.255
acl loadbalancers src 172.26.100.137/255.255.255.255

# We want to append the X-Forwarded-For header
follow_x_forwarded_for allow loadbalancers
log_uses_indirect_client on
acl_uses_indirect_client on

# Define Logging (do not log loadbalancer health checks)
access_log /var/log/squid/access.log squid
log_access deny loadbalancers

coredump_dir /var/spool/squid
pid_filename /var/run/squid.pid
httpd_suppress_version_string on
shutdown_lifetime 5 seconds

# We don't cache, so there is no need to waste disk I/O on cache logging
cache_store_log none

These changes aren't suppressing any logs: health checks still show up:

1269265701.388 0 172.26.100.136 TCP_DENIED/400 2570 GET error:invalid-request - NONE/- text/html
1269265703.009 0 172.26.100.137 TCP_DENIED/400 2570 GET error:invalid-request - NONE/- text/html
1269265706.389 0 172.26.100.136 TCP_DENIED/400 2570 GET error:invalid-request - NONE/- text/html
1269265708.010 0 172.26.100.137 TCP_DENIED/400 2570 GET error:invalid-request - NONE/- text/html

.. as well as normal traffic using X-Forwarded-For. What am I doing wrong here? Thanks, Josh -Original Message- From: Amos Jeffries [mailto:squ...@treenet.co.nz] Sent: Friday, March 19, 2010 7:29 PM To: squid-users@squid-cache.org Subject: Re: [squid-users] Ignore requests from certain hosts in access_log Baird, Josh wrote: And, you still see the non-healthcheck, normal traffic logged using the X-Forwarded-For information? Yes.
Here is my entire config, maybe this will help: snip # We want to append the X-Forwarded-For header for Websense follow_x_forwarded_for allow loadbalancers log_uses_indirect_client on acl_uses_indirect_client on # Define Logging (do not log loadbalancer health checks) access_log /var/log/squid/access.log squid log_access deny !loadbalancers Gah. Stupid me not reading that right earlier. Means: deny all requests that are NOT loadbalancers. You are wanting: log_access deny loadbalancers So sorry. Amos -- Please be using Current Stable Squid 2.7.STABLE8 or 3.0.STABLE25 Current Beta Squid 3.1.0.18
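Gathering the pieces of this thread in one place, the combination being aimed at looks roughly like this (the addresses are the examples from earlier messages; directive names are Squid 2.x):

```
# Identify the load balancers by their real (direct) addresses
acl loadbalancers src 172.26.100.136/255.255.255.255
acl loadbalancers src 172.26.100.137/255.255.255.255

# Trust and use their X-Forwarded-For headers
follow_x_forwarded_for allow loadbalancers
log_uses_indirect_client on
acl_uses_indirect_client on

# Health checks carry no XFF header, so their indirect client is still
# the load balancer itself and matches the ACL; relayed user traffic
# resolves to the real client IP and does not match
access_log /var/log/squid/access.log squid
log_access deny loadbalancers
```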
[squid-users] Health Check HTTP Request to Squid
I need to configure a pair of load balancers in front of Squid to send periodic HTTP health checks to my Squid servers to make sure they are up and functioning properly. How should I structure this HTTP request? A GET / results in an invalid-request error. What type of request can I use that will differentiate itself from normal proxied requests and not cause Squid to bark at me for being invalid? Thanks
RE: [squid-users] Health Check HTTP Request to Squid
What I have done is configure the load balancers to do a GET on a bogus URL:

GET http://health.check/please/ignore

Then, to ignore these requests and prevent log spam:

acl healthcheck dstdomain health.check
log_access deny healthcheck

Thanks, Josh -Original Message- From: Baird, Josh Sent: Tuesday, March 23, 2010 9:45 AM To: squid-users@squid-cache.org Subject: [squid-users] Health Check HTTP Request to Squid snip
[squid-users] HTTPS and Squid
Typically, all of our proxy clients connect to our Squid servers via HTTP (TCP/80). If they request an HTTPS site, Squid will CONNECT to the site and tunnel the data back to the client via HTTP. I have a scenario now where the entire stream needs to be HTTPS:

User -(HTTPS)-> Squid -(HTTPS)-> Destination Server on Internet

How would I support this in Squid? Would I need to add an https_port and install an SSL certificate on the proxy server? Would the proxy server then decrypt data from the User and re-encrypt it using the Destination Server's SSL certificate on the way out to the Internet? Thanks, Josh
[squid-users] RE: HTTPS and Squid
Ok, perhaps I misunderstood how CONNECT works. When Squid CONNECTs to a remote webserver via HTTPS, the tunnel is created between the user and the remote server... so is all data sent over HTTPS (from the remote server to the client using the squid proxy)? Thanks, Josh -Original Message- From: Baird, Josh Sent: Friday, May 07, 2010 1:17 PM To: 'squid-users@squid-cache.org' Subject: HTTPS and Squid snip
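To answer the question in passing: yes. A CONNECT tunnel looks roughly like this on the wire (hostname illustrative); after the 200 response, Squid only relays opaque bytes, and the TLS session runs end-to-end between the browser and the origin server:

```
Client -> Squid (plain HTTP on TCP/80):  CONNECT secure.example.com:443 HTTP/1.1
                                         Host: secure.example.com:443

Squid -> Client:                         HTTP/1.0 200 Connection established

Client <-> origin (relayed by Squid):    TLS handshake, then encrypted HTTP
```

So the client-to-proxy hop carries only the CONNECT request in the clear; everything after it is encrypted, and the proxy never sees plaintext.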
RE: [squid-users] Active/Backup Squid cluster
Agreed. Heartbeat is likely the easiest way to achieve your desired active/passive configuration. If you want to introduce load balancing, you can take a look at LVS for Linux or a more expensive, hardware-based solution like F5's BigIP. Josh -Original Message- From: Henrik Nordström [mailto:hen...@henriknordstrom.net] Sent: Monday, June 21, 2010 11:00 AM To: Nick Cairncross Cc: squid-users@squid-cache.org Subject: Re: [squid-users] Active/Backup Squid cluster On Mon 2010-06-21 at 14:11 +0100, Nick Cairncross wrote: One thing though is that I'm not wanting to NLB - just have the failover capability if I want it. Does your setup still allow that? Then you only need heartbeat with a VIP for each client VLAN. Regards Henrik
[squid-users] Redirector 302 Redirects not working for CONNECT method
Hi, We are currently running Squid 2.6 out of the RHEL 5.5 repos. We use WebSense to filter web traffic; it communicates with Squid via a redirector plugin. HTTP blocking works fine, but when users try to access an HTTPS page that is blocked, in IE7 the user gets a generic "The Page Cannot Be Found" error (not a Squid-specific error). I believe this is due to: http://bugs.squid-cache.org/show_bug.cgi?id=1412 There seems to have been a patch created for this (by Henrik)? I take it that this patch was not included in the EL5 Squid 2.6 RPM: squid-2.6.STABLE21-6.el5. Are my assumptions correct? If so, what are my options for fixing the problem of the generic error message? What should happen is that the redirector plugin redirects to a WebSense server and displays a block page, but displaying a more specific Squid error page would be sufficient in this case. Additional information may be found at: http://kb.websense.com/display/4/kb/article.aspx?aid=3260&n=1&docid=1629073&tab=search Thanks, Josh
RE: [squid-users] Redirector 302 Redirects not working for CONNECT method
So, this patch is useless to me? Do you know of *any* workaround that will allow me to display a more specific error message? ISA somehow pulls this off. Thanks, Josh -Original Message- From: Henrik Nordström [mailto:hen...@henriknordstrom.net] Sent: Monday, June 21, 2010 12:45 PM To: Baird, Josh Cc: squid-users@squid-cache.org Subject: Re: [squid-users] Redirector 302 Redirects not working for CONNECT method On Mon 2010-06-21 at 11:34 -0500, Baird, Josh wrote: HTTP blocking works fine, but when users try to access an HTTPS page that is blocked, in IE7 the user gets a generic "The Page Cannot Be Found" error (not a Squid-specific error). I believe this is due to: http://bugs.squid-cache.org/show_bug.cgi?id=1412 That bug report is very old, and the patch was released in squid-2.5.STABLE12 (22 Oct 2005). You are running a 2.6 release from many years after that: squid-2.6.STABLE21 (27 June 2008). There have been url rewriter issues in later versions as well, but I do not remember which versions. However, browsers are very picky about non-HTTPS responses to CONNECT tunnel requests these days, and generally refuse to display any error message sent by a proxy in response to CONNECT, claiming security issues. Regards Henrik
RE: [squid-users] Redirector 302 Redirects not working for CONNECT method
Ah, ok. Just read the documentation for deny_info. So, typically, the last ACL is on the last http_access deny line, which I have as: http_access deny all. Applying the deny_info to the all ACL does not seem to be working, so I am guessing that squid is actually denying the request on another ACL. Is there a way to debug this and figure out which ACL I should be using with deny_info? Thanks, Josh -Original Message- From: Henrik Nordström [mailto:hen...@henriknordstrom.net] Sent: Monday, June 21, 2010 1:09 PM To: Baird, Josh Cc: squid-users@squid-cache.org Subject: RE: [squid-users] Redirector 302 Redirects not working for CONNECT method On Mon 2010-06-21 at 13:02 -0500, Baird, Josh wrote: So, this patch is useless to me? Do you know of *any* workaround that will allow me to display a more specific error message? ISA somehow pulls this off. http_access + deny_info? Regards Henrik
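One hedged way to find the denying ACL is to raise Squid's debug verbosity for the access-control code and watch cache.log while reproducing the block (section 28 covers ACL checks in Squid's debug_options numbering, worth verifying against your version; ERR_CUSTOM_BLOCKED is a made-up template name):

```
# Log ACL evaluation detail to cache.log while reproducing the request
debug_options ALL,1 28,3

# Once the denying ACL is identified in the log, bind the custom error
# page to that ACL name
deny_info ERR_CUSTOM_BLOCKED the_denying_acl
```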
[squid-users] Acceptable Service Times?
Hi, I have a pair of forward proxies (Squid 2.6.STABLE21/EL5) averaging about 500-600 requests per minute currently. All caching has been disabled. Some users are reporting high latency and slow browsing. Below is a snapshot of stats from Squid. Could someone tell me if anything stands out that would cause a problem? Service times look acceptable to me, but perhaps I am reading them incorrectly? You can also view a graph of 5min service times at http://thunder.jbdesign.net/~jbaird/servicetimes.png

Squid Object Cache: Version 2.6.STABLE21
Start Time: Sun, 27 Jun 2010 15:17:41 GMT
Current Time: Mon, 28 Jun 2010 20:04:09 GMT
Connection information for squid:
  Number of clients accessing cache: 5
  Number of HTTP requests received: 848091
  Number of ICP messages received: 0
  Number of ICP messages sent: 0
  Number of queued ICP replies: 0
  Request failure ratio: 0.00
  Average HTTP requests per minute since start: 491.2
  Average ICP messages per minute since start: 0.0
  Select loop called: 20321720 times, 5.097 ms avg
Cache information for squid:
  Request Hit Ratios: 5min: 0.0%, 60min: 0.0%
  Byte Hit Ratios: 5min: 0.7%, 60min: 1.0%
  Request Memory Hit Ratios: 5min: 0.0%, 60min: 0.0%
  Request Disk Hit Ratios: 5min: 0.0%, 60min: 0.0%
  Storage Swap size: 0 KB
  Storage Mem size: 288 KB
  Mean Object Size: 0.00 KB
  Requests given to unlinkd: 0
Median Service Times (seconds)  5 min  60 min:
  HTTP Requests (All): 0.06286 0.05633
  Cache Misses: 0.06286 0.05633
  Cache Hits: 0.0 0.0
  Near Hits: 0.0 0.0
  Not-Modified Replies: 0.0 0.0
  DNS Lookups: 0.00573 0.00669
  ICP Queries: 0.0 0.0
Resource usage for squid:
  UP Time: 103588.205 seconds
  CPU Time: 493.167 seconds
  CPU Usage: 0.48%
  CPU Usage, 5 minute avg: 1.35%
  CPU Usage, 60 minute avg: 1.63%
  Process Data Segment Size via sbrk(): 17996 KB
  Maximum Resident Size: 0 KB
  Page faults with physical i/o: 0
Memory usage for squid via mallinfo():
  Total space in arena: 18128 KB
  Ordinary blocks: 14029 KB 213 blks
  Small blocks: 0 KB 0 blks
  Holding blocks: 356 KB 1 blks
  Free Small blocks: 0 KB
  Free Ordinary blocks: 4098 KB
  Total in use: 14385 KB 78%
  Total free: 4098 KB 22%
  Total size: 18484 KB
Memory accounted for:
  Total accounted: 4184 KB
  memPoolAlloc calls: 116115679
  memPoolFree calls: 116109502
File descriptor usage for squid:
  Maximum number of file descriptors: 1024
  Largest file desc currently in use: 544
  Number of file desc currently in use: 408
  Files queued for open: 0
  Available number of file descriptors: 616
  Reserved number of file descriptors: 100
  Store Disk files open: 0
  IO loop method: epoll
Internal Data Structures:
  57 StoreEntries
  57 StoreEntries with MemObjects
  26 Hot Object Cache Items
  0 on-disk objects

Thanks, Josh
[squid-users] Problem accessing site with variable in URL
Hi, I have a Squid 2.6.STABLE21 (EL5) forward proxy that is having problems with one site: http://gw.vtrenz.net/?DPO95NI5KU It looks like Squid is dropping the text in the URL after the ?, causing the remote website to return incorrect data: 1278944523.919 1223 172.26.103.175 TCP_MISS/500 1723 GET http://gw.vtrenz.net/? - DIRECT/74.112.68.36 text/html Is it normal for access_log to drop query strings like this, or is Squid really requesting the URL without the text after the ?? Is anyone else able to reproduce this? Thanks, Josh
Re: [squid-users] Problem accessing site with variable in URL
Amos, Do you have any other ideas on why this site would break using Squid? Thanks, Josh Baird, Josh wrote: snip Is this normal for access_log's to drop variables like this, or is Squid really requesting the URL without the additional text after the ?? Squid does not normally log the query string. It can be many KB long. It still gets passed along in the transaction though. Configure: strip_query_terms off Amos
[squid-users] More Squid+Facebook problems?
Has anyone noticed any issues accessing Facebook this morning behind a forward Squid proxy (I am running 2.6STABLE21/EL5). It seems like the first time that I access the site, Squid is returning a Read Error - Connection Reset by Peer (104). Refreshing the page usually temporarily fixes the problem and then sometimes Facebook will just display a plain white page, etc. Nothing seems to be logged to cache_log or access_log. Any ideas? I know there was a Squid+Facebook issue discovered a couple of weeks ago, but I believe that was since fixed on Facebook's side. Thanks, Josh
[squid-users] Reverse DNS Problems and Delays
Should I encounter long delays when accessing a HTTP site via IP (not FQDN/friendly name/etc) that does not have a valid reverse DNS record? I am encountering an issue where it takes 25-30 seconds to access a site that does not have a valid reverse DNS record. Using Squid 2.6/EL5. Thanks, Josh
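A guess worth checking, assuming the delay really is a PTR-lookup timeout: Squid performs reverse lookups when asked to log FQDNs or when a domain-based ACL has to be evaluated against a literal-IP URL, and each missing PTR record then waits out the resolver timeout. A hedged configuration sketch (directive names from Squid 2.x; verify against your build):

```
# Don't resolve client/server hostnames for logging (off is the
# default, but worth confirming it has not been enabled)
log_fqdn off

# dstdomain ACLs matched against a literal-IP URL can trigger a
# reverse lookup; IP-based dst ACLs avoid that for such sites
```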
[squid-users] OT - FTP Proxy?
Sorry for the off-topic post, but this seems like a decent place to ask. What FTP proxy are people using these days? Is there a better alternative to Frox? Thanks, Josh
[squid-users] Performance between 2.x/3.1
Are there any docs that reference performance differences between 2.6/2.7 and 3.1? I'm running several 2.6 clusters (forward proxy) with all caching disabled, doing 20-30 Mbps per node. The nodes are not far from idle in terms of CPU and memory. They are currently running RHEL5/x86_64. Should I expect to see similar performance on 3.1, or even better? Thanks, Josh
RE: [squid-users] can't access site fna.gov.co:8081
You aren't allowing tunneling/CONNECT to TCP/8081. It would appear that you need to adjust your ACLs to allow this. -Original Message- From: Oscar Andrés Eraso Moncayo [mailto:oscar.er...@sisa.com.co] Sent: Wednesday, April 27, 2011 1:07 PM To: squid-users@squid-cache.org Subject: [squid-users] can't access site fna.gov.co:8081 Hello, I can't access the site https://www.fna.gov.co:8081/BancaVirtual/a/login.jsp. The error in the log is: 1303923860.335 4 10.120.5.41 TCP_DENIED/403 1455 CONNECT www.fna.gov.co:8081 With no proxy settings in the browser, access works correctly. Help me please. Best regards,
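Using the ACL names from the default squid.conf (yours may differ), the usual fix is to add the non-standard port to the CONNECT whitelist:

```
# Allow CONNECT tunnels to 8081 in addition to the default 443
acl SSL_ports port 443
acl SSL_ports port 8081
acl Safe_ports port 8081
acl CONNECT method CONNECT
http_access deny CONNECT !SSL_ports
```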
RE: [squid-users] Can Squid Load Balancing be Dynamic/Conditional against SNMP Monitoring?
I would take a look at LVS or HAProxy. Josh -Original Message- From: Billie Martin [mailto:ex.wife.bil...@gmail.com] Sent: Monday, August 15, 2011 3:34 PM To: squid-users@squid-cache.org Subject: [squid-users] Can Squid Load Balancing be Dynamic/Conditional against SNMP Monitoring? I understand how Squid can be configured as a Load Balancer rotating round-robin among a set of web servers IP addresses: - http://wiki.squid-cache.org/ConfigExamples/Strange/RotatingIPs - http://dlc.sun.com/osol/docs/content/SQUIDBALANCE/ggyxf.html The problem is that if one of those web servers is down (either administratively or unplanned), Squid will continue to send traffic to it, right? If SNMP is enabled on Squid (see below), can Squid monitor the web servers over SNMP and dynamically allocate traffic based on whether the servers are up or not? If this is possible, how might it be configured, and where might it be documented? Is there a better way to do this? Would it be better to manage the dynamic process with Heartbeat and Linux HA (with http://en.wikipedia.org/wiki/Linux-HA), even if there was only a single Squid server and not a cluster? I would greatly appreciate ANY discussion on this. Advantages, disadvantages, configurations, alternatives, etc. Many thanks in advance.
---
To use SNMP with squid, it must be enabled with the configure script and rebuilt. To enable SNMP in squid, go to the squid src directory and follow the steps given below:

./configure --enable-snmp [ ... other configure options ]
make all
make install

And edit the following tags in the squid.conf file:

acl aclname snmp_community public
snmp_access aclname

Once you configure squid and the SNMP server, start SNMP and squid.
---
[squid-users] Throughput per client stats?
Hi, What is the best tool to use to figure out heavy users behind a Squid forward proxy? I'm looking for throughput/usage data per client IP (we log using the X-Forwarded-For header, so the tool would need to use this value to report on). We are currently using RHEL5/squid-2.6.STABLE21-6.el5. Thanks, Josh
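Absent a dedicated reporting tool, heavy users can be found by aggregating bytes per client straight from access.log. A sketch, assuming the default native log format (field positions are an assumption; if X-Forwarded-For is logged in a different column, adjust `client_field`):

```python
# Sum response bytes per client column of a Squid access.log.
# Default native format: time elapsed client code/status bytes method URL ...
from collections import defaultdict

def bytes_per_client(lines, client_field=2, bytes_field=4):
    """Aggregate response bytes by the client column (0-based fields)."""
    totals = defaultdict(int)
    for line in lines:
        fields = line.split()
        if len(fields) <= max(client_field, bytes_field):
            continue  # skip short/garbled lines
        try:
            totals[fields[client_field]] += int(fields[bytes_field])
        except ValueError:
            continue  # bytes column was not numeric
    return dict(totals)

sample = [
    "1336139343.000 348 10.0.0.5 TCP_MISS/200 5120 GET http://example.com/ - DIRECT/- text/html",
    "1336139344.000 120 10.0.0.5 TCP_HIT/200 1024 GET http://example.com/a - NONE/- text/html",
    "1336139345.000 90 10.0.0.9 TCP_MISS/200 2048 GET http://example.com/b - DIRECT/- text/html",
]
print(bytes_per_client(sample))  # {'10.0.0.5': 6144, '10.0.0.9': 2048}
```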
RE: [squid-users] Squid, SNMP/Zenoss and mib.txt?
For what it's worth, I have written a ZenPack for Squid. Contact me off list if you want a copy. Thanks, Josh -Original Message- From: Peter Gaughran [mailto:peter.gaugh...@nuim.ie] Sent: Wednesday, March 07, 2012 9:55 AM To: Amos Jeffries Cc: squid-users@squid-cache.org Subject: Re: [squid-users] Squid, SNMP/Zenoss and mib.txt? Perfect - that's great, thank you! SNMP should be enabled unless they disabled it. The squid -v output can confirm if there is anything customised in your squid. At the very least if Squid allowed you to configure a snmp_port then its available. The absence of mib.txt is a bit annoying, but it just means you will have to work with raw numbers. Everything should still work normally without it. You can download a copy of the 3.1 MIB.txt at http://bazaar.launchpad.net/~squid/squid/3.1/view/head:/src/mib.txt The Squid OID numbers are all listed in a human readable format at http://wiki.squid-cache.org/Features/Snmp#Squid_OIDs for reference along with the details of how to work snmpwalk with Squid (there are some tricky gotchas walking the IP address indexed tables). Amos
RE: [squid-users] RPS
.. and you won't find that number, because that number does not exist. It depends on a number of factors including, but not limited to: the type of traffic traversing the proxy, caching/no caching, authentication methods, architecture, Squid version, amount of traffic, traffic patterns, ACLs, etc, etc. The list goes on and on. I believe the wiki contains some examples of larger configurations and what they have been capable of, as well as this mailing list archive, which can be searched at marc.info. Josh -Original Message- From: Student University [mailto:studen...@gmail.com] Sent: Saturday, March 17, 2012 4:27 AM To: squid-users@squid-cache.org Subject: [squid-users] RPS Hi, I searched again and again but didn't find the exact maximum RPS a single Squid machine can achieve. Thanks, Liley
RE: [squid-users] Re: RPS
Good numbers. I believe that it would be very beneficial to the community if you wouldn't mind sharing the kernel tweaks and squid tweaks that you used to achieve these numbers. Thanks, Josh -Original Message- From: GarethC [mailto:gar...@garethcoffey.com] Sent: Tuesday, March 20, 2012 12:26 PM To: squid-users@squid-cache.org Subject: [squid-users] Re: RPS Hi there, As an example, I set up Squid 2.7 on an HP BL460c (4x Quad-core CPU, 24GB RAM) with Redhat 5 running bonded NICs over a 2x 2G port channel to a Cisco 6509. It took several days of testing to get the Kernel tuned to be able to handle a high rate of connections (things like tcp_max_syn_backlog, tcp_tw_recycle, tcp_rmem, tcp_fin_timeout etc). Squid was also tuned to maximise use of memory, as opposed to disk cache. The maximum sustained connections achieved was in the region of ~2,000 conns per second, and equated to ~980Mbps for a single server. The content that was being requested was purely static html and images. Hope that gives you some sort of view as to what is achievable. Gareth -- View this message in context: http://squid-web-proxy-cache.1019090.n4.nabble.com/RPS-tp4480226p4489420.html Sent from the Squid - Users mailing list archive at Nabble.com.
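As a starting point, the tunables Gareth names might look like this in /etc/sysctl.conf (values are illustrative guesses, not his actual settings; note that tcp_tw_recycle is known to break clients behind NAT and was later removed from Linux entirely):

```
# Illustrative connection-rate tuning for a busy proxy
net.ipv4.tcp_max_syn_backlog = 8192
net.ipv4.tcp_fin_timeout = 15
net.ipv4.tcp_rmem = 4096 87380 16777216
net.core.somaxconn = 8192
```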
[squid-users] Connection Reset by Peer (104)
Hi, Running 2.6.STABLE21-6 (RHEL5) here. I am unable to access http://www.nacuboannualmeeting.org/. The error that is thrown is: The following error was encountered: Read Error The system returned: (104) Connection reset by peer My access.log shows: 04/May/2012:08:49:03 -0500 348 172.24.75.138 TCP_MISS/502 1484 GET http://www.nacuboannualmeeting.org/ - DIRECT/64.211.220.113 text/html I have tried the two suggestions in the FAQ: echo 0 > /proc/sys/net/ipv4/tcp_ecn echo 0 > /proc/sys/net/ipv4/tcp_window_scaling Neither of these solved the problem. Does anyone have any other ideas on how I can solve this? Thanks, Josh
[squid-users] NTLM Authentication Issues
Hi, Running squid-2.6STABLE-6.el5 (RHEL5) here. Trying to configure NTLM authentication. I successfully configured krb/samba and have verified successful authentication using: $ /usr/bin/ntlm_auth --username=jbaird password: NT_STATUS_OK: Success (0x0) I can also enumerate groups and users successfully using wbinfo -u and wbinfo -g However, when I add the squid-2.5-basic helper to ntlm_auth, I receive ERR: $ /usr/bin/ntlm_auth --helper-protocol=squid-2.5-basic na+jbaird password ERR I believe this is causing my squid configuration to fail: snip # NTLM configuration auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp auth_param ntlm children 30 acl NTLMUsers proxy_auth REQUIRED http_access allow all NTLMUsers /snip Does anyone have any tips on how to troubleshoot? Should I be using a different helper-protocol for ntlm_auth? Thanks, Josh
[squid-users] RE: NTLM Authentication Issues
I got ntlm_auth to work successfully, as I was using the incorrect winbind separator. However, Squid still prompts me for credentials repeatedly; even correct domain credentials are rejected. A tcpdump between the Squid server and the domain controller only shows a single SMB request from the proxy to the DC. Does anyone have any ideas on how I can further troubleshoot this? Thanks. -Original Message- From: Baird, Josh [mailto:jba...@follett.com] Sent: Wednesday, July 18, 2012 10:01 AM To: squid-users@squid-cache.org Subject: [squid-users] NTLM Authentication Issues Hi, Running squid-2.6STABLE-6.el5 (RHEL5) here. Trying to configure NTLM authentication. I successfully configured krb/samba and have verified successful authentication using: $ /usr/bin/ntlm_auth --username=jbaird password: NT_STATUS_OK: Success (0x0) I can also enumerate groups and users successfully using wbinfo -u and wbinfo -g However, when I add the squid-2.5-basic helper to ntlm_auth, I receive ERR: $ /usr/bin/ntlm_auth --helper-protocol=squid-2.5-basic na+jbaird password ERR I believe this is causing my squid configuration to fail: snip # NTLM configuration auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp auth_param ntlm children 30 acl NTLMUsers proxy_auth REQUIRED http_access allow all NTLMUsers /snip Does anyone have any tips on how to troubleshoot? Should I be using a different helper-protocol for ntlm_auth? Thanks, Josh
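One frequent cause of this exact symptom (ntlm_auth works on the command line, browsers loop on the credentials prompt) is that the squid user cannot reach winbindd's privileged pipe. A hedged recipe, with the directory path as typically found on RHEL-style systems:

```
# Let the squid user read winbindd's privileged socket directory
chgrp squid /var/lib/samba/winbindd_privileged
chmod 750 /var/lib/samba/winbindd_privileged
# then restart winbind and squid
```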
[squid-users] Non-browser applications using NTLM+Squid?
Hi, I'm wondering what others are doing about non-browser applications (Anti-virus software that fetches updates, instant messengers over HTTP, etc) that sit behind a Squid proxy that requires NTLM authentication? These applications, in my experience, use Windows' proxy settings to proxy their outbound traffic, but can't speak NTLM, so the application is prevented from proxying any traffic. Would a Kerberos integrated Squid be a possible solution to this problem? Thanks, Josh
RE: [squid-users] Non-browser applications using NTLM+Squid?
Not sure why I didn't think of that. Thanks! Josh From: Eliezer Croitoru [elie...@ngtech.co.il] Sent: Thursday, July 19, 2012 6:12 PM To: squid-users@squid-cache.org Subject: Re: [squid-users] Non-browser applications using NTLM+Squid? On 7/19/2012 11:29 PM, Baird, Josh wrote: Hi, I'm wondering what others are doing about non-browser applications (Anti-virus software that fetches updates, instant messengers over HTTP, etc) that sit behind a Squid proxy that requires NTLM authentication? These applications, in my experience, use Windows' proxy settings to proxy their outbound traffic, but can't speak NTLM, so the application is prevented from proxying any traffic. Would a Kerberos integrated Squid be a possible solution to this problem? Thanks, Josh very simple.. just allow them all before the authentication acls such as in: acl updates dstdomain .windowsupdates.microsoft.com .antivirusupdates.org acl updates1 dst 192.168.0.1/32 http_access allow localnet updates http_access allow localnet updates1 http_access allow localnet ntlm_auth_helper http_access deny all Regards, Eliezer -- Eliezer Croitoru https://www1.ngtech.co.il IT consulting for Nonprofit organizations eliezer at ngtech.co.il
RE: [squid-users] Non-browser applications using NTLM+Squid?
How would I go about only forcing certain hosts to use NTLM auth, but allowing everyone else to use the proxy unauthenticated? I have an ACL containing the source IPs that I need to force to use NTLM: acl requirentlm proxy_auth REQUIRED acl requirentlmhosts src 1.1.1.1/255.255.255.255 http_access allow requirentlmhosts requirentlm This takes care of forcing requirentlmhosts to auth, but if I have another http_access rule that allows everyone else, what keeps requirentlmhosts from getting out without auth? Thanks, Josh -Original Message- From: Baird, Josh Sent: Thursday, July 19, 2012 9:39 PM To: Eliezer Croitoru; squid-users@squid-cache.org Subject: RE: [squid-users] Non-browser applications using NTLM+Squid? Not sure why I didn't think of that. Thanks! Josh From: Eliezer Croitoru [elie...@ngtech.co.il] Sent: Thursday, July 19, 2012 6:12 PM To: squid-users@squid-cache.org Subject: Re: [squid-users] Non-browser applications using NTLM+Squid? On 7/19/2012 11:29 PM, Baird, Josh wrote: Hi, I'm wondering what others are doing about non-browser applications (Anti-virus software that fetches updates, instant messengers over HTTP, etc) that sit behind a Squid proxy that requires NTLM authentication? These applications, in my experience, use Windows' proxy settings to proxy their outbound traffic, but can't speak NTLM, so the application is prevented from proxying any traffic. Would a Kerberos integrated Squid be a possible solution to this problem? Thanks, Josh very simple.. just allow them all before the authentication acls such as in: acl updates dstdomain .windowsupdates.microsoft.com .antivirusupdates.org acl updates1 dst 192.168.0.1/32 http_access allow localnet updates http_access allow localnet updates1 http_access allow localnet ntlm_auth_helper http_access deny all Regards, Eliezer -- Eliezer Croitoru https://www1.ngtech.co.il IT consulting for Nonprofit organizations eliezer at ngtech.co.il
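One way to get the ordering right is to deny the NTLM-required hosts explicitly after the authenticated allow, so they can never fall through to the open rule. A sketch using the ACL names from the post (the localnet ACL is assumed):

```
acl requirentlm proxy_auth REQUIRED
acl requirentlmhosts src 1.1.1.1/32

http_access allow requirentlmhosts requirentlm
http_access deny requirentlmhosts        # no unauthenticated fall-through
http_access allow localnet               # everyone else, no auth required
http_access deny all
```

Because http_access rules are evaluated top to bottom and the first match wins, a host in requirentlmhosts either authenticates at the first rule or is stopped at the second; it never reaches the open allow.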
[squid-users] Include directive in 2.6?
Hi, Can someone confirm if the include directive is supported in 2.6? I'm running squid-2.6.STABLE21-6.el5, and have include /etc/squid/conf.d/*.conf in my squid.conf. No errors are reported, but the configuration files do not seem to actually be included. Thanks, Josh
RE: [squid-users] Which RAID for this ?
I would probably do one RAID1 for the OS, and then one RAID10 for everything else... but, it really depends on how much load this particular box will be under. Josh -Original Message- From: Will I am [mailto:souleesty...@gmail.com] Sent: Monday, September 17, 2012 11:41 AM To: squid-users@squid-cache.org Subject: [squid-users] Which RAID for this ? Hi there, I'm a novice in the squid world and I would like to know what is the best-performing RAID layout for this: HP ProLiant DL380, 2 x SCSI disk 36.4 GB 15K, 4 x SCSI disk 72.8 GB 15K. I thought RAID 5, but it seems it's not the best solution for this. What do you think? Thanks, Will
[squid-users] Problem accessing a site
Hi, Our Squid 2.7 proxies are failing on a specific request. Response headers: Response: HTTP/1.0 400 Bad Request; Server: squid; Date: Wed, 28 Nov 2012 13:07:29 GMT; Content-Type: text/html; Content-Length: 2144; Expires: Wed, 28 Nov 2012 13:07:29 GMT; X-Squid-Error: ERR_INVALID_URL 0; X-Cache: MISS from proxy.corp.com; X-Cache-Lookup: NONE from proxy.corp.com:80; Via: 1.0 proxy.corp.com:80 (squid); Proxy-Connection: close. The request header is: Request: GET
RE: [squid-users] Problem accessing a site
Top posting here as well (sorry). These proxies are actually squid 2.6 (RHEL5), sorry about that. So, because it is only 4.5k or so, you don't think the header size is an issue? I'm not sure how to debug this problem any further. Any suggestions? Thanks. -Original Message- From: Nishant Sharma [mailto:codemarau...@gmail.com] Sent: Thursday, November 29, 2012 10:32 PM To: squid-users@squid-cache.org Subject: Re: [squid-users] Problem accessing a site Sorry for top posting, my mobile device is crazy. I have seen SugarCRM also having these weird long URLs. But I also faintly remember a compile-time option in a header file to increase this limit. -N On 11/30/12, Amos Jeffries squ...@treenet.co.nz wrote: On 30/11/2012 6:06 a.m., jeffrey j donovan wrote: On Nov 29, 2012, at 11:14 AM, Baird, Josh jba...@follett.com wrote: Hi, Our Squid 2.7 proxies are failing on a specific request: snip The request header is: KeyValue Request GET http://api.copiamobile.com/marketing-api/msQuiz/markFeaturedQuizzes? 
callback=jQuery171017257169384743326_1354106706654quizzes=%5B%7B%22 quizId%22%3A1%2C%22featured%22%3Afalse%7D%2C%7B%22quizId%22%3A2%2C%2 2featured%22%3Afalse%7D%2C%7B%22quizId%22%3A3%2C%22featured%22%3Afal se%7D%2C%7B%22quizId%22%3A4%2C%22featured%22%3Afalse%7D%2C%7B%22quiz Id%22%3A5%2C%22featured%22%3Afalse%7D%2C%7B%22quizId%22%3A6%2C%22fea tured%22%3Afalse%7D%2C%7B%22quizId%22%3A7%2C%22featured%22%3Afalse%7 D%2C%7B%22quizId%22%3A8%2C%22featured%22%3Afalse%7D%2C%7B%22quizId%2 2%3A9%2C%22featured%22%3Afalse%7D%2C%7B%22quizId%22%3A10%2C%22featur ed%22%3Afalse%7D%2C%7B%22quizId%22%3A11%2C%22featured%22%3Afalse%7D% 2C%7B%22quizId%22%3A12%2C%22featured%22%3Afalse%7D%2C%7B%22quizId%22 %3A13%2C%22featured%22%3Afalse%7D%2C%7B%22quizId%22%3A14%2C%22featur ed%22%3Afalse%7D%2C%7B%22quizId%22%3A15%2C%22featured%22%3Afalse%7D% 2C%7B%22quizId%22%3A16%2C%22featured%22%3Afalse%7D%2C%7B%22quizId%22 %3A17%2C%22featured%22%3Afalse%7D%2C%7B%22quizId%22%3A18%2C%22featur ed%22%3Afalse%7D%2C%7B%22quizId%22%3A19%2C%22featured%22%3Afalse%7D% 2C%7B%22quizId%22%3A20%2C%22featured%22%3Afalse%7D%2C%7B%22quizId%22 %3A21%2C%22featured%22%3Afalse%7D%2C%7B%22quizId%22%3A22%2C%22featur ed%22%3Afalse%7D%2C%7B%22quizId%22%3A23%2C%22featured%22%3Afalse%7D% 2C%7B%22quizId%22%3A24%2C%22featured%22%3Afalse%7D%2C%7B%22quizId%22 %3A25%2C%22featured%22%3Afalse%7D%2C%7B%22quizId%22%3A26%2C%22featur ed%22%3Afalse%7D%2C%7B%22quizId%22%3A27%2C%22featured%22%3Afalse%7D% 2C%7B%22quizId%22%3A28%2C%22featured%22%3Afalse%7D%2C%7B%22quizId%22 %3A29%2C%22featured%22%3Afalse%7D%2C%7B%22quizId%22%3A30%2C%22featur ed%22%3Afalse%7D%2C%7B%22quizId%22%3A31%2C%22featured%22%3Afalse%7D% 2C%7B%22quizId%22%3A32%2C%22featured%22%3Afalse%7D%2C%7B%22quizId%22 %3A33%2C%22featured%22%3Afalse%7D%2C%7B%22quizId%22%3A34%2C%22featur ed%22%3Afalse%7D%2C%7B%22quizId%22%3A35%2C%22featured%22%3Afalse%7D% 2C%7B%22quizId%22%3A36%2C%22featured%22%3Afalse%7D%2C%7B%22quizId%22 %3A37%2C%22featured%22%3Afalse%7D%2C%7B%22quizId%22%3A38%2C%22featur 
ed%22%3Afalse%7D%2C%7B%22quizId%22%3A39%2C%22featured%22%3Afalse%7D% 2C%7B%22quizId%22%3A45%2C%22featured%22%3Afalse%7D%2C%7B%22quizId%22 %3A46%2C%22featured%22%3Afalse%7D%2C%7B%22quizId%22%3A47%2C%22featur ed%22%3Atrue%7D%2C%7B%22quizId%22%3A48%2C%22featured%22%3Afalse%7D%2 C%7B%22quizId%22%3A49%2C%22featured%22%3Afalse%7D%2C%7B%22quizId%22% 3A50%2C%22featured%22%3Afalse%7D%2C%7B%22quizId%22%3A51%2C%22feature d%22%3Afalse%7D%2C%7B%22quizId%22%3A52%2C%22featured%22%3Afalse%7D%2 C%7B%22quizId%22%3A53%2C%22featured%22%3Afalse%7D%2C%7B%22quizId%22% 3A54%2C%22featured%22%3Afalse%7D%2C%7B%22quizId%22%3A55%2C%22feature d%22%3Afalse%7D%2C%7B%22quizId%22%3A56%2C%22featured%22%3Afalse%7D%2 C%7B%22quizId%22%3A58%2C%22featured%22%3Afalse%7D%2C%7B%22quizId%22% 3A59%2C%22featured%22%3Afalse%7D%2C%7B%22quizId%22%3A60%2C%22feature d%22%3Afalse%7D%2C%7B%22quizId%22%3A61%2C%22featured%22%3Afalse%7D%2 C%7B%22quizId%22%3A62%2C%22featured%22%3Afalse%7D%2C%7B%22quizId%22% 3A63%2C%22featured%22%3Afalse%7D%2C%7B%22quizId%22%3A64%2C%22feature d%22%3Afalse%7D%2C%7B%22quizId%22%3A65%2C%22featured%22%3Afalse%7D%2 C%7B%22quizId%22%3A66%2C%22featured%22%3Afalse%7D%2C%7B%22quizId%22% 3A67%2C%22featured%22%3Afalse%7D%2C%7B%22quizId%22%3A68%2C%22feature d%22%3Afalse%7D%2C%7B%22quizId%22%3A69%2C%22featured%22%3Afalse%7D%2 C%7B%22quizId%22%3A71%2C%22featured%22%3Afalse%7D%2C%7B%22quizId%22% 3A73%2C%22featured%22%3Afalse%7D%2C%7B%22quizId%22%3A74%2C%22feature d%22%3Afalse%7D%2C%7B%22quizId%22%3A75%2C%22featured%22%3Afalse%7D%2 C%7B%22quizId%22%3A77%2C%22featured%22%3Atrue%7D%2C%7B%22quizId%22%3 A81%2C%22featured%22%3Afalse%7D%2C%7B%22quizId%22%3A85%2C%22featured %22%3Afalse%7D%2C%7B%22quizId%22%3A87%2C%22featured%22%3Afalse%7D%2C %7B%22quizId%22%3A88%2C%22featured%22%3Afalse%7D%2C%7B%22quizId%22%3 A90%2C%22featured%22%3Afalse%7D%2C%7B%22quizId%22%3A91%2C%22featured %22%3Afalse%7D%2C
RE: [squid-users] Squid3 reverse proxy ntlm authentication
Try again. -Original Message- From: m...@uninet.com.br [mailto:m...@uninet.com.br] Sent: Wednesday, October 03, 2012 4:54 PM To: squid-users@squid-cache.org Subject: [squid-users] Squid3 reverse proxy ntlm authentication Importance: High I need to configure Squid3 to authenticate via NTLM as a reverse proxy. I have installed and configured squid, but the browser requires the password again and again. Anyone have a clue to help me? Here is my configuration: ./configure --prefix=/usr/local/squid --exec_prefix=/usr/local/squid --enable-ssl --enable-auth-ntlm=ntlm,basic --enable-basic-auth-helpers=winbind --enable-ntlm-auth-helpers=winbind --enable-external-acl-helpers=winbind_group,wbinfo_group --enable-delay-pools --enable-removal-policies --enable-underscores --enable-cache-digests --disable-ident-lookups --enable-truncate --with-winbind-auth-challenge --- squid.conf ### pure ntlm authentication auth_param ntlm program /usr/lib/squid/ntlm_auth auth_param ntlm children 10 auth_param ntlm keep_alive off ### provide basic authentication via ldap for clients not authenticated via kerberos/ntlm #auth_param basic program /usr/lib/squid3/squid_ldap_auth -R -b dc=example,dc=local -D squid@example.local -W /etc/squid3/ldappass.txt -f sAMAccountName=%s -h dc1.example.local #auth_param basic children 10 #auth_param basic realm Internet Proxy #auth_param basic credentialsttl 1 minute acl warp dstdomain warpx.uninet.com.br acl xymon dstdomain monitorx.uninet.com.br acl uninet dstdomain www.uninet.com.br acl admin src 200.220.1.0/24 acl admin src 200.220.102.0/24 acl unisys src 129.222.0.0/16 acl unisys src 129.224.0.0/16 acl unisysvpn src 172.0.0.0/8 acl SSL_ports port 443 acl Safe_ports port 80 # http acl Safe_ports port 21 # ftp acl Safe_ports port 443 # https acl Safe_ports port 70 # gopher acl Safe_ports port 210 # wais acl Safe_ports port 1025-65535 # unregistered ports acl Safe_ports port 280 # http-mgmt acl Safe_ports port 488 # gss-http acl Safe_ports port 591 # filemaker acl Safe_ports port 777 # multiling http acl CONNECT method CONNECT http_port 80 accel https_port 443 accel cert=/usr/local/squid/CA/cacert.pem key=/usr/local/squid/CA/cakey.pem cache_peer 200.220.0.103 parent 80 0 no-query no-digest connection-auth=on originserver proxy-only no-netdb-exchange login=PASS name=warpsite cache_peer_access warpsite allow warp cache_peer 200.220.0.139 parent 443 0 no-query no-digest originserver login=PASS ssl sslflags=DONT_VERIFY_PEER name=xymonsite cache_peer_access xymonsite allow xymon cache_peer 200.220.0.120 parent 80 0 no-query no-digest originserver name=uninetsite cache_peer_access uninetsite allow uninet #http_access allow all http_access allow manager localhost http_access deny manager http_access deny !Safe_ports http_access deny CONNECT !SSL_ports http_access deny all coredump_dir /var/spool/squid3 cache deny all # Add any of your own refresh_pattern entries above these. refresh_pattern ^ftp: 1440 20% 10080 refresh_pattern ^gopher: 1440 0% 1440 refresh_pattern -i (/cgi-bin/|\?) 0 0% 0 refresh_pattern . 0 20% 4320 cache_effective_user proxy - thanks Emilio
[squid-users] Going into hit-only-mode for 5 minutes
Hi, We recently started having problems where our Squid 2.6 (squid-2.6.STABLE21-6.el5) proxy servers would stop serving requests. In my cache.log, I see many of these: 2015/04/14 01:13:45| Failure Ratio at 26.15 2015/04/14 01:13:45| Going into hit-only-mode for 5 minutes... 2015/04/14 01:18:46| Failure Ratio at 3.55 2015/04/14 01:18:46| Going into hit-only-mode for 5 minutes... 2015/04/14 01:23:46| Failure Ratio at 1.02 2015/04/14 01:23:46| Going into hit-only-mode for 5 minutes... ... 2015/04/14 06:50:58| idnsSendQuery: Can't send query, no DNS socket! 2015/04/14 06:50:58| idnsSendQuery: Can't send query, no DNS socket! 2015/04/14 06:50:58| idnsSendQuery: Can't send query, no DNS socket! 2015/04/14 06:50:58| idnsSendQuery: Can't send query, no DNS socket! I suspect this is the problem - the proxy is running out of DNS sockets. I have already determined that there are no problems with the DNS servers that these proxies are using (in their /etc/resolv.conf). Could this be caused by a bad user chewing up DNS sockets/children with invalid URL requests? The 'going into hit-only-mode' errors appear to be ICP-related? In this case, I believe we have ICP completely disabled: # icp_access allow allowed_src_hosts # icp_access deny all_src Could anyone offer any suggestions or advice to help figure out what is causing these problems? Thanks, Josh ___ squid-users mailing list squid-users@lists.squid-cache.org http://lists.squid-cache.org/listinfo/squid-users
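For context, the "Failure Ratio" Squid logs is the ratio of failed to successful server-side fetches; Squid enters hit-only mode when the (internally smoothed) ratio exceeds 1.0. A toy illustration of the idea, not Squid's exact accounting:

```python
# Illustrative failure ratio over a window of server response statuses.
# Treats 0 (aborted/no response) and 5xx as failures, 2xx/3xx as successes.
def failure_ratio(statuses):
    fails = sum(1 for s in statuses if s == 0 or s >= 500)
    oks = sum(1 for s in statuses if 200 <= s < 400)
    return fails / oks if oks else float("inf")

# A window with 2 failures and 3 successes is still below the 1.0 threshold:
print(round(failure_ratio([200, 200, 502, 200, 0]), 2))  # 0.67
```

A ratio of 26.15, as in the log above, means failed fetches vastly outnumbered successful ones, consistent with DNS lookups failing outright.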
Re: [squid-users] Going into hit-only-mode for 5 minutes
Could anyone offer any suggestions or advice to help figure out what is causing these problems? 1) upgrade. 2) seriously, upgrade. 3) try adding 'via on' to your squid.conf. If you start to get warnings about forwarding loops it's working. Otherwise you've got big problems - see (2). Could the 'Going into hit-only-mode for 5 minutes' messages be attributed to spotty/slow DNS resolution, though? When a proxy is in 'hit-only-mode,' is it able to respond to normal (non-ICP) clients? Josh
[squid-users] Unable to increase max_filedescr
Hi, I'm running 2.6STABLE (yes, I know it's ancient) and I'm unable to increase max_filedescr beyond 16384. # grep max_file /etc/squid/squid.conf max_filedesc 32768 # ulimit -n 32678 # squidclient -p 80 mgr:info | grep 'Maximum number' Maximum number of file descriptors: 16384 I have restarted squid, re-logged back in, etc. I'm able to modify it to be anything less than 16384. Any idea what is preventing me from scaling beyond 16384? This is RHEL5. Thanks, Josh
Re: [squid-users] Unable to increase max_filedescr
Replying to myself, but it appears that this package was compiled using '--max-fd=16384'. Is there any way, other than re-compiling and building new packages, to increase beyond this? Josh -Original Message- From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf Of Baird, Josh Sent: Tuesday, June 21, 2016 8:46 PM To: squid-users@lists.squid-cache.org Subject: [squid-users] Unable to increase max_filedescr Hi, I'm running 2.6STABLE (yes, I know it's ancient) and I'm unable to increase max_filedescr beyond 16384. # grep max_file /etc/squid/squid.conf max_filedesc 32768 # ulimit -n 32678 # squidclient -p 80 mgr:info | grep 'Maximum number' Maximum number of file descriptors: 16384 I have restarted squid, re-logged back in, etc. I'm able to modify it to be anything less than 16384. Any idea what is preventing me from scaling beyond 16384? This is RHEL5. Thanks, Josh
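Assuming the package really was built with a 16384 descriptor ceiling, a Squid 2.6 binary cannot be pushed past its compile-time maximum at runtime; the fix is a rebuild with a larger limit. A sketch of the recipe (the configure flag name is an assumption based on the 2.6-era build system; other options elided):

```
ulimit -HSn 65536                                    # raise the shell limit first
./configure --with-maxfd=65536 [ ... other configure options ... ]
make && make install
```

The shell's hard limit must be raised before running configure, since the build detects the available descriptor count at configure time.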