On 11/20/2013 12:04 AM, Mohd Akhbar wrote:
I compiled squid on CentOS 6.2 64-bit with
./configure --prefix=/usr --includedir=/usr/include
--datadir=/usr/share --bindir=/usr/sbin --libexecdir=/usr/lib/squid
--localstatedir=/var --sysconfdir=/etc/squid
My compiled size for squid runtime
They generate huge log files. We turn them off. Here is a patch for 3.3.10 if
you need to suppress them.
Some of the cache log options should have config entries as they generate
clutter and hide more important issues. We remove the following as well:
* Username ACLs are not reliable here
*
Date: Thu, 27 Sep 2012 21:08:12 +0200
From: e...@g.jct.ac.il
To: t...@raynersw.com
CC: squid-users@squid-cache.org
Subject: Re: [squid-users] clarification of delay_initial_bucket_level
2012/9/27 t...@raynersw.com t...@raynersw.com:
Hmm, I just
PS: I do a reconfigure once an hour, but my traffic is controlled.
Jenny, as far as I can tell from your mail, you are running a restart
(service squid3 restart or /etc/init.d/squid3 restart) and not a
reload. Reloads in my experience are very fast; they fix almost
everything and are close
I don't think Eliezer meant reloading per se as much as my question
which was reloading every 5 minutes.
I reload very frequently as well, not on a timer but triggered by events, and
it can happen as often as every minute. I understand that it's not optimal
but I don't see it causing
Why don't you send some donations to the man: aypp2...@treenet.co.nz
He has single-handedly attended to and fixed everyone's problems here for years
and not once asked for anything in return.
Jenny
Date: Wed, 29 Aug 2012 16:28:58 -0500
To: squid-users@squid-cache.org
From: knap...@realtime.net
nonhierarchical_direct off
Jenny
Date: Sat, 18 Aug 2012 18:31:14 +0100
From: a.f...@ntlworld.com
To: squid-users@squid-cache.org
Subject: [squid-users] ACL processing in Squid 3.2
I may be missing something here, but it looks like ACL processing is
broken for at least some HTTPS requests
Apologies for top posting, from Squid FAQs:
Certain types of requests cannot be cached or are served faster going direct,
and Squid is optimized to send them over direct connections by default. The
nonhierarchical_direct off directive tells Squid to send these requests via the
parent anyway.
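In squid.conf this looks like the following sketch (the peer hostname and ports here are placeholders, not taken from the original thread):

```
# Hypothetical parent proxy; replace host/ports with your own
cache_peer parent.example.com parent 3128 0 no-query default
# Send even "non-hierarchical" requests through the parent
# instead of going direct
nonhierarchical_direct off
# Optionally forbid going direct entirely
never_direct allow all
```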
I
In your /etc/rc.d/init.d/squid file, or whatever script is starting squid, put:
ulimit -HSn 65536
Jenny
From: sunyuc...@gmail.com
Date: Thu, 16 Aug 2012 20:03:05 -0700
To: squid-users@squid-cache.org
Subject: [squid-users] Re: 3.2.1 file descriptor is locked to 1024?
I found that if I
-users@squid-cache.org
No, I just launch it with ./squid -f squid.conf, no script.
I think this is a problem with the default config; it might be
initialized wrong there.
On Fri, Aug 17, 2012 at 1:09 AM, Jenny Lee bodycar...@live.com wrote:
In your /etc/rc.d/init.d/squid file
, setting it again won't
solve it. It's squid that doesn't want to use more than 1024 unless told
so explicitly in the config.
On Fri, Aug 17, 2012 at 2:04 AM, Jenny Lee bodycar...@live.com wrote:
So put it before that, then:
ulimit -HSn 65536; ./squid -f squid.conf
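The reason the order matters: ulimit is a shell builtin that applies to the current shell and the processes it spawns, so it has to run in the same shell that launches squid. A minimal sketch (the squid path is whatever your install uses):

```shell
#!/bin/sh
# ulimit affects this shell and its children, so raise it first,
# then start squid from the same shell.
ulimit -HSn 65536        # raise hard and soft open-file limits together
ulimit -Sn               # print the soft limit now in effect
# ./squid -f squid.conf  # squid inherits the raised limit
```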
Jenny
Date: Fri, 3 Aug 2012 14:16:29 +0200
From: hugo.dep...@gmail.com
To: squid-users@squid-cache.org
Subject: [squid-users] Squid memory usage
Dear community,
I am running squid3 on Linux Debian squeeze.(3.1.6).
I suddenly encountered high memory usage on my virtual machine; I don't
Date: Sat, 24 Mar 2012 12:07:34 -0700
From: nwv...@nottheoilrig.com
To: squid-users@squid-cache.org
Subject: [squid-users] Popular log analysis tools? SARG?
Which are the most popular log analysis tools? SARG?
The Squid website features a comprehensive list of log analysis tools
[1].
Dear all,
How can we achieve 5000 RPS through Squid?
Thanks in advance
Liley
In your dreams.
Jenny
To: squid-users@squid-cache.org
Date: Thu, 19 Jan 2012 10:33:31 +1300
From: squ...@treenet.co.nz
Subject: Re: [squid-users] Are comments in external files allowed
On 18.01.2012 14:45, James Robertson wrote:
Excuse the basic question but is adding comments to external files
allowed in
Date: Tue, 17 Jan 2012 10:02:14 -0800
From: jth...@gmail.com
To: squid-users@squid-cache.org
Subject: [squid-users] Squid Hardware to Handle 150Mbps Peaks
We currently have a commercial proxy solution in place but since we increased
our bandwidth to 150meg connection, the proxy is slowing
Date: Mon, 9 Jan 2012 15:53:22 +1100
From: leigh.wedd...@bigpond.com
To: squid-users@squid-cache.org
Subject: [squid-users] Squid only forwards GET requests to cache_peer
Hi,
I have a problem with squid only forwarding HTTP GET requests to cache_peers.
My setup is that the corporate
Date: Sat, 24 Dec 2011 13:16:45 +1300
From: squ...@treenet.co.nz
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Squid 3.2.0.14 beta is available
On 24/12/2011 12:15 p.m., Jenny Lee wrote:
Date: Sat, 24 Dec 2011 10:38:58 +1300
From: squ...@treenet.co.nz
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Squid 3.2.0.14 beta is available
On 24/12/2011 9:25 a.m., Saleh Madi wrote:
Hi Amos,
After I set the memory_cache_shared off in the config file of the squid
,
From: hen...@henriknordstrom.net
To: tdo...@associatedbrands.com
CC: squid-users@squid-cache.org
Date: Wed, 21 Dec 2011 19:36:51 +0100
Subject: RE: [squid-users] After reloading squid3, takes about 2 minutes to
serve pages?
tis 2011-12-20 klockan 10:48 -0500 skrev Terry Dobbs:
I am
skrev Jenny Lee: It takes me a minute and a
half to reach full load when a squid doing 100 req/sec is sent a reconfigure.
Squid barely serves anything during this time (but it is functional). All my
timeouts are low. It was not like this on 3.2.0.1. How big is your on-disk
cache
I don't understand how you are managing to have anything to do with Tor to start
with.
Tor is speaking SOCKS5. You need Polipo to speak HTTP on the client side and
SOCKS on the server side.
I have actively tried to connect to 2 of our SOCKS5 machines (and Tor) via my
Squid and I could not
is which Tor traffic should be blocked: outgoing client
traffic to the Tor network, or incoming HTTP requests from Tor exit nodes?
Andreas
-Ursprüngliche Nachricht-
Von: Jenny Lee [mailto:bodycar...@live.com]
Gesendet: Sonntag, 4. Dezember 2011 00:09
An: charlie@gmail.com
K. first problem:
# host download.windowsupdate.com
...
download.windowsupdate.com.c.footprint.net has address 204.160.124.126
download.windowsupdate.com.c.footprint.net has address 8.27.83.126
download.windowsupdate.com.c.footprint.net has address 8.254.3.254
Client is connecting to
Date: Tue, 29 Nov 2011 00:59:29 +1300
From: squ...@treenet.co.nz
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Commercial Squid tweak speeds things up
significantly!
On 26/11/2011 8:02 p.m., - Mikael - wrote:
Could you name this product and point at some documentation it
I am running CentOS v5.1 with Squid-2.6 STABLE22 and Tproxy
(cttproxy-2.6.18-2.0.6). My kernel is kernel-2.6.18-92. This is the most
reliable setup I have ever made running Squid. My problem is that I am having
serious connection troubles when running squid over 155000 conntrack
From: listas.n...@cnett.com.br
To: bodycar...@live.com; squid-users@squid-cache.org
Date: Thu, 17 Nov 2011 15:55:20 -0300
Subject: RES: [squid-users] Squid box dropping connections
Hello Jenny,
Thanks for your answer. Sorry I haven't written, but my
Hi,
We're having issues with log file rollover in squid: when squid is under
heavy load and the log files are very big, triggering a log file rollover
(squid -k rotate) makes squid unresponsive, and it has to be killed manually
with kill -9.
You would be better off moving the log
Date: Wed, 26 Oct 2011 17:28:21 -0700
From: dnw...@gmail.com
To: squid-users@squid-cache.org
Subject: [squid-users] Is there any way to configure Squid to use local
/etc/hosts in name resolution?
Hi there,
I'm using Squid 3.1 as part of a proxy chain. I'm trying to make
Squid use
Date: Wed, 26 Oct 2011 18:30:37 -0700
From: dnw...@gmail.com
To: bodycar...@live.com
CC: squid-users@squid-cache.org
Subject: Re: [squid-users] Is there any way to configure Squid to use local
/etc/hosts in name resolution?
Hi Jenny,
Thanks very
That is because the file is not there as squid says.
Change 'ad_block.txt' to 'ad.block.txt' in your script and all will be fine.
Jenny
From: zongosa...@gmail.com
To: squid-users@squid-cache.org
Date: Tue, 25 Oct 2011 21:11:50 +0100
Subject: RE: [squid-users] empty acl
Amos,
Thanks
Date: Thu, 13 Oct 2011 10:59:09 +0200
From: leonardodiserpierodavi...@gmail.com
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Recurrent crashes and warnings: Your cache is
running out of filedescriptors
On Wed, Oct 12, 2011 at 3:09 AM, Amos Jeffries squ...@treenet.co.nz
Perhaps you are running out of inodes?
df -i should give you what you are looking for.
Well done. df indeed reports that I am out of inodes (100% used).
I've seen that a Sarg daily report contains about 170'000 files. I am
starting to tar-gzip them.
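Since each small report file consumes an inode, rolling a day's reports into a single archive frees all but one of them. A sketch with stand-in paths (the real Sarg report directory will differ):

```shell
#!/bin/sh
# Check inode usage first: IUse% at 100% means the filesystem is out
# of inodes even if df -h still shows free space.
df -i .

# Stand-in for a Sarg daily report directory full of small files
mkdir -p reports && touch reports/day1.html reports/day2.html
tar -czf reports.tar.gz reports   # one archive = one inode
rm -r reports                     # frees the inodes the small files held
```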
Thank you very much Jenny.
Date: Sun, 9 Oct 2011 20:45:07 -0700
From: maill...@jg555.com
To: squid-users@squid-cache.org
Subject: [squid-users] ACL's by Specific Date and Time
I use my squid server at home to keep my eyes on my kids'
internet use. I was wondering if it is possible to allow or deny access by a
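Squid's time ACL type can do this; a minimal squid.conf sketch (the ACL names and subnet are made up for illustration):

```
# Day letters: S=Sun M=Mon T=Tue W=Wed H=Thu F=Fri A=Sat
acl kids src 192.168.1.0/24
acl after_school time MTWHF 16:00-20:00
http_access allow kids after_school
http_access deny kids
```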
Date: Sat, 8 Oct 2011 16:15:10 -0400
From: wil...@optimumwireless.com
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Facebook page very slow to respond
I disabled squid and I'm doing simple FORWARDING, and things work. This
tells me that I'm having a configuration issue with
Date: Thu, 29 Sep 2011 11:24:55 -0400
From: charlie@gmail.com
To: squid-users@squid-cache.org
Subject: [squid-users] block TOR
Is there any way to block Tor with my Squid?
How do you get it working with tor in the first place?
I really tried for one of our users. Even used Amos's
Date: Tue, 20 Sep 2011 21:51:23 +0300
From: nmi...@noa.gr
To: bodycar...@live.com
CC: squid-users@squid-cache.org
Subject: Re: [squid-users] Secure user authentication on a web proxy
On 20/9/2011 8:58 μμ, Jenny Lee wrote:
I don't know if stunnel
Please also note that I also tried using Squid + Stunnel to achieve
secure user authentication, according to these directions:
http://www.jeffyestrumskas.com/index.php/how-to-setup-a-secure-web-proxy-using-ssl-encryption-squid-caching-proxy-and-pam-authentication/
(except that I used
Thank you for your hard work. Most of the quirks seem to be gone.
Lots of: WARNING: always_direct resulted in 3. Username ACLs are not reliable
here.
Why don't we have the IP address logged in the cache log? It is difficult to find
anything when you get a GB of debug log by the time you run a
acl random was the issue. Adding an explicit always_direct fixed it.
Jenny
From: bodycar...@live.com
To: squ...@treenet.co.nz; squid-annou...@squid-cache.org;
squid-users@squid-cache.org
Subject: RE: [squid-users] Squid 3.2.0.12 beta is available
Date: Sun, 18 Sep 2011 12:29:57 +1200
From: squ...@treenet.co.nz
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Squid 3.2.0.12 beta is available
On 18/09/11 03:28, Jenny Lee wrote:
acl random was the issue. Adding an explicit
Date: Fri, 9 Sep 2011 12:50:24 +1200
From: squ...@treenet.co.nz
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Authentication Prompts
On 09/09/11 06:28, Matt Cochran wrote:
I've been trying to model two different kinds of users in ACLs, where the
kids are authenticated by
- Correct parsing of large Gopher indexes
This gopher/WAIS... does anyone actually use it?
Yes, maybe in 1994 or during the days of Wildcat BBS.
I think developers should consider removing this code.
Jenny
My honest opinion is that this is a totally unnecessary change. And a brutal
one too.
What difference does it make if it is 8 chars or 888 chars? It is going
plaintext over the wire.
For people with established systems, these functions are scattered everywhere
-- in CGIs, PHPs, password
---
Date: Sun, 28 Aug 2011 23:26:25 +1200
From: squ...@treenet.co.nz
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Squid 3.0.STABLE26 is available
On 28/08/11 21:19, Jenny Lee wrote:
- Correct parsing of large Gopher indexes
Date: Thu, 4 Aug 2011 10:45:57 +0200
From: ju...@klunky.co.uk
To: squid-users@squid-cache.org
Subject: [squid-users] Debian Squeeze/Squid/ --enable-http-violations /
header_replace User-Agent no effect
Hi,
I have recompiled squid3 on Debian Squeeze because the Debian repo' deb
omits
On Wed, 20 Jul 2011 09:13:34 +1200, Gregory Machin wrote:
Hi.
Been a long time since I last looked at a squid proxy. After adding a
proxy to the network, browsing seems to have slowed considerably. I
have built a squid proxy; this is configured into the network via
our Sonicwall
Hello,
I have a Squid version 3.1.8 running on CentOS 5.x.
Since it does URL and content filtering, in conjunction with
Dansguardian in front, for some hundreds of users, the load average of that
machine is sometimes very high (5.0 or even 8.0...).
The biggest process is squid,
local web clients, i.e., not even set its browser to use the
locally-running squid?
J
On Sun, Jul 10, 2011 at 9:50 PM, Jenny Lee bodycar...@live.com wrote:
Is this a bug? If the network is down, shouldn't squid just generate
an error page, like ERR_CONNECT_FAIL, and not collapse like
How can you expect *machineS* to get a response from squid if network is
down?
Proxy server. Squid accepts clients on inside interface and
connects to internet servers on outside interface.
Outside interface goes down with inside interface still alive.
I would actually like to have the
Is this a bug? If the network is down, shouldn't squid just generate
an error page, like ERR_CONNECT_FAIL, and not collapse like this?
Logically, how would you expect squid to convey ERR_CONNECT_FAIL to the client
if the network is down?
I can think of only one case where this might make
Are you cloning the internet for Iran?
Jenny
Dear all,
I have a squid server and a separate server which holds a million pages from
a million URLs. I know that I can insert a page into the cache via squidclient
MYURL, but that uses an HTTP GET and downloads the page. Now I already have
these pages and just want to
Dear all,
I don't know which to use for squid, stable ext4 or ReiserFS.
Which has higher performance?
I think ReiserFS is not a wise choice.
- Its user base is limited and shrinking
- It had corruption issues in the past (especially with postfix)
- No vendor supports it
- Its
Good Lord!!!
The amount of free RAM in my system keeps decreasing. What happens
when RAM reaches zero? Does it remove old objects and free
up space?
It is probably being used by buffers and cache.
Running free -m
should show you how much available memory and cache there is.
Subject: Re: [squid-users] Memory issues
free -m
             total       used       free     shared    buffers     cached
Mem:          3722       3011        710          0        305       1352
-/+ buffers/cache:       1353       2369
Swap:         2047         21       2025
Do I genuinely need to increase the memory of this system?
No. It looks good.
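The "-/+ buffers/cache" row is plain arithmetic on the Mem: row: the memory actually available to applications is free + buffers + cached. A sketch using the numbers above (free's own row shows 2369 only because it rounds each column to MB before adding):

```shell
#!/bin/sh
# Columns of the Mem: row are total/used/free/shared/buffers/cached (MB).
echo "Mem: 3722 3011 710 0 305 1352" |
  awk '{ print "available:", $4 + $6 + $7, "MB" }'   # 710 + 305 + 1352
```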
I don't understand where you came up
NP: (rant warning) this happens if you followed almost any online tutorial for
disabling IPv6 in RHEL. Most only go so far as to make the kernel drop
IPv6 packets, rather than actually turning off the kernel control, which
would inform the relevant software that it cannot use IPv6 ports. So it
sends a
Dear Jenny and Amos,
I thought it worth mentioning that I too am having troubles with the
ACL processing of the request_header_access User-Agent configuration
directive. It seems like Jenny's issue is the same one I am seeing.
Using a src ACL in the directive doesn't work when you have a
Ouch! Add these at least:
$IPT6 -A INPUT -j REJECT
$IPT6 -A OUTPUT -j REJECT
$IPT6 -A FORWARD -j REJECT
$IPT6 -P INPUT DROP
$IPT6 -P OUTPUT DROP
$IPT6 -P FORWARD DROP
fi
And *that* is exactly the type of false disable I was talking about.
Squid and other software will
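The kernel control being referred to here is, I believe, the disable_ipv6 sysctl (present on RHEL6-era kernels); unlike the firewall rules above, it tells software that IPv6 is unavailable instead of silently dropping its packets. A sketch for /etc/sysctl.conf:

```
# Turn IPv6 off at the kernel control level; software probing for
# IPv6 support is then told it is unavailable, rather than having
# its packets silently dropped by the firewall.
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
```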
Hello Squid Team,
Thank you for the much awaited 3.2.0.9 release. This one seems to have one major
issue:
1) Peers are not honored. All connections going direct. I tried everything
possible but of no use. Can someone verify?
Others:
2) assertion failed: mem.cc:190: MemPools[type] == NULL
3)
to [::]: (2) No such file or directory
On 12/06/11 20:21, Jenny Lee wrote:
Subject: Re: [squid-users] WORKERS: Any compile option to enable?
commBind: Cannot bind socket FD 13 to [::]: (2) No such file or directory
On 12/06/11 16:17, Jenny Lee wrote:
I can't get the workers to work
On Sat, Jun 11, 2011 at 9:40 PM, Jenny Lee bodycar...@live.com wrote:
I'd like to know how you are able to do 13000 requests/sec.
tcp_fin_timeout is 60 seconds default on all *NIXes and available ephemeral
port range is 64K.
I can't do more than 1K requests/sec even with tcp_tw_reuse
Date: Sun, 12 Jun 2011 19:54:10 +1200
From: squ...@treenet.co.nz
To: squid-users@squid-cache.org
Subject: Re: [squid-users] squid 3.2.0.5 smp scaling issues
On 12/06/11 18:46, Jenny Lee wrote:
On Sat, Jun 11, 2011 at 9:40 PM, Jenny Lee wrote
Date: Sun, 12 Jun 2011 03:02:23 -0700
From: da...@lang.hm
To: bodycar...@live.com
CC: squ...@treenet.co.nz; squid-users@squid-cache.org
Subject: RE: [squid-users] squid 3.2.0.5 smp scaling issues
On Sun, 12 Jun 2011, Jenny Lee wrote:
On 12/06/11 18:46, Jenny Lee wrote:
On Sat, Jun
Date: Sun, 12 Jun 2011 22:47:25 +1200
From: squ...@treenet.co.nz
To: squid-users@squid-cache.org
Subject: Re: [squid-users] squid 3.2.0.5 smp scaling issues
On 12/06/11 22:20, Jenny Lee wrote:
Date: Sun, 12 Jun 2011 03:02:23 -0700
From: da
Date: Sun, 12 Jun 2011 03:35:28 -0700
From: da...@lang.hm
To: bodycar...@live.com
CC: squid-users@squid-cache.org
Subject: RE: [squid-users] squid 3.2.0.5 smp scaling issues
On Sun, 12 Jun 2011, Jenny Lee wrote:
Date: Sun, 12 Jun 2011 03:02:23
Date: Sun, 12 Jun 2011 14:26:09 +1200
From: squ...@treenet.co.nz
To: squid-users@squid-cache.org
Subject: Re: [squid-users] kid1| assertion failed: helper.cc:697:
hlp->childs.n_running > 0
On 12/06/11 14:16, Jenny Lee wrote:
Dear Squid
Dear Squid Users,
I occasionally get this with NCSA auth followed by a restart.
What does it mean?
Jenny
RHEL6 x64
Squid 3.2.0.7
Hello David,
We read your benchmarks with interest. Thank you for the work.
I have mentioned --disable-ipv6 issue before and its solution. Attaching it
for your perusal.
Jenny
one thing that I've found is that even with --disable-ipv6 squid will
still use IPv6 on a system that
I can't get the workers to work. They are started fine. However I get:
kid1| commBind: Cannot bind socket FD 13 to [::]: (2) No such file or directory
kid2| commBind: Cannot bind socket FD 13 to [::]: (2) No such file or directory
kid3| commBind: Cannot bind socket FD 9 to [::]: (2) No such file
I also cannot shut down squid when workers are enabled.
squid -k shutdown gives No Running Copy
I have to run a killall -9 squid
Also what happens when I have 2 cores but start 7 workers?
Jenny
From: bodycar...@live.com
To:
I'd like to know how you are able to do 13000 requests/sec.
tcp_fin_timeout is 60 seconds by default on all *NIXes and the available
ephemeral port range is 64K.
I can't do more than 1K requests/sec even with tcp_tw_reuse/tcp_tw_recycle with
ab. I get commBind errors due to connections in TIME_WAIT.
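The ceiling follows from simple arithmetic: each new outbound connection holds an ephemeral port for the TIME_WAIT period, so the sustained connection rate between one source and one destination is bounded by range divided by hold time. A back-of-envelope sketch using the figures above (64K ports, 60 seconds):

```shell
#!/bin/sh
# Roughly 64000 ephemeral ports, each held ~60 s in TIME_WAIT,
# caps sustained new connections/sec at range / hold time.
PORTS=64000
HOLD_SECONDS=60
echo $((PORTS / HOLD_SECONDS))   # prints 1066, matching the ~1K/sec observed
```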
I just realized that Cookie headers are also not obeyed when going through
peers.
Everything works going direct, but nothing works if you are using any peers.
I surely cannot be the only person out of all squid users who is bitten by
this anomaly.
Jenny
From: bodycar...@live.com
Hello Amos,
To: squid-users@squid-cache.org
Date: Thu, 9 Jun 2011 13:02:49 +1200
From: squ...@treenet.co.nz
Subject: Re: FW: [squid-users] Why doesn't REQUEST_HEADER_ACCESS work
properly with aclnames?
On Wed, 8 Jun 2011 17:01:39 +, Jenny Lee
Hello Squid Users,
I have a machine that has static connections (running apache, vsftpd, etc).
Upstream bandwidth is costly, so I would like to use our D-S-L connection to
save up on some traffic.
On the D-S-L line, the IP changes at each authentication [(PPPoE authentication using a
secondary IP
Hello Amos,
Is it possible to bind squid to an interface?
Squid uses the bind() API to the kernel. So no.
Thanks.
I think this sounded absurd :) Another option is probably tcp_outgoing_tos/mark?
Have you tried to get it working without Squid needing a particular
sending IP? When Squid
Hello Squid Users,
cache_peer 2.2.2.2 parent 3128 0 name=PARENT_X
On http connections, access log shows PARENT_X entry.
On https connections, access log shows 2.2.2.2 entry.
This messes up log processing.
Is there any reason for this?
Thanks.
Jenny
3.2.0.7
I would like to thank squid team for the good work on 3.2.0.7.
I went from 3.2.0.1 to 3.2.0.7 straight to development and faced no issues.
It has run reliably for 2 weeks.
1. Irritating 0 HTTP Response Code on CONNECT to peers fixed.
2. Equally irritating CD_SIBLING_HIT and all CD_ are
4. --disable-ipv6 does not work. We had to modify configure to include
#define USE_IPV6 0 to remove ipv6.
5. -fPIE does not work as always (standard on RHEL).
Is that all a list of fixes? Or are 4 and 5 still problems?
Hello Amos,
#4 and #5 are still problems.
#5 is bug 2996.
No difference whatever is done. PEER1, !PEER1, !PEER2... No peer... Separate
lines...
SRC IP is never available, so it always fails. PEER is available though, I
can make it work with using just PEER1. Going direct works also as expected.
Thanks.
Jenny
kid1| ACLChecklist::preCheck:
kid1| ACLChecklist::preCheck: 0x7504abc0 checking
'request_header_access User-Agent allow OFFICE_IP !PEER1'
kid1| ACLList::matches: checking OFFICE_IP
kid1| ACL::checklistMatches: checking 'OFFICE_IP'
kid1| aclIpAddrNetworkCompare: compare:
Date: Wed, 4 May 2011 19:36:56 -0400
From: far...@itouchpoint.com
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Access log not using logformat config line.
I don't have any specific access_log config line, but that's not the
issue.
It seems to me that ACL SRC is NEVER checked when going to a Peer.
WHAT I WANT TO DO:
acl OFFICE src 1.1.1.1
request_header_access User-Agent allow OFFICE
request_header_access User-Agent deny all
header_replace User-Agent BOGUS AGENT
[OFFICE UA should not be modified
Date: Fri, 29 Apr 2011 01:12:55 +1200
From: squ...@treenet.co.nz
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Persistent Connections to Parent Proxy
On 28/04/11 20:19, Mathias Fischer wrote:
Hi,
We use squid together with a
I'm a little confused by this scenario and your statement "It would be
nice if the crawler identified itself."
Is it spoofing an agent name identical to that on your OFFICE machines?
Even the absence of a U-A header is identification, in a way.
That was just an example. In its simplest form:
DO
Reality after looking at the code:
Mangling is done after peer selection, right at the last millisecond
before sending the headers down the wire. It is done on all HTTP
requests including CONNECT tunnels when they are relayed.
Peering info *is* available. But src ACL does not check for
I have 3.2.0.1 and unfortunately this does not work either. I will check on
3.2.0.7 (would that make a difference?).
May do. I don't recall changing anything there directly but the passing
around of request details has been fixed in a few places earlier which
may affect it.
Also,
When you say earlier, what would be the upper end of the timeframe?
(1 week, 1 month?)
By early I mean earlier than 1st May which was the next scheduled
monthly beta.
Specifically as soon as I can migrate a half dozen bug fixes around,
test for build failures and write the ChangeLog. 72
What is the definition of OFFICE ?
request_header_access are fast ACL which will not wait for unavailable
details to be fetched.
Ah! proxy_auth :)
Jenny
acl OFFICE src 2.2.2.2
request_header_access User-Agent allow OFFICE
request_header_access User-Agent deny all
header_replace
To: squid-users@squid-cache.org
Date: Tue, 19 Apr 2011 14:36:31 +1200
From: squ...@treenet.co.nz
Subject: RE: [squid-users] Why doesn't REQUEST_HEADER_ACCESS work properly
with aclnames?
On Mon, 18 Apr 2011 19:15:53 +, Jenny Lee wrote:
What is the definition of OFFICE
Sorry for not answering. There was just nothing I could be sure
about until now...
3.2.0.7 will be out early (and very soon) with fixes for the critical
and blocker bugs currently known to exist in 3.2.0.6 tarballs. The fixes
are now in 3.HEAD awaiting some maintenance and any
On Wed, 6 Apr 2011 11:26:09 +0800, Sharl.Jimh.Tsin wrote:
How about the dev branch? I found the tarball of the 6th version of
3.2.0.x; any information?
The bundles were made, however we have already found a few nasty
problems.
I'm giving it a few more days to see how much can be fixed.
Amos
Hello Squid Folks,
Here is an excerpt from squid.conf.documented:
# TAG: request_header_access
# Usage: request_header_access header_name allow|deny [!]aclname ...
This seems to work only as:
request_header_access User-Agent deny all
Why can't I do:
request_header_access
Hello Amos,
What is the definition of OFFICE ?
request_header_access are fast ACL which will not wait for unavailable
details to be fetched.
Ah! proxy_auth :)
Jenny
Hello Squid folks,
When are we going to see oa in logformat in 3.2?
This has existed on 2.7 for a very long while but seems to have been forgotten
for 3.2.
I see it is commented in Token.cc. Ditto in 3.HEAD.
Thanks
Jenny
On 29/03/11 02:45, Amos Jeffries wrote:
On 29/03/11 01:31, Jenny Lee wrote:
Hello Squid folks,
When are we going to see oa in logformat in 3.2?
Thanks for the reminder. The next 3.2 should have it.
I should also mention the 3.2 version will be %la to fit