What makes you think that? HTTP/1.1 does not require FIN, and as far as I can
see those URL responses all contain an explicit Connection: keep-alive.
The content of those packets between Squid and server may shed some more
light on it. Try saving a full packet dump and viewing it with wireshark.
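A capture command along these lines produces a file Wireshark can open (the interface name and Squid's listening port are assumptions; adjust to your setup):

```
# Capture full packets (-s 0) on the proxy host and write them to a file
tcpdump -i eth0 -s 0 -w squid-session.pcap tcp port 3128
```

Then open squid-session.pcap in Wireshark and use "Follow TCP Stream" on the failing connection.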
Hello everyone,
Is it possible to have a logformat that records only what the user typed
in the browser?
For example: User1 types www.one.com into the browser. That shows up in
Squid's log as:
request--- www.one.com
request--- .one.com
request--- .one.com/aaa.gif
...
I need all of those requests logged simply as www.one.com.
Hi John,
As Amos mentioned, it would be great to see the HTTP payload of the packets
(use the -Afnn switches for this). Also, traffic on the proxy itself is more
interesting.
IIRC, a 502 status code means that your proxy has some issue when reading
data from the origin server. Do you see anything suspicious in
On 9/10/2013 6:55 p.m., Stefano Malini wrote:
hi Dave,
so, I changed the line
http_access deny myLan
to
http_access deny myLan all
but the behavior is the same: Squid doesn't stop the requests.
In the log file there is 127.0.0.1 for every HTTP request; what does that mean?
It means the request is
On Wed, Oct 09, 2013 at 08:41:55AM +1000, John Kenyon wrote:
Hi All,
Hope someone can shed some light on a problem I am experiencing... I can
consistently reproduce a "(104) Connection reset by peer" error on a certain
website when trying to log in.
When the 502 bad gateway issue appears
On 9/10/2013 7:21 p.m., Usuario Lista wrote:
Hello everyone,
Is it possible to have a logformat that records only what the user typed
in the browser?
No. Squid is only aware of what it gets asked for.
For example: User1 types www.one.com into the browser. That shows up in
Squid's log as:
request--- www.one.com
Often, when I see 502 errors in our Squid 3.3.9 access.log, the problem
is our company firewall: the firewall blocks packets from the origin server,
so Squid cannot receive them and writes 502 errors to access.log.
Try checking your firewall logs. HTH.
--
Peter Benko
Hi Peter,
Thanks Alex and Amos.
I'm now using two ACLs, dst and dstdomain, and it works fine.
Shawn, no, I want to bump more domains. Yes, I thought about this method.
My first message was just an example.
2013/10/8 shawn wilson ag4ve...@gmail.com:
If I understand correctly, you want to bump IPs from one domain,
Amos Jeffries wrote:
On 9/10/2013 9:39 a.m., Dash Four wrote:
I have the following problem: I use the hosts file to store static
address mappings, usually containing sites which use geo address
mapping (in other words, determine the ip address one is going to use
depending on the geographic
On 10/10/2013 1:00 a.m., Dash Four wrote:
Amos Jeffries wrote:
On 9/10/2013 9:39 a.m., Dash Four wrote:
I have the following problem: I use the hosts file to store static
address mappings, usually containing sites which use geo address
mapping (in other words, determine the ip address one is
Looks like turning off X-Forwarded-For has been disabled now. Nothing
works. I've tried:
forwarded_for delete
forwarded_for off
forwarded_for transparent
request_header_replace X-Forwarded-For 127.0.0.1
request_header_access X-Forwarded-For deny all
reply_header_access X-Forwarded-For deny all
On 10/09/2013 06:00 AM, Dash Four wrote:
I then stopped (-k shutdown)
and then started squid without touching the existing cache - again,
squid was still referring to the old host-ip mapping.
For HTTP misses, the above is not possible (as Amos said) because Squid
does not store its DNS caches
On 10/09/2013 10:15 AM, merc1...@f-m.fm wrote:
Looks like turning off X-Forwarded-For has been disabled now. Nothing
works.
To see what I'm talking about, go to
http://www.ericgiguere.com/tools/http-header-viewer.html
The above web page hosts a script that cannot be used as intended
Hello,
I need to run several Squid child nodes on budget VPS servers to serve
as web caches. I have a parent proxy that serves as a content filter.
So the content filter doesn't become the bottleneck in terms of
bandwidth would it be possible to do the following?
1. Child proxy receives request
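One way to sketch the fan-out Jeffrey describes, assuming the filter runs as an ordinary Squid parent (hostnames and ports here are hypothetical):

```
# squid.conf on each child cache
cache_peer filter.example.com parent 3128 0 no-query default
never_direct allow all    # force every miss through the filtering parent
```

Whether the children can later fetch filtered content directly, keeping bulk bandwidth off the parent, depends on what the filter actually needs to inspect.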
Well, for Heaven's sake.
What motivation could he possibly have for dinking with the headers?
On Wed, Oct 9, 2013, at 11:08, Alex Rousskov wrote:
On 10/09/2013 10:15 AM, merc1...@f-m.fm wrote:
Looks like turning off X-Forwarded-For has been disabled now. Nothing
works.
To see what
I think you missed Alex's point.
That page itself sits behind a reverse proxy that adds X-Forwarded-For.
So using that for your testing isn't going to help.
On 10/09/2013 03:01 PM, merc1...@f-m.fm wrote:
Well, for Heaven's sake.
What motivation could he possibly have for dinking with the
Hey,
In the old days, the hosts file was the only option.
If you are really into this, you can use a resolver for the task:
start a DNS resolver sitting on, let's say, localhost, with the ability
to answer some lookups itself and forward the rest, and you can use that
resolver to do the same thing.
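As a sketch of that idea, assuming dnsmasq as the local resolver (the addresses and hostname below are made up):

```
# /etc/dnsmasq.conf: answer selected names locally, forward everything else
address=/www.example.com/203.0.113.10
server=192.0.2.53

# squid.conf: make Squid ask only the local resolver
dns_nameservers 127.0.0.1
```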
I
I was sitting trying to figure out what every Squid config directive does.
I started reading again and again, and then I saw:
http://www.squid-cache.org/Doc/config/read_ahead_gap/
Why would I need that anyway? Won't my little 3.x GHz machine do all
the tricks?
So I tried to understand
I didn't miss his point, and I understand exactly what he said.
My question is: what possible motive could ericgiguere have for
misrepresenting headers on a header query site?
It just doesn't make sense.
On Wed, Oct 9, 2013, at 12:05, Will Roberts wrote:
I think you missed Alex's point.
That
I'm sure it wasn't malicious. That tool was put up in 2003. At some
point in the past 10 years he probably put a reverse proxy in front of
his site. Maybe you should email him and tell him he's broken his header
tool.
On 10/09/2013 03:55 PM, merc1...@f-m.fm wrote:
Didn't miss his point and I
On 10/09/2013 01:38 PM, Eliezer Croitoru wrote:
I was sitting trying to figure out what every Squid config directive
does. I started reading again and again, and then I saw:
http://www.squid-cache.org/Doc/config/read_ahead_gap/
As you probably know, the option is supposed to limit internal
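For reference, the directive takes a byte size in squid.conf; 16 KB is the documented default:

```
read_ahead_gap 16 KB
```

Raising it lets Squid buffer more of a server reply ahead of a slow client, at the cost of extra memory per connection.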
Hi, I am using Squid 3.3.9 with Kerberos authentication on my network.
We now have a requirement to give guest users access on the same proxy.
Is it possible to run Squid on an additional port and have different
ACLs for users connecting to that port?
I know ideally having a
Hey Alex,
Thanks for the words of explanation.
I could think of another situation, in which the client is pumping the
network on a couple of TCP layers and WWW (port 80) is not the most
prioritized traffic. In that case the client tries again and again to
fetch at the TCP level, but the milk is not
On 10/09/2013 12:28 PM, Jeffrey Mealo wrote:
I need to run several Squid child nodes on budget VPS servers to serve
as web caches. I have a parent proxy that serves as a content filter.
Does the content filter make its decision based on the request [URL]
only? Or does it convert a HEAD request
Looks like multiple instances is an option.
On Wed, Oct 9, 2013 at 11:04 PM, JC Putter jcput...@gmail.com wrote:
Hi i am using Squid 3.3.9 with Kerberos authentication on my network.
we know have a requirement where we need to give guest users access on
the same proxy, is it possible to run
So this rule:
iptables -t nat -A PREROUTING -p tcp -i eth0 --dport 80 -m hashlimit \
    --hashlimit 100/second --hashlimit-burst 100 \
    --hashlimit-mode dstport --hashlimit-name ratelimit80 \
    -j REDIRECT --to-port $AbuseServerTriggerAndNotifyPage
should do the trick..
But as Amos
On 10/09/2013 03:57 PM, Eliezer Croitoru wrote:
While external_acl_type is very tempting, an eCAP module would be the
better choice for performance reasons.
ICAP has the upper hand in allowing concurrency by default.
So external_acl_type is nice and helps a lot, but it would add some over
hi,
under Debian 5 and 6 with squid2 we had a PHP script hooked in as an
external_acl_type helper, where every request gets counted in a database.
After upgrading to 3.1 the script dies; when that happens (pretty often),
Squid does not pass anything via stdin (or the script couldn't read it).
We had this
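For comparison, the external_acl_type helper protocol itself is just line-in/line-out on stdin/stdout. A minimal sketch in Python follows (the database counting from the original PHP script is omitted; note that unflushed output is a classic way for a squid2-era helper to look dead under 3.1):

```python
import sys

def handle(line):
    """Process one request line from Squid; return a verdict, or None to skip."""
    line = line.strip()
    if not line:
        return None
    # ... the original PHP script counted the request in a database here ...
    return "OK"

def main():
    # Squid writes one request per line on stdin; the helper must answer
    # one verdict line per request and flush it, otherwise Squid sees a
    # hung helper and eventually kills it.
    for line in sys.stdin:
        verdict = handle(line)
        if verdict is not None:
            sys.stdout.write(verdict + "\n")
            sys.stdout.flush()

if __name__ == "__main__":
    main()
```

The format of each input line is whatever %FORMAT tokens the external_acl_type directive specifies, so the parsing inside handle() would need to match your squid.conf.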
No need for two instances ...
just get Squid listening on as many ports as you need:
http_port port1
http_port port2
...
http_port portN
create ACLs for each port
acl port1 myport port1
acl port2 myport port2
...
acl portN myport portN
and get all your http_access rules
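Filled in with hypothetical port numbers and ACL names, the pattern looks like:

```
http_port 3128
http_port 3129

acl kerb_port myport 3128
acl guest_port myport 3129

http_access allow kerb_port authenticated_users
http_access allow guest_port guest_networks
http_access deny all
```

(authenticated_users and guest_networks stand in for whatever auth/src ACLs already exist in the config.)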
Hi all,
I've been banging my head against an auth issue with Squid3 for some
time now, I'm hoping someone here will be able to shine a light on it.
I have installed squid 3 (3.1.20) on Debian 7 (using the Debian package)
I've configured it according to the wiki doc:
On 10/10/2013 9:05 a.m., Will Roberts wrote:
I'm sure it wasn't malicious. That tool was put up in 2003. At some
point in the past 10 years he probably put a reverse proxy in front of
his site. Maybe you should email him and tell him he's broken his
header tool.
But ... has he actually
On 10/10/2013 10:27 a.m., Alex Rousskov wrote:
On 10/09/2013 12:28 PM, Jeffrey Mealo wrote:
I need to run several Squid child nodes on budget VPS servers to serve
as web caches. I have a parent proxy that serves as a content filter.
Does the content filter make its decision based on the
On 10/10/2013 12:33 p.m., Thomas Stegbauer wrote:
hi,
under Debian 5 and 6 with squid2 we had a PHP script hooked in as an
external_acl_type helper, where every request gets counted in a database.
After upgrading to 3.1 the script dies; when that happens (pretty often),
Squid does not pass anything via
My server runs squid 3.3.9, and workers is set to 3.
The following is from cache.log:
2013/10/10 08:48:36 kid1| ctx: enter level 0:
'http://calendar.snsapp.qq.com/cgi-bin/my_calendar_app_flag?g_tk=1964882004'
2013/10/10 08:48:36 kid1| WARNING: suspicious CR characters in HTTP header
{Keep-Alive: ... Server: QZHTTP-2.38.17}
On 10/10/2013 12:40 p.m., Leonardo Rodrigues wrote:
No need for two instances ...
just get squid listening on how many ports you need it to:
http_port port1
http_port port2
...
http_port portN
create ACLs for each port
acl port1 myport port1
acl port2 myport port2
...
acl portN
On Wed, Oct 9, 2013, at 20:35, Amos Jeffries wrote:
All such online header tools are really only delivering a report of the
headers which reached them. None of them have ever displayed The
Truth(tm). The internals of the browser itself contain a set of layers
doing header additions and
On 10/10/2013 2:19 p.m., Luke Pascoe wrote:
Hi all,
I've been banging my head against an auth issue with Squid3 for some
time now, I'm hoping someone here will be able to shine a light on it.
I have installed squid 3 (3.1.20) on Debian 7 (using the Debian package)
I've configured it according