Hi,
I'm running about 50 squids on different machines, and they work
great except for 4 of them. The 4 I'm having problems with are running
on Debian 6 in a Virtuozzo VPS.
Kernel: 2.6.18-028stab070.5
Squid3: 3.1.6-1.2
Non-access related config options:
cache_dir ufs /var/spool/squid3 100
On 07/02/2011 09:33 PM, Amos Jeffries wrote:
There are a few memory leaks and resource over-consumption problems
resolved since 3.1.6. Please try the 3.1.12 package from Debian
Wheezy/Testing repositories.
Amos,
I looked at the change log for 3.1.13 and all the versions back to the
one I've
On 07/03/2011 11:32 AM, Will Roberts wrote:
On 07/02/2011 09:33 PM, Amos Jeffries wrote:
There are a few memory leaks and resource over-consumption problems
resolved since 3.1.6. Please try the 3.1.12 package from Debian
Wheezy/Testing repositories.
Amos,
3.1.12-1 has been working great
On 07/18/2011 06:33 PM, Amos Jeffries wrote:
Sounds like the problem is in logrotate. The squid -k rotate with
logfile_rotate 0 just closes the logs and reopens whatever is now using
the filenames. logrotate is fully responsible for moving the file and
creating a new one ready for the squid
Here are simple steps to reproduce the problem I see with 3.1.12:
1. Add 127.0.0.1 bogus to your /etc/hosts
2. Add http_port bogus:8080 to your squid.conf
3. Restart squid
4. Modify /etc/hosts so that bogus no longer resolves
5. Force a log rotation: logrotate -f /etc/logrotate.d/squid3
which
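For convenience, a shell transcription of those steps (a sketch only: paths assume Debian's squid3 packaging, and this should be run as root on a throwaway test box):

```shell
# Transcription of the repro steps above; paths assume Debian's squid3 package.
echo '127.0.0.1 bogus' >> /etc/hosts                   # step 1
echo 'http_port bogus:8080' >> /etc/squid3/squid.conf  # step 2
service squid3 restart                                 # step 3
sed -i '/[[:space:]]bogus$/d' /etc/hosts               # step 4: name no longer resolves
logrotate -f /etc/logrotate.d/squid3                   # step 5: force rotation
```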
Hi Amos,
On Fri, Jul 22, 2011 at 12:32 AM, Amos Jeffries squ...@treenet.co.nz wrote:
Can you check the contents of the squid.pid file vs the processes that are
actually running between step (6) and (7).
Sure, here are the processes after each step that would cause a change:
Step 3:
1879 ?
On Wed, Aug 10, 2011 at 12:56 PM, alexus ale...@gmail.com wrote:
2) add ip for 24h to a trusted list, so it will not prompt for
userid/password until 24h is expired
Add an external_acl_helper that checks against a list of trusted IPs.
It's up to you to maintain that list, but that shouldn't be
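A minimal sketch of that idea in squid.conf (the helper path is an assumption; `external_acl_type` caches each helper verdict, so `ttl=86400` gives the 24-hour window without re-prompting):

```
# Hypothetical helper that prints OK for trusted source IPs, ERR otherwise
external_acl_type trusted_ip ttl=86400 %SRC /usr/local/bin/check_trusted.sh
acl trusted external trusted_ip
http_access allow trusted
# everything else falls through to the normal auth rules
```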
Alexei,
Can you provide us with the output of ip -6 route?
I've seen this problem when my server was configured with a bogus IPv6 route.
--Will
On Fri, Aug 19, 2011 at 8:34 AM, Alexei Ustyuzhaninov
al...@alust.homeunix.com wrote:
On 17.08.2011 03:53, Amos Jeffries wrote:
On Tue, 16 Aug 2011
Hello,
I'd like to capture POST data sent with requests in addition to the
headers that are normally available through squid's logging. I've got
the code and I'm looking mainly at the client and access log classes.
Any ideas as to what the best approach would be in order to do this?
Add another
On Thu, Sep 1, 2011 at 1:10 AM, Amos Jeffries squ...@treenet.co.nz wrote:
Before you go any further. Why? "I'd like to" is nowhere near a good enough
reason to touch or even look at that data.
I suppose "because I said so" isn't good enough either? :)
Seriously though, what I'm trying to do is
On 09/11/2011 11:14 PM, John Kenyon wrote:
Hi Michael,
I tried disabling ipv6 - no luck! Still getting 30-60 second wait to load this
page.
It *should* take 2-3 seconds to get the initial page up... how long does it
take for you?
https://www.my.commbank.com.au/netbank/Logon/Logon.aspx
On 10/20/2011 04:58 PM, John H. Nyhuis wrote:
Hi, I'd appreciate some squid.conf assistance.
I'm trying to build a squid-in-the-middle server that will bump a client's
http request to https. This is not exposed to the real world; it's to help
developers who need to capture and decrypt
Hi,
I'm trying to use SSLBump, but whenever I visit a new HTTPS website I'm
asked to authenticate again, and the prompt makes it seem like it's
coming from the website instead of the proxy. Then if I hit a site over
HTTPS that does require a user/pass, via a browser prompt, squid assumes
On 10/29/2011 10:50 AM, Martin Birgmeier wrote:
I have full IPv4/IPv6 connectivity - with a glitch: one host which
announces both IPv4 and IPv6 addresses can in fact only be reached over
IPv4.
How do I configure squid to try only the IPv4 address for this host?
You can specify the host's IP
Hi,
I just updated to 3.2.0.13 because it looked like my squid was having
problems updating certs and I saw a fix for that in the changelog. I
seem to still have the same problem, but now with a different (better)
error message.
It looks like the certificate is properly removed from disk,
On 10/30/2011 11:46 PM, Alex Rousskov wrote:
Hi Will,
Please file a bug report with Squid bugzilla, including the exact
error message and other relevant details. Posting cache.log with
debug_options set to ALL,9 may be helpful, especially if you can
reproduce the problem with just a few
Hi,
I'm trying to use Squid 3.2.0.14, but whenever I run squid it exits
after only printing:
FATAL: Ipc::Mem::Segment::create failed to
shm_open(/squid-squid-page-pool.shm): (13) Permission denied
For the moment I've switched HAVE_SHM to 0 so I can continue testing,
but what does the
sudo apt-get install squid3
(at least on Debian)
--Will
On Wed, Jan 25, 2012 at 1:57 PM, berry guru berryg...@gmail.com wrote:
I'm wondering how to install the latest version of Squid ...version
3.1 on Ubuntu server using apt-get. When I run the command 'sudo
apt-get install squid' its
'. Will this
suffice? I'm afraid I'll have problems having both Squid
installations on this server.
On Wed, Jan 25, 2012 at 11:01 AM, berry guru berryg...@gmail.com wrote:
Dang! I was putting in the squid-3 for some odd reason. Thanks Will!
On Wed, Jan 25, 2012 at 11:00 AM, Will Roberts
You can use a hosts file to force certain domains to resolve to different IPs.
http://www.squid-cache.org/Doc/config/hosts_file/
--Will
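A hedged example of that directive (the file path and address are illustrative): the file uses standard hosts(5) format, and Squid consults it before DNS.

```
# squid.conf
hosts_file /etc/squid3/hosts

# /etc/squid3/hosts -- standard hosts(5) format, address is an example
203.0.113.10   example.com
```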
On Mon, Jan 30, 2012 at 1:00 PM, Carter, David dcar...@ddcadvocacy.com wrote:
I looked in the FAQ, but I'm not sure even what to call what I'm looking for. I
Hi,
Is anyone using digest auth with squid 3.2.0.14? Can you try accessing
your squid with the correct username and wrong password? That fails as
I'd expect in squid 3.2.0.13 (and 3.1.18), but succeeds in 3.2.0.14.
At the moment I don't see how that could be a problem in my setup, but
I'd love to
Hi,
I'm trying to log the name of the ACL that allowed/denied access for a
particular request. I have a patch that seems to work fine on all my
machines except one. On that one machine it'll work fine for several
hours, but then begins logging other garbage; sometimes parts of URLs,
other
On 04/02/2012 08:41 PM, Amos Jeffries wrote:
On 03.04.2012 12:02, Will Roberts wrote:
What you are logging is the last ACL tested. In the case of default
rules, they do not get tested as matches, so the deny line there above
will deny with ACL name bar.
Right. In my config the last ACL tested
I think you're seeing bug #3405 which has a temporary patch attached:
http://bugs.squid-cache.org/show_bug.cgi?id=3405
--Will
On Tue, Apr 10, 2012 at 5:52 AM, Bijoy Lobo bijoy.l...@paladion.net wrote:
I have configured SSL-BUMP and Dynamic SSL creation. However, my first
attempt to facebook
Hi,
I've found my squid 'stuck' a couple times the past week. It'll be
spinning on one of my cores and not responding to connections or any
signals and I have to kill -9 it and restart.
I know the 3.2 branch has moved on a bit, but I'd like to try and debug
this to make sure it's something
On 05/27/2012 07:35 AM, Amos Jeffries wrote:
strace is usually the best for this type of issue. That will show where
it's looping and you can then look up changes to the component in the
changeset archive to see if anything similar is fixed.
Amos, thanks I'll try that next time it happens.
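For reference, attaching to the stuck process might look like this (a sketch: the pid file path assumes the Debian squid3 package, and the output file is illustrative):

```shell
# Attach to the running squid, follow children, log timestamped syscalls;
# detach with Ctrl-C. Pid file path is the Debian default.
strace -f -tt -p "$(cat /var/run/squid3.pid)" -o /tmp/squid-strace.log
```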
I've got this happening again, this time with 3.2.0.17.
On 05/27/2012 11:20 AM, Will Roberts wrote:
On 05/27/2012 07:35 AM, Amos Jeffries wrote:
strace is usually the best for this type of issue. That will show where
it's looping and you can then look up changes to the component
On 06/17/2012 08:08 PM, Will Roberts wrote:
strace is producing no output. Infinite loop without syscalls?
I also tried attaching with gdb, but even as root I'm getting ptrace:
Operation not permitted. Any ideas on what that means? Or other ways to
get some information for you guys?
I'm still
On 07/09/2012 02:18 AM, Alan wrote:
A quick search suggests that you are using some kernel security crap, I
don't know much about it but try this:
echo 0 > /proc/sys/kernel/yama/ptrace_scope
Or simply start squid from gdb instead of attaching to the existing process.
Alan,
I believe I stumbled
On 08/18/2012 08:02 AM, Robert Collins wrote:
On Sat, Aug 18, 2012 at 2:51 PM, Bennett Haselton benn...@peacefire.org wrote:
I installed squid 3.1.10 on CentOS 6.3 with the default squid.conf.
When I test it out from localhost:
The following error was encountered while trying to retrieve the
Hi,
I'm using squid 3.1.20 as a reverse proxy to provide an SSL frontend as
well as caching.
I'm looking for configuration directives that would allow me to prevent
squid from being susceptible to CRIME. Is there a way to pass a flag to
the SSL library to disable compression (CRIME)?
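One possibility, with a big caveat: this only works if the Squid/OpenSSL build in question recognizes a NO_COMPRESSION token in `options=` (mapping to OpenSSL's SSL_OP_NO_COMPRESSION); the port line below is purely illustrative. Failing that, an OpenSSL built with `no-comp` disables TLS compression globally.

```
# Assumption: the build's option parser must know NO_COMPRESSION for this to work
https_port 443 accel cert=/etc/squid3/cert.pem key=/etc/squid3/key.pem options=NO_COMPRESSION
```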
On 01/03/2013 11:16 PM, Woon Khai Swen wrote:
Dear all,
I found out the self signed ssl root cert for transparent SSL interception (SSL
Bump + origin cert mimicking + dynamic cert generation) is valid only for 365
days max, no matter how many additional days specified in openssl cert
On Tue, Jan 15, 2013 at 2:39 PM, dweimer dwei...@dweimer.net wrote:
01-15-2013 12:24:34PM 0 10.20.146.43 NONE/400 388 HEAD / - NONE/-
text/html
01-15-2013 01:00:01PM 0 10.20.146.43 NONE/400 388 HEAD / - NONE/-
text/html
Someone's doing a HEAD request against your proxy as if it was
On 04/03/2013 12:50 PM, Vernet Jerome wrote:
It's OK now, thanks for the help! Unfortunately, 3.1.23 does not help with the bugs we
have (like on http://entreprises.edf.com, where with squid nothing will
display).
For what it's worth, I can view that website normally through my 3.1.23
squid install.
I think you missed Alex's point.
That page itself sits behind a reverse proxy that adds X-Forwarded-For.
So using that for your testing isn't going to help.
On 10/09/2013 03:01 PM, merc1...@f-m.fm wrote:
Well for Heaven's sake.
What motivation could he possibly have for dinking with teh
understand exactly what he said.
My question is what possible motive could ericgiguere have for
misrepresenting headers, on a header query site?
It just doesn't make sense.
On Wed, Oct 9, 2013, at 12:05, Will Roberts wrote:
I think you missed Alex's point.
That page itself sits behind
Unless you do SSL bumping, Squid will not touch the contents of HTTPS
connections.
--Will
On 11/22/2013 09:12 AM, Madhav V Diwan wrote:
Add this directive to your squid.conf file
forwarded_for on
The documentation for the directive is here:
Hi,
I'm trying to use the SMP Scale feature added in 3.2 and I'm having a
little trouble activating it. If I add workers = 2 to my squid.conf I
get the following error during startup:
FATAL: Bungled /etc/squid3/squid.conf line 3: workers = 1
I built my own instead of using a pre-built
On 12/30/2013 11:16 AM, Alex Crow wrote:
Hi,
Are you sure you don't have it in twice once you add your line? Check
line 3 of the conf to make sure it's not there already.
Cheers
Alex
Alex,
Yes I'm sure it's only in the file once, it's pretty small:
# CUSTOM OPTIONS
#
On 12/30/2013 01:00 PM, Alex Crow wrote:
Should it not be:
workers 2
rather than
workers = 2
?
Alex
Yep it sure should be. Thanks.
Hi,
I'm working with an SmpScale configuration with 2 workers defined. Each
worker has its own set of unique ports that it listens on. The
coordinator process doesn't have any http_port lines and generates tons
of these warnings:
ERROR: No forward-proxy ports configured
That doesn't seem
On Thu, Jan 16, 2014 at 9:19 AM, Will Roberts ironwil...@gmail.com wrote:
Hi,
I'm working with an SmpScale configuration with 2 workers defined. Each
worker has its own set of unique ports that it listens on. The coordinator
process doesn't have any http_port lines
On 01/15/2014 07:32 PM, Amos Jeffries wrote:
Something strange going on here with your Coordinator. That error is
only produced when actively generating a response that needs to embed
a URI for some resource served by Squid.
What is your coordinator doing that needs it to be aware of the
On 01/16/2014 04:32 AM, Amos Jeffries wrote:
Aha! Excellent catch.
For a next test (and workaround) try adding an if for the coordinator
process number around a dummy mime.conf file with just a comment in it.
If that works, please report this as a bug with the relevant details.
Amos,
That
Hi,
I'm having a problem with some of my squids where they'll crash with one
of these two messages:
FATAL: dying from an unhandled exception: AddOpenedHttpSocket(s->listenConn)
FATAL: dying from an unhandled exception: HttpSockets[NHttpSockets] 0
I haven't seen anything on the list with that
of messages about closing old connections due to
lifetime timeout, is there any possibility that we're hitting a fd
limit? Or something else that would cause opening a connection to fail?
--Will
On 01/21/2014 05:53 PM, Will Roberts wrote:
Hi,
I'm having a problem with some of my squids where
of them. I can check how many fds
are in use next time this happens.
On 01/21/2014 05:53 PM, Will Roberts wrote:
Hi,
I'm having a problem with some of my squids where they'll crash with
one of these two messages:
FATAL: dying from an unhandled exception:
AddOpenedHttpSocket(s->listenConn
of them. I can check how many fds
are in use next time this happens.
On 01/21/2014 05:53 PM, Will Roberts wrote:
Hi,
I'm having a problem with some of my squids where they'll crash with
one of these two messages:
FATAL: dying from an unhandled exception:
AddOpenedHttpSocket(s->listenConn
On 01/23/2014 12:49 AM, Eliezer Croitoru wrote:
Hey Will,
About the 3.4.2, what OS are you using?
Is it a self compiled version of squid?
squid -v will give the basic idea of the squid configurations.
I'm using a self-compiled version on Debian 6 (64bit only at the moment).
configure
On 01/23/2014 07:26 PM, Eliezer Croitoru wrote:
On 23/01/14 15:17, Will Roberts wrote:
The server I just checked has about 906 FDs open, however, looking at
the limits it can have up to 65535 open. So that's probably not directly
the issue, unless there's a select call somewhere that's failing
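One quick way to spot-check that count on Linux (the /proc layout is Linux-specific; `$$` here is the shell's own PID, used only so the snippet is self-contained, substitute squid's PID in practice):

```shell
# Count open file descriptors for a process via /proc (Linux-specific).
# Replace $$ with e.g. "$(cat /var/run/squid3.pid)" to inspect squid itself.
pid=$$
nfds=$(ls "/proc/$pid/fd" | wc -l)
echo "open fds for $pid: $nfds"
```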
On Mon, Feb 15, 2016 at 9:02 AM, Amos Jeffries wrote:
>
> Hmm, years' worth of code change between those. I don't see anything
> NFMARK related in my patch list. But possibly.
>
And ultimately it does not appear to be a code change, but rather a build
change that bit me. In
In Advance
--
Geoff Roberts
IT Supervisor
Saint Mark's College
Port Pirie, South Australia
geoff...@stmarks.pp.catholic.edu.au
script files in .php or perl to do
the redirect is enough to put me off. I don't speak
C, perl, php or java.
I wish they'd just pick ONE script language and leave it at that.
Thanks for all your help and advice, it's appreciated.
Regards
--
Geoff Roberts
IT Supervisor
Saint Mark's
On Tuesday, 17 February 2009 at 7:52 am, in message
4999d91a.5080...@gci.net,
Chris Robertson crobert...@gci.net wrote:
Geoffrey ROBERTS wrote:
How was the old version installed?
Tick a box when installing SLES10.
No idea of the actual method.
I eventually found squid.exe and some other
Dear Squid-masters,
I would like to configure Squid so that it always serves the latest
available version of any given URL, even if the URL is no longer available
at the original server. In this way, Squid's clients would never receive an
error for a given URL, as long as that URL had been
Hello, Leonardo.
Thanks for your prompt reply!
On 8/21/08 3:02 PM, Leonardo Rodrigues Magalhães
[EMAIL PROTECTED] wrote:
are you sure you wanna do this kind of configuration ???
Yes. I am aware that my request is unusual, as this is to be a
special-purpose installation of squid.
have you
Hi,
Has anyone got any experiences of using N2H2 content filtering with
Squid?
Thanks in advance
Sam Roberts
Howdee. I'm looking for help on an issue with my bridging Squid server. I'm
new to configuring Squid but have been in and around it for a few years,
so I'm not a total idiot with it. I have:
Mandriva 2005
2 NICs - eth0, eth1
Squid 2.5 Stable
1.06 Bridge-Tools
Network layout is like this:
Hi,
I am thinking about implementing delay-pools in my squid transparent proxy on my
Linux box. The reason is that my ISP (cable modem) has a monthly limit on the
number of bytes I can download. This didn't use to be a problem, but recently
my two kids have got laptops from school and all of a
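A hedged sketch of what that could look like (subnet and byte rates are illustrative, not from the poster's setup): a class-2 pool gives one aggregate bucket for the whole LAN plus a per-host bucket, which fits throttling a couple of laptops.

```
acl lan src 192.168.1.0/24
delay_pools 1
delay_class 1 2
# aggregate ~100 KB/s for the LAN, ~50 KB/s per individual host
delay_parameters 1 102400/102400 51200/51200
delay_access 1 allow lan
delay_access 1 deny all
```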
Hi,
I have been using squid for about 4-5 months successfully on a RedHat 7.1 box
which acts as the nat router / firewall between the I-net and my LAN. A couple
of days ago I decided to upgrade to Fedora Core4. I have now got most things
working, but the browsers on my LAN clients are not able
Sent: Monday, June 27, 2005 7:36 PM
To: [EMAIL PROTECTED]
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Squid not starting up after update to Fedora Core4
Vaughan Roberts wrote:
Hi,
I have been using squid for about 4-5 months successfully on a RedHat
7.1 box which acts as the nat
you disabled SELinux, it still did not work ?
Regards
Gert Brits
Senior Engineer
Technology Concepts
Tel +27 11 803 2169
Fax +27 11 803 2189
Web www.techconcepts.co.za
-Original Message-
From: Vaughan Roberts [mailto:[EMAIL PROTECTED]
Sent: Monday, June 27, 2005 12:28 PM
To: 'Emilio
I have a customer that uses a proxy server to authenticate users to an
internet based system. In summary, it is installed in a DMZ as a reverse
proxy accessing the customers service. The Service is built up using
several VIP addresses over http. When an Internet user accesses the
external URL they
I have come across a strange problem, after what could be days, hours or
even 10 minutes my transparent proxy will just stop working. I have tried
to restart squid, flush and reset my firewall rules, restart NoCatAuth,
and in the end the only thing that will get this working again is a full
On Thu, 25 Mar 2004 16:35:24 +0200, Denis Vlasenko
[EMAIL PROTECTED] wrote:
On Thursday 25 March 2004 08:44, E Roberts wrote:
I have come across a strange problem, after what could be days, hours or
even 10 minutes my transparent proxy will just stop working. I have
tried
tcpdump of this? What
Mar 2004 16:35:24 +0200, Denis Vlasenko
[EMAIL PROTECTED] wrote:
On Thursday 25 March 2004 08:44, E Roberts wrote:
I have come across a strange problem, after what could be days, hours or
even 10 minutes my transparent proxy will just stop working. I have
tried
tcpdump of this? What _exactly_
I have been going one by one through each package I can find to give me the
status of my squid server. So far I've been disappointed. The best
information I can get is from SCALAR (http://scalar.risk.az/), but it
doesn't save the data and is only good for one run, plus the lack of HTML
output
Been trying to find ways to tweak my proxy setup and came across some info
about caching windowsupdate.com, and also some other sites. I was
wondering if anyone has anything to add to this for other sites that might
be popular and/or more Microsoft sites?
This is what I found so far. I
Hi,
I am running squid version 2.5 on Linux and am using NCSA auth to control www
access.
We have some Windows XP workstations which I find can't download updates from
the Microsoft sites. I have acls
acl WU1 dstdom_regex -i download.microsoft.com
acl WU1 dstdom_regex -i windowsupdate.com
acl
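The quoted config cuts off there, but a completed version of the idea might look like this (a hypothetical reconstruction; the regex escaping and rule order are assumptions, not the poster's actual file):

```
acl WU1 dstdom_regex -i download\.microsoft\.com
acl WU1 dstdom_regex -i windowsupdate\.com
# let the update hosts through before the NCSA auth check
http_access allow WU1
acl authed proxy_auth REQUIRED
http_access allow authed
```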
I have a client whose Squid proxy is blocking PATCH requests (returning 400
bad request) and defeating functionality in web software that uses this
HTTP method.
The "Server" header is "squid/2.7.STABLE6"
I assume that an update to Squid would resolve this issue but need to
confirm before pushing