Hi,
For those who need a step-by-step tutorial in configuring transparent
proxy, read this:
http://www.sublime.com.au/squid-wccp/
Regards
Mun Fai
Wei Ming Long wrote:
Hi Jason,
Did you enable ip_forwarding in your Linux kernel?
Check with cat /proc/sys/net/ipv4/ip_forward; if it's 0, then echo 1 into it.
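A quick sketch of that check on a Linux box (the write needs root; the /etc/sysctl.conf persistence hint is an assumption about your distribution):

```shell
# Check whether the kernel forwards IPv4 packets: 0 = off, 1 = on.
cat /proc/sys/net/ipv4/ip_forward

# To enable it for the running kernel (as root):
#   echo 1 > /proc/sys/net/ipv4/ip_forward
# To make it survive a reboot, add this line to /etc/sysctl.conf:
#   net.ipv4.ip_forward = 1
```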
Hi,
I try to configure squid2.5stable3 in accelerator mode, but I have some
problems.
1.
I try to authenticate users with pam_auth against a RADIUS server with an RSA token. It
seems that the pam module saves passwords from the user. The first authentication
doesn't work, but if you try it again with
Hello,
for a few days I have a strange problem with
squid running under FreeBSD. Although the proxy
is almost not used squid runs all the time with
99% CPU utilization.
I have already rebuilt the complete cache dir and
installed the latest version (2.5.STABLE3), but nothing
changed the situation (rolled
Check your cache.log; the problem may be the cache rebuild.
If your squid process is still using 99% CPU then you should see something in
cache.log, and I'm sure it will be the cache dir rebuild; that's why it's taking
time. How fast the cache can be rebuilt depends on the size of the cache, the
memory, and the processor.
Squid Cache (Version 2.5.STABLE1-20030204)
Red Hat Linux 8.0
I'm receiving this error in cache.log
any clue about this error messages
2003/07/02 15:59:16| fqdncacheParse: No PTR record
Hasan,
Don't worry about it; this is normal behaviour of fqdncache when it does not find
a reverse record for the IP.
--
Best Regs,
Masood Ahmad Shah
System Administrator
Fibre Net (Pvt) Ltd.
What is the size of your cache_dir and cache_mem?
Hermann
So far I'm sticking by my original hunch - that the problem
is the PDC taking too long to respond, and not with Squid
itself. But we'll see what the numbers reveal.
Since upping the # of children I still haven't had any
helperStatefulDefer's, but I am getting invalid callback's and since
I've increased NTLM logging I'm seeing a number of challenge exceeded
max lifetime by xxx seconds.
The challenge exceeded max lifetime messages are probably normal.
Yesterday, I found this in cache.log:
2003/07/01 08:14:59| authenticateDecodeAuth: Unsupported or unconfigured
proxy-auth scheme,
'jcskihfah0jbi|nc% '
FreeBSD 4.7, Squid 2.5S2
I am running Squid 2.5STABLE3 (from source) on RedHat Linux 7.3 with the
Winbind basic and NTLM auth helpers (NTLM
Another piece of info:
The squid_ldap_group is doing -s sub searches just fine!
But in this case I got this module from squid-2.5.STABLE2.
I'd appreciate any help!
Thanks in advance,
EVJ
- Original Message -
From: Estevam Viragh Junior [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Tuesday,
List,
I'm getting a lot of complaints from my users that when in active pages
the connection will just die. This can be duplicated with web mail
clients in MSN, yahoo and hotmail. It usually happens when typing then
sending a message. When sending, the next page will immediately be page
cannot be
I sure have looked. The problem is that many people describe the problem in
different ways, so searching was hard. Thanks for the settings. I will
try these out. I DO NOT want M$ to win this one. I'm trying to decommission
2 M$ Proxy servers.
-Mark
-Original Message-
From: Adam Aube
Why? If it works...
Jim
-Original Message-
From: Mark Pelkoski [mailto:[EMAIL PROTECTED]
Sent: Wednesday, July 02, 2003 1:11 PM
To: Adam Aube; Squid Users Mailing List (E-mail)
Subject: RE: [squid-users] How to fix active page time-outs? PLEASE HELP
I sure have looked. The problem is
I DO NOT want M$ to win this one. I'm trying to decommission
2 M$ Proxy servers.
Even from MS vendors I have heard that MS Proxy is crap (although
they do recommend MS ISA).
Funny little anecdote: a college classmate of mine was trying to
get a MS proxy server (don't know if it was MS Proxy or
I'm trying to decommission 2 M$ Proxy servers.
Why? If it works...
Because MS Proxy works only in the marginal sense of the word.
Even MS vendors I have spoken to say it's crap.
Adam
We have a web site that uses SSL that broke when IE SP-1 came out on XP.
We use it for pricing out parts against other competitors. It is only
broken when using IE 6 SP-1 through M$ Proxy 2.0 AND ISA. Squid fixed
the problem. Ironic... Open source fixing a Microsoft induced problem.
-Mark
On Wednesday 02 July 2003 14.32, Adam Aube wrote:
Since upping the # of children I still haven't had any
helperStatefulDefer's, but I am getting invalid callback's and
since I've increased NTLM logging I'm seeing a number of
challenge exceeded max lifetime by xxx seconds.
The challenge
On Wednesday 02 July 2003 09.10, [EMAIL PROTECTED] wrote:
Hi,
I try to configure squid2.5stable3 in accelerator mode, but I have
some problems.
1.
I try to authenticate users with pam_auth against a RADIUS server with
an RSA token. It seems that the pam module saves passwords from the
user. The
On Saturday 28 June 2003 09.52, Peña, Botp wrote:
Hi Friends,
I'd like to deny downloading of files from common webmails like
yahoo/hotmail. It's the webmail downloads I cannot catch.
I only get this kind of log:
1056815851.164 934 10.1.1.1 TCP_MISS/200 11237 GET
On Monday 30 June 2003 03.11, [EMAIL PROTECTED] wrote:
Hi,
I've been running a transparent cache using WCCP for a few weeks
now. On several occasions I have encountered problems with web site
updates not showing up for users. Hitting refresh in the browser
doesn't work.
See squid.conf.
On Monday 30 June 2003 16.10, Adam Aube wrote:
I didn't mean turn off the cache completely - just stop caching
transparently. Configure the browsers to directly use the cache.
The IE bug only affects transparent caching.
Minor technical note:
The bug is not with IE, the bug is transparent
On Monday 30 June 2003 09.36, Rabie van der Merwe wrote:
cumbersome, can I somehow put all the subnets in one ACL and then
use that ACL in
delay_access?
Yes. Just do it.
Regards
Henrik
--
Donations welcome if you consider my Free Squid support helpful.
On Monday 30 June 2003 10.01, Apostolou, Nicholas [IT] wrote:
Also, what happens when you run out of dnsserver processes?
This will go to 32; I currently have 25 dnsserver processes, and I still
have more users to come on this host.
2003/06/30 14:35:28| WARNING: All dnsserver processes are busy.
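A minimal squid.conf sketch for raising the helper count (32 matches the number mentioned above; tune it to your load):

```
# Run more external dnsserver helper processes so lookups don't queue up.
dns_children 32
```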
On Monday 30 June 2003 18.09, Darling, Jim wrote:
I can get an initial page served through squid (a login form), but
the login fails because squid switches from making https calls to
http calls after the initial page.
Squid never makes such switches, but it is very likely your web
On Tuesday 01 July 2003 04.15, MunFai wrote:
Hi,
I'm getting the following line occasionally in my cache.log:
squidaio_queue_request: WARNING - Queue congestion
If you only get this occasionally then don't worry.
If you get a lot then your hard drive probably cannot keep up with the
On Tuesday 01 July 2003 07.16, Kieran Farrell wrote:
Heyas,
I have a hard time trying to explain this but I'll give it a whirl.
I am sitting behind a 3rd party firewall/proxy that requires me to
log on.
See the login= cache_peer option.
Regards
Henrik
--
Donations welcome if you
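Henrik's suggestion can be sketched in squid.conf like this (the hostname, port, and credentials are placeholders for your upstream firewall/proxy):

```
# Forward all requests through the upstream proxy, authenticating with login=.
cache_peer upstream.example.com parent 3128 0 no-query default login=user:password
never_direct allow all
```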
On Tuesday 01 July 2003 09.03, Li Wei wrote:
acl badURL2 urlpath_regex -i /.mp3 /.wma /.avi /.mpg /.mpeg /.swf
/.asf /.rm /.ram
These regexes are very broad. Regex patterns are not just matched against the
end of a URL; they match the specified sequence of characters
anywhere within the URL,
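A tighter version of that ACL, with the dots escaped and each pattern anchored to the end of the URL path (a sketch; adjust the extension list to taste):

```
# \. matches a literal dot; $ anchors the match at the end of the URL path.
acl badURL2 urlpath_regex -i \.mp3$ \.wma$ \.avi$ \.mpg$ \.mpeg$ \.swf$ \.asf$ \.rm$ \.ram$
http_access deny badURL2
```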
On Tuesday 01 July 2003 17.01, Thomas Scholze/HRZ wrote:
Hello.
The situation:
www.server --SSL-- Squid --SSL-- Client
- Client opens connection to http://some.domain.com (squid)
- squid redirects this request to an internal http-server
- on the webserver is a link to
On Tuesday 01 July 2003 21.06, FRKS KRSZTN wrote:
The reason I want this is that we have to use some internet pages
that require NTLM auth.
NTLM auth cannot be proxied due to inherent design flaws in how
Microsoft NTLM authentication over HTTP operates. It is NOT a HTTP
authentication
On Tuesday 01 July 2003 21.37, Estevam Viragh Junior wrote:
Hello all,
I'm having problems with the squid_ldap_auth module from
squid-3.0.DEVEL-20030629.
It does not seem to work with the -s sub option.
(I'm using this version because I need LDAPv3.)
Everything works fine if I
Also make sure to verify that your samba understands
challenge-response authentication. (see the Squid FAQ).
I hadn't thought of that - that can be a big gotcha if you
don't install Samba from source.
I know from experience that the Samba RPMs in RedHat 7.3
won't support challenge-response
On Tuesday 01 July 2003 19.42, Diego Rivera wrote:
Hello all
I've been combing through the mailing lists trying to find a
conclusive answer to my question, but with little luck as yet.
I did find references to functionality similar to what I need, but
it's supposedly in 2.5 - which I don't
On Monday 30 June 2003 19.45, Adam wrote:
What I did was, in the /etc/system file, the following:
set rlim_fd_cur=2048
You should not increase rlim_fd_cur beyond the default 256 on
Solaris. Doing so may break most 32-bit applications in certain
conditions.
The max you can set as high as
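That advice can be sketched as an /etc/system fragment (the 8192 value is my assumption, not a number from the thread):

```
* Leave the soft limit (rlim_fd_cur) at the default 256 and raise only the
* hard limit; a process like Squid can then raise its own soft limit.
set rlim_fd_max=8192
```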
On Tuesday 01 July 2003 14.21, Marc Elsen wrote:
Dusan Djordjevic wrote:
Hi all,
I am trying to set up a non-caching proxy server. The hardware is
a 4x Xeon server with 1 GB RAM. The OS is Red Hat 9, and I am using
Squid rebuilt from SRPM to use an i686 CPU. As far as I know Squid
does not use more
On Tuesday 01 July 2003 15.52, Bhattacharyya, Somraj wrote:
Hi guys !!
If we replicate a web server and place it as near as possible
to a client then we might not require caching servers. This is a
general statement and feasible only for very large and popular web
servers.
Yes and no.
This
Hi all,
Sorry to bother you.
I'm still having problems trying to block downloads.
I've got a Debian box running as a gateway, 192.168.0.1/255.255.255.0.
This is my configuration file:
http_port 8080
icp_port 3130
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
no_cache deny QUERY
Okay, here are my new settings:
half_closed_clients on
request_timeout 10 minutes
persistent_request_timeout 5 minutes
I opened up a Yahoo account to test. It seems the connection does stay
open up to 5 minutes (better than before), then dies. So, the answer
would be to up the
Hello,
I use the following setup:
192.168.0.1 - first squid (winNT)
192.168.1.1 - second squid (linux)
192.168.0.5 - third squid (linux)
192.168.0.1 and 192.168.1.1 are internet gateways, ~100kbit each
I added
cache_peer 192.168.0.1 parent 3128 3130 round-robin connect-timeout=200ms
cache_peer
On Thursday 03 July 2003 00.34, Bob Arctor wrote:
When I download more than 2 files the load is balanced; each file goes
via its own squid, and its own internet gateway...
but one large file, or an http:// stream, flows via only one
connection...
Yes, this is inherent to the design of the HTTP
Adding such an option to squid would be trivial, and would greatly improve setups
where multiple ISP lines are available. There could be
round-robin-bond=[weight]
and the 'algorithm' would be to measure the incoming data rate, multiply it by the number
of total weights, and do an ICP query of the next peer cache.
I'm still having problems trying to block downloads.
Two issues with your config file:
1) Those entries in magic_words2 should be in this format:
^ftp \.exe$ \.mp3$
Otherwise, you'll match in odd places and block URLs you might not
want to block.
2) You never specifically block the
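The pattern format in (1) can be sketched as a url_regex ACL loaded from a file (the file path is a placeholder; use your own):

```
# /etc/squid/magic_words2 contains one pattern per line, e.g.:
#   ^ftp
#   \.exe$
#   \.mp3$
acl magic_words2 url_regex -i "/etc/squid/magic_words2"
http_access deny magic_words2
```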
All you need to do is add a control rule, same as normal,
for example:
acl MAX_IP max_user_ip -s 1
http_access deny MAX_IP
Try.
- Original Message -
From: Alejandro Javier Pomeraniec [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Wednesday, July 02, 2003 7:46 PM
Subject:
The main problem arises with streams and large files, which can't be split in
either case.
I tried other solutions too.
On Thursday 03 July 2003 01:50, Adam Aube wrote:
adding such option to squid would be trivial, and greatly improve
setups
where multiple ISP lines are available. there could
Thank you all! That worked; I configured Squid as a transparent proxy and it is
now working like a champ!
Also, do I need to forward port 443 to squid as well? Or will squid get all HTTP
requests, being told only to forward port 80?
Now how do I require a user name and password to access the web pages?
On Thursday 03 July 2003 02.10, Bob Arctor wrote:
the main problem arises with streams and large files, which can't be
split in either case.
I tried other solutions too.
These can only be split if you have an agreement with your ISP that
your two connections can be bonded as a single
Thanks Henrik,
That was the problem. As usual - human error, I didn't read the FAQ
carefully enough. I just assumed that because wbinfo and wb_auth
worked, then Samba was setup correctly. Big mistake :-) After
installing Samba from source instead of the RPM packages and making sure
I enabled
Hi Adam,
Yes - that was the problem. I didn't realise that the RedHat RPMs had
not enabled --with-winbind-auth-challenge. Installed Samba from source
and it worked fine :-)
Cheers,
Ken.
-Original Message-
From: Adam Aube [mailto:[EMAIL PROTECTED]
Sent: Thursday, 3 July 2003 5:43 AM
Thank you all! That worked; I configured Squid as a transparent proxy
and it is
now working like a champ!
That's great.
Also, do I need to forward port 443 to squid as well? Or will squid get all HTTP
requests, being told only to forward port 80?
If you want users to go through Squid for SSL connections, I
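The port 443 question can be sketched as a gateway firewall rule (assuming Linux iptables and Squid listening on port 8080; adjust to your interface and http_port). Intercepted SSL cannot be transparently proxied, so 443 is left alone:

```
# Redirect only outbound HTTP (port 80) from the LAN to Squid on 8080;
# port 443 passes through the gateway untouched.
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 8080
```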
I've got Squid 2.5.STABLE3 running on an Ultra 10 Sparc with Solaris 8.
The problem is that whenever I define an acl with a list of URLs to be allowed or
denied inside a file, squid consumes a lot of CPU time and slows down
internet access. Any suggestions?
There is no doubt that such long lists slow down performance.
First, check your cache.log, then delete some unnecessary entries and
optimize your configuration file.
That's all.
- Original Message -
From: Tan Jun Min [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent:
I'll tell you upfront, authentication does NOT work with transparent proxy.
Adam
Why is that so? Can you kindly explain it to me? Thanks.
Matthew
I'm sure this question has been answered before on the list (though
not by me), but I'll answer it anyway.
Transparent Proxying is actually a violation of HTTP because the
browser will assume it is directly connected to the remote server
unless specifically configured otherwise. (For more
My company has a client that is interested in a proxy
server, and they have asked us a few questions. But I am a bit
confused on a definitive answer.
The questions are:
Does it track the hours that users are on the
internet?
Does it track the sites that users go to?
Does it create a log at the end of that
Hi,
On Thu, 03 Jul 2003 10:32:36 +0800
Wei Ming Long [EMAIL PROTECTED] wrote:
I'll tell you upfront, authentication does NOT work with transparent proxy.
Adam
Why is that so? Can you kindly explain it to me? Thanks.
Think about it. HTTP supports two authentication headers. One for the
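For reference, the two authentication headers being contrasted (from HTTP/1.1): an origin server demands credentials with a 401, a proxy with a 407. A transparently proxied browser believes it is talking directly to the origin server, so it never answers the proxy's 407 challenge:

```
HTTP/1.1 401 Unauthorized
WWW-Authenticate: Basic realm="origin server"    <- browser replies with Authorization:

HTTP/1.1 407 Proxy Authentication Required
Proxy-Authenticate: Basic realm="Squid"          <- browser replies with Proxy-Authorization:
```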
To get more useful information I suggest you change the log file format from
the squid native log to the common Apache httpd log format.
You can do this by setting the option emulate_httpd_log on in squid.conf.
This log format will include information like the userid (if the user goes through
authentication), connection
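A one-line squid.conf sketch of that change:

```
# Write access.log in Apache common log format instead of Squid's native format.
emulate_httpd_log on
```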