My /etc/init.d/squid ...I'm doing this already
#!/bin/bash
echo "1024 32768" > /proc/sys/net/ipv4/ip_local_port_range
echo 1024 > /proc/sys/net/ipv4/tcp_max_syn_backlog
SQUID=/usr/local/squid/sbin/squid
# increase file descriptor limits
echo 8192
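For comparison, here is a minimal sketch of such an init wrapper with the redirects spelled out (the paths and values are assumptions based on the snippet above, not a drop-in script):

```shell
#!/bin/bash
# Sketch of an /etc/init.d/squid wrapper (assumed paths/values).

# Widen the local port range and raise the SYN backlog.
echo "1024 32768" > /proc/sys/net/ipv4/ip_local_port_range
echo 1024 > /proc/sys/net/ipv4/tcp_max_syn_backlog

# Raise the file descriptor limit in this shell so the
# squid process it starts inherits the higher limit.
ulimit -HSn 8192

SQUID=/usr/local/squid/sbin/squid
$SQUID -D   # -D skips the startup DNS test
```

Note that the ulimit must be raised in the same shell that launches squid, or the daemon never sees it.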
Sorry to be pounding the list lately, but I'm about to lose it with
these file descriptors...
I've done everything I have read about to increase file descriptors on
my caching box, and now I just rebuilt a fresh clean squid. Before I
ran configure, I did ulimit -HSn 8192, and I noticed that
Hello,
I have always had some problems with file descriptors.
One day, I changed my Linux version to Debian (I don't know if that's the
reason) and I updated Squid. Since then, I have a configuration file under:
/etc/default/squid
On this file, there is:
#
# /etc/default/squid
On Thursday, 23 February 2006, at 01:11, you wrote:
* High latency clients
What do you mean by high latency clients?
The majority of my customers have a network path like:
client-squid-satellite-squid-internet
many of my clients: client-[radio line {12,34,54}mbps]-squid-internet
100
Hi,
I have a question about CONNECT statements. Squid doesn't try to
interpret the contents of CONNECT statements - how could it, when the
contents are encrypted? - and some applications are abusing that fact by
using CONNECT statements to tunnel non-SSL protocols. The biggest
offender is Skype
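The usual mitigation is to restrict which ports CONNECT may reach; the stock squid.conf ships ACLs along these lines (a sketch of the common default, not a complete policy):

```
acl SSL_ports port 443 563
acl CONNECT method CONNECT
# Refuse CONNECT tunnels to anything but the well-known TLS ports;
# this blocks tunneling arbitrary protocols to arbitrary ports.
http_access deny CONNECT !SSL_ports
```

This won't stop an application that tunnels over port 443 itself, but it does stop tunnels to arbitrary destinations.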
On 21.02 11:57, Denis Vlasenko wrote:
On Tuesday 21 February 2006 10:31, Matus UHLAR - fantomas wrote:
On 21.02 12:33, Ow Mun Heng wrote:
I've got a squid cache running as transparent proxy on an _very_ old
machine.
Pentium 133 MHz
128MB Ram
20GB Hard Disk
1GB Cache in aufs
On 21.02 10:51, Steve Brown wrote:
How is there authentication without credentials? I have misunderstood
your setup. What are you referring to when you say authentication? The
knee-jerk reaction is to assume a username and password are
authenticating...
Yes there is a user/pass.
On 23.02 09:41, Raj wrote:
After I upgrade the memory to 2GB, can I increase the cache_mem value
to 256MB? At the moment it is 64MB.
Funny, given that Kenny said it's expandable only to 384 MB (not much,
although better than nothing).
--
Matus UHLAR - fantomas, [EMAIL PROTECTED] ;
On 22.02 09:50, Nolan Rumble wrote:
I would just like to know: when downloading a file from a site, say for
example a link like http://host1.example.com/test.zip, and the file
has a size of say 300kb (and timestamp 2006/01/01?), and then downloading
the same file from another (mirror) site, say
Yes my program talks with Windows 2003 AD.
On 22.02 17:31, Mark Elsen wrote:
Please (again!) keep discussions in the same original thread.
does hypermail on squid-cache.org compare Thread-Topic: and Thread-Index:
headers? Note that lame Outlook and Exchange use these headers instead of
On 22.02 23:13, Tomasz Kolaj wrote:
I have observed very low performance. On 2x 64-bit Xeon 2.8GHz, 2GB DDR2, and 2x
WD Raptor drives, Squid 2.5.STABLE12 can answer at most 120 requests/s. At 115 r/s
there is 97-98% usage of the first processor; the second is unusable for squid :/. I have
two cache_dirs (aufs), one per disk.
Hi,
I have seen in squid 2.5 stable10 that in many places if I give a 0
value, it means no limit.
For example: connect_timeout 0 (it means no limit on the connect timeout).
But I didn't find any documentation about it. Can anybody tell me which
directives treat 0 as no limit?
Thanks
Mohinder
On 2/22/06, Franco, Battista [EMAIL PROTECTED] wrote:
Hi
I use squid ldap users authentication.
From my client PCs, every time I start IE I need to enter a username and
password.
Is it possible to configure squid user and password popup with a
checkbox to permit to save password?
So next time
On Thursday, 23 February 2006, at 11:32, you wrote:
On 22.02 23:13, Tomasz Kolaj wrote:
I have observed very low performance. On 2x 64-bit Xeon 2.8GHz, 2GB DDR2, and 2x
WD Raptor drives, Squid 2.5.STABLE12 can answer at most 120 requests/s. At 115 r/s
there is 97-98% usage of the first processor; the second is unusable for
On Wednesday, 2006-02-15, at 20:12 -0500, Joe Commisso wrote:
I am trying to get squid working and need help since I have been at this
for a long time.
I am including my squid.conf file and also the output from 'squid -k
parse' and 'squidclient ...'
When I put my http proxy information in firefox,
While you are entering the username and password, you can save the
password by enabling the checkbox below the password field. This will
save your username and password and when you open the browser it will
not ask for the authentication.
Note: You cannot use authentication with Transparent proxy
On Thursday, 2006-02-16, at 11:53 +0100, brak danych wrote:
2006/02/15 13:52:02| WARNING: Forwarding loop detected for:
Client: 127.0.0.1 http_port: 127.0.0.1:80
REPORT http://clown.domain.com:81/svn/SwordFish/Trunk/SwordFish/SwordSite
HTTP/1.0
So it's not about access rights but about a
On Thursday, 2006-02-16, at 15:36 -0600, Fernando Rodriguez wrote:
Let's say you are using squid with a redirector that matches users and IP
addresses.
You have your acl where you match the request of a client based on its IP
address; then squid sends the request to a redirector for
On Wednesday, 2006-02-22, at 10:16 -0300, Oliver Schulze L. wrote:
and in the problematic squid server I see:
1140566460.404 2060 192.168.2.90 TCP_MISS/100 123 POST
http://setiboincdata.ssl.berkeley.edu/sah_cgi/file_upload_handler -
DIRECT/66.28.250.125 -
What does TCP_MISS/100 mean? As I
Before compiling squid, try changing the value of the __FD_SETSIZE in
the /usr/include/bits/typesizes.h file as
#define __FD_SETSIZE 8192
and then in the shell prompt
ulimit -HSn 8192
and try
echo "1024 32768" > /proc/sys/net/ipv4/ip_local_port_range
then compile and install squid
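Afterwards you can verify that the limit actually took effect, both in the shell and inside squid itself (a sketch; it assumes squidclient is installed and squid is running):

```shell
# Hard and soft descriptor limits of the current shell
ulimit -Hn
ulimit -Sn

# What a running squid reports on the cachemgr "info" page
squidclient mgr:info | grep -i 'file desc'
```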
On Monday, 2006-02-20, at 15:15 +0100, Eugen Kraynovych wrote:
Does anybody here have a full list of FTP commands which SQUID 2.5 can
support (for instance 2.5.STABLE12; PUT, GET, DELE etc.)? The same for SQUID 3?
Squid acts as an HTTP-FTP gateway on requests for ftp:// URLs.
The currently supported HTTP
On Tuesday, 2006-02-21, at 10:51 -0600, Steve Brown wrote:
Yes there is a user/pass. Everyone is saying that the browser
shouldn't indiscriminately provide credentials, which I agree with.
However, in the setup I am proposing, the browser isn't submitting
credentials. The traffic is intercepted
On Tuesday, 2006-02-21, at 12:42 +0100, Christian Herzberg wrote:
Hi Bart,
Squid isn't crashing. Squid is waiting for whatever. You can wait one hour
without any response from squid. After a restart of squid everything is
working fine.
The same squid on a system with kernel 2.4 is working for
On Tuesday, 2006-02-21, at 09:55 -0500, Shoebottom, Bryan wrote:
Hello,
I am running a WCCP enabled interception proxy and want the users to be
completely unaware that they are going through a proxy. I tried using the
following directive, but when trying to get to a website that doesn't
On Wednesday, 2006-02-22, at 13:25 +0530, mohinder garg wrote:
Hi,
Can anybody tell me what this acl does? Does it block downloading or
uploading?
And how can I test it?
It matches the content-type of the HTTP request data going TO the
requested web server.
It is not really related to uploading,
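For illustration, a req_mime_type ACL of the kind being discussed might look like this in squid.conf (the ACL name and pattern here are hypothetical):

```
# Deny requests whose request body claims to be a zip archive
acl zip_upload req_mime_type -i ^application/zip$
http_access deny zip_upload
```

You can test it by sending a POST through the proxy with a matching Content-Type request header.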
On Thursday, 2006-02-23, at 17:24 +0530, mohinder garg wrote:
Hi,
I have seen in squid 2.5 stable10 that in many places if I give a 0
value, it means no limit.
where this is mentioned in the squid.conf.default comments yes.
For example: connect_timeout 0 (it means no limit on the connect timeout).
The problem is that, although I've read (in some old forums, 2003 and earlier)
that no DELETE (DELE) through SQUID is possible, it works for me in 2.5
STABLE5 and STABLE12.
I made a dump of the network traffic with Ethereal; it shows:
Client-SQUID HTTP-Request DELE Filename
On 23.02 14:25, Tomasz Kolaj wrote:
On Thursday, 23 February 2006, at 11:32, you wrote:
On 22.02 23:13, Tomasz Kolaj wrote:
I have observed very low performance. On 2x 64-bit Xeon 2.8GHz, 2GB DDR2, and 2x
WD Raptor drives, Squid 2.5.STABLE12 can answer at most 120 requests/s. At 115 r/s
there is 97-98% usage of
I can't decide if this is a squid problem or an iptables problem, so I'm
asking here in case someone can point me in the right direction.
-
Software/Environment details:
-
jekyl:/home/david# uname -a
Linux jekyl 2.4.27-2-686 #1 Wed Aug 17
I think educating users (yes, there are 2 different passwords) would be most
effective.
Believe me, I wish I could. But these are sales people, and as I
said, some of them aren't very bright.
1. give users the same password for mail and proxy and probably fetch them
from the same source
Henrik,
I realize I can do this, but the user will still receive a page. Is
there a way to have the client act as though it weren't going through a
cache?
Thanks,
Bryan
-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED]
Sent: February 23, 2006 9:52 AM
To:
Hopefully that's just a misspelling. ;o)
Why? ;) Did I do something wrong?
I'm testing the epool patch like you said;
What he meant, I think, was that it's poll, not pool... therefore,
--enable-epool won't do a thing.
I can't decide if this is a squid problem or an iptables problem, so I'm
asking here in case someone can point me in the right direction.
-
Software/Environment details:
-
jekyl:/home/david# uname -a
Linux jekyl 2.4.27-2-686 #1 Wed
-Original Message-
From: Tomasz Kolaj [mailto:[EMAIL PROTECTED]
Sent: Saturday, January 28, 2006 10:01 AM
To: Chris Robertson
Subject: Re: [squid-users] low squid performance?
On Thursday, 23 February 2006, at 01:11, you wrote:
* High latency clients
What do you mean high
So, I rebuilt squid with the epoll patch, hoping to get cpu usage down
some...now I'm seeing this a LOT in the cache.log (more than once per
minute)
storeClientCopy3: http://xxx..com/xxx/abc.xyz - clearing
ENTRY_DEFER_READ
Should I 86 the patch? (By 86 I mean get rid of it.)
-Original Message-
From: Gregori Parker [mailto:[EMAIL PROTECTED]
Sent: Thursday, February 23, 2006 8:49 AM
To: squid-users@squid-cache.org
Subject: [squid-users] post epoll...
So, I rebuilt squid with the epoll patch, hoping to get cpu usage down
some...now I'm seeing this a
Is it possible to have my ntlm users go around one
domain? We can't seem to get a state web site (which
uses a weird front end to its client... but it ends
up on the web) to go through the proxy. When we sniff
the traffic locally, it is popping up a 407, but there
isn't any way to log in.
I
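If the goal is simply to exempt that one site from proxy authentication, the usual pattern is a dstdomain ACL allowed before the auth rule (the domain and ACL names here are hypothetical):

```
acl statesite dstdomain .example.gov
acl ntlm_users proxy_auth REQUIRED
# Allow the problem site before anything that demands credentials,
# so no 407 challenge is ever sent for it.
http_access allow statesite
http_access allow ntlm_users
```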
Hello,
I am using Squid to collect log data as part of a user study.
My problem is that logging headers (log_mime_hdrs) together with the
regular logging that takes place generates huge amounts of data. I am
trying to minimize that load.
I am exclusively interested in logging:
- document
Interesting...
Here's what I did (downloaded patch from here
http://devel.squid-cache.org/cgi-bin/diff2/epoll-2_5.patch?s2_5):
# tar zxvf squid.tar.gz
# mv squid-2.5STABLE12/ squid
# patch -p0 < epoll-2_5.patch
# cd squid
# ulimit -HSn 8192
#
-Original Message-
From: Gregori Parker [mailto:[EMAIL PROTECTED]
Sent: Thursday, February 23, 2006 10:03 AM
To: squid-users@squid-cache.org
Subject: RE: [squid-users] post epoll...
Interesting...
Here's what I did (downloaded patch from here
Ok, I need more detail - it doesn't make a lot of sense to me. I ran
./bootstrap.sh where you said, and it told me this:
Trying autoconf (GNU Autoconf) 2.59
autoheader: WARNING: Using auxiliary files such as `acconfig.h', `config.h.bot'
autoheader: WARNING: and `config.h.top', to define
-Original Message-
From: Gregori Parker [mailto:[EMAIL PROTECTED]
Sent: Thursday, February 23, 2006 10:53 AM
To: squid-users@squid-cache.org
Subject: RE: [squid-users] post epoll...
Ok, I need more detail - it doesn't make a lot of sense to
me. I ran ./bootstrap.sh where you
On Thursday, 23 February 2006, at 18:32, you wrote:
With epoll, 100 Req/sec puts my CPU at 23%. It made a huge difference.
I still have the same CPU usage... maybe I applied the wrong patch?
More memory and more spindles (drives) certainly won't hurt, but you seem
to be CPU limited. Taking care
Thank you very much Chris - I think that did the trick.
When I run bootstrap again, it seems successful...but can I ignore these
warnings?
configure.in:1392: warning: AC_TRY_RUN called without default to allow cross
compiling
configure.in:1493: warning: AC_TRY_RUN called without default to
-Original Message-
From: Tomasz Kolaj [mailto:[EMAIL PROTECTED]
Sent: Thursday, February 23, 2006 11:17 AM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] low squid performance?
On Thursday, 23 February 2006, at 18:32, you wrote:
With epoll, 100 Req/sec puts my CPU at
Thanks Chris - that did the trick
Also, thanks to Squidrunner Support Team - your advice resolved my file
descriptor issue.
Bigups to all you guys, thanks!
-Original Message-
From: Chris Robertson [mailto:[EMAIL PROTECTED]
Sent: Thursday, February 23, 2006 12:17 PM
To:
On Thursday, 2006-02-23, at 11:20 -0500, Shoebottom, Bryan wrote:
I realize I can do this, but the user will still receive a page. Is
there a way to have the client act as though it weren't going through a
cache?
Nope.
Regards
Henrik
To the List Manager:
I have unsubscribed several times, in each case I get the confirmation
reply, but a few hours later the squid-users stuff starts coming in
again.
Very odd. Could you please check the list and unsubscribe me. thanks.
jp
On Thursday, 2006-02-23, at 16:15 +0100, Eugen Kraynovych wrote:
Important: it works only with the HTTP CONNECT client method, not with an
HTTP proxy with FTP support (TotalCommander gets a Not Implemented response
to DELETE), nor anything else.
If you use CONNECT there is no HTTP involved other than the
On Thursday, 2006-02-23, at 10:54 -0500, David Clymer wrote:
I am having problems with networks connected via IPSec VPN. I am using
native 2.6 ipsec, so there are no interfaces associated with individual
VPN connections as when using KLIPS (the openswan IPSec implementation).
The native 2.6 ipsec
On Thursday, 2006-02-23, at 19:52 +0100, David wrote:
Hello,
I am using Squid to collect log data as part of a user study.
My problem is that logging headers (log_mime_hdrs) together with the
regular logging that takes place generates huge amounts of data. I am
trying to minimize that load.
I
Well, everything is rebuilt, and my file descriptors are OK, but I'm still
seeing the storeClientCopy3: http://whatever - clearing ENTRY_DEFER_READ
Any more ideas? Or are these safely ignored? Or are they BAD?!?
-Original Message-
From: Gregori Parker
Sent: Thursday, February 23,
On Thursday, 2006-02-23, at 14:26 -0800, Gregori Parker wrote:
Well, everything is rebuilt, and my file descriptors are OK, but I'm still
seeing the storeClientCopy3: http://whatever - clearing ENTRY_DEFER_READ
Any more ideas? Or are these safely ignored? Or are they BAD?!?
Looks like just
FYI, I set half_closed_clients to off and that seemed to get rid of like 95% of
those messages.
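For anyone following along, that is a single squid.conf directive:

```
# squid.conf: don't keep polling connections the client has half-closed
half_closed_clients off
```

It can be applied with `squid -k reconfigure` rather than a full restart.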
-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED]
Sent: Thursday, February 23, 2006 3:03 PM
To: Gregori Parker
Cc: squid-users@squid-cache.org
Subject: RE: [squid-users]
Has anybody come across this problem of getting squid_ldap_auth to get
users off of an NDS ldap server? ldapsearch can connect to it fine, and I
can see the users, but when I use it to authenticate with squid, it gives me
an ERR Success message. Also, do you know where or how I can turn the logs
on to
Is it possible to have my ntlm users go around one
domain? We can't seem to get a state web site (which
uses a weird front end to its client... but it ends
up on the web) to go through the proxy. When we sniff
the traffic locally, it is popping up a 407, but there
isn't any way to log in.
While you are entering the username and password, you can save the
password by enabling the checkbox below the password field. This will
save your username and password and when you open the browser it will
not ask for the authentication.
- Did you try this ?
M.
All of the machines have had a full process restart and they are
still experiencing the same problems, so it looks as if
half_closed_clients wasn't the source of the problem.
-Mike
On Feb 22, 2006, at 1:12 PM, Mike Solomon wrote:
I added this line to the config on two of my hosts, but it
On 2/24/06, Mark Elsen [EMAIL PROTECTED] wrote:
While you are entering the username and password, you can save the
password by enabling the checkbox below the password field. This will
save your username and password and when you open the browser it will
not ask for the authentication.
Reading this, would it be possible to not require AUTH for a certain MIME
header?
http_access allow header_type
http_access allow ntlm_users (provided proxy AUTH ACL is named
'ntlm_users')
Sorry for butting in, just wondering..
Thanks
- Original Message -
From: Mark Elsen
Reading this, would it be possible to not require AUTH for a certain MIME
header?
No, because in that case the object (webserver) header info has to
be looked at, after it has been received from the remote server.
M.
Thanks.
On 2/23/06, Henrik Nordstrom [EMAIL PROTECTED] wrote:
On Wednesday, 2006-02-22, at 13:25 +0530, mohinder garg wrote:
Hi,
Can anybody tell me what this acl does? Does it block downloading or
uploading?
And how can I test it?
It matches the content-type of the HTTP request data going
Hi,
When you say less than 1 sec, isn't it exactly zero?
Thanks
Mohinder
On 2/23/06, Henrik Nordstrom [EMAIL PROTECTED] wrote:
On Thursday, 2006-02-23, at 17:24 +0530, mohinder garg wrote:
Hi,
I have seen in squid 2.5 stable10 that in many places if I give a 0
value, it means no limit.
where