What do the auth_param lines in squid.conf look like?
How did you create the username:password file? Is it readable by the UID
Squid runs as?
How is the browser configured? Were you running a transparent proxy before
you turned authentication on?
Adam
I understand that you configured Squid based on the FAQ - however, the FAQ
you referred to does not have an exact section on how to configure
ncsa_auth. That is why I asked what the auth_param lines were.
Somehow I managed to miss you stating that you tested ncsa_auth from the
command line, so
-Original Message-
From: Fred [mailto:[EMAIL PROTECTED]]
Sent: Thursday, June 19, 2003 1:05 PM
To: Adam Aube
Cc: [EMAIL PROTECTED]
Subject: RE: [squid-users] Browser fails to prompt for authentication
I made sure that the password file is owned by the user squid runs as
and is readable by that user.
Prior to this installation of squid
We do it to work around stupid programs that don't understand proxy
authentication. It's very straightforward:
- Assume that acl LocalNet are the IP addresses allowed to use proxy
- Create an acl NoAuth with dst or dstdomain targets that will not require
authentication
- Setup http_access like
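The truncated example can be sketched roughly like this (the acl definitions are illustrative assumptions; only the names LocalNet and NoAuth come from the steps above):

```
# Illustrative definitions - adjust to your network
acl LocalNet src 192.168.0.0/24
acl NoAuth dstdomain .broken-app.example.com
acl Auth proxy_auth REQUIRED

# Requests to NoAuth destinations skip authentication
http_access allow LocalNet NoAuth
# Everything else from LocalNet must authenticate
http_access allow LocalNet Auth
http_access deny all
```

The order matters: because the NoAuth rule is checked first, programs that can't do proxy authentication still work for the listed destinations.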
I would think the biggest factor in how many concurrent connections Squid
can support is your hardware, not your config file.
Adam
-Original Message-
From: NGUYEN Ngoc Can [mailto:[EMAIL PROTECTED]
Sent: Friday, June 20, 2003 2:33 PM
To: [EMAIL PROTECTED]
Subject: [squid-users]
to post it to the list for
reference.)
-Original Message-
From: NGUYEN Ngoc Can [mailto:[EMAIL PROTECTED]
Sent: Friday, June 20, 2003 2:54 PM
To: Adam Aube
Subject: RE: [squid-users] performant squid.conf example
Hello Adam,
Thank you for your rapid answer. You're right, I think, but the
Just to be sure we are talking about the same thing, could you tell me:
- the Squid version you are using;
Squid 2.5 Stable 2
- for your authentication, do you have the 'acl auth_user proxy_auth
REQUIRED' configuration line ?
Yes. The crucial part is putting the auth_user
When I check the URL, my private IP address appears in my web browser.
Normally, it should not.
When I check with another proxy, the official NAT IP address appears,
which is correct!
Squid will normally pass an X-Forwarded-For header with the client's
actual IP address, which can be
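If the goal is to keep the private address out of that header, squid.conf has a directive for it (a sketch; this replaces the client IP with "unknown" rather than removing the header entirely):

```
# Send "X-Forwarded-For: unknown" instead of the client's real IP
forwarded_for off
```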
I'd like to deny downloading of files fr common webmails like
yahoo/hotmail. It's the webmail downloads I cannot catch.
I only get this kind of log:
1056815851.164    934 10.1.1.1 TCP_MISS/200 11237 GET http://us.f138.mail.yahoo.com/ym/ShowLetter? - DEFAULT_PARENT/10.254.254.6
I've been running a transparent cache using WCCP for a few weeks now. On
several occasions I have encountered problems with web site updates not
showing up for users. Hitting refresh in the browser doesn't work.
What browser are your clients using? Maybe there's a workaround.
If I have the
What browser are your clients using? Maybe there's a workaround.
Mostly IE, I presume.
That's what I thought, but I didn't want to assume. There is a workaround
for an IE refresh bug with transparent proxying you can set in your
squid.conf:
ie_refresh on
You could try turning off
Windows NT4 PDC domain, Windows2k clients, Redhat Linux box with Squid
2.5 as firewall/proxy
We used squid/winbind NTLM authentication (need username based acls).
However, this summer the IT management plans to change to a Samba 2.2 PDC.
How can I do this?
Samba imitates an NT4 PDC to near
This is with Squid 2.5.STABLE3 and Samba 2.2.8a. NTLM authentication is
working for the most part, but every so often a user is prompted with a
basic password for some reason.
Two possible causes:
1) Not enough NTLM auth helpers
2) Response from auth server is taking too long
In squid.conf
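The truncated advice presumably points at the NTLM auth_param settings; a sketch of raising the helper count (the numbers are illustrative, not recommendations):

```
# More helpers reduce the chance of falling back to basic auth
# when all NTLM helpers are busy
auth_param ntlm children 15
auth_param ntlm max_challenge_reuses 0
auth_param ntlm max_challenge_lifetime 2 minutes
```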
Thank you for the info, I will keep an eye on this. To allow the
connection to squid's manager required a reconfigure, and it looks like
the ntlm statistics started over so I don't have the last batch of
numbers. Now, however, I know what to look for, and in the meantime
I've also upped
So far avg service time for NTLM authenticators is 0 msec. Here are the
current # of ntlm requests per helper for 15 clients: 281, 100, 46, 23, 6,
2, 0, 0...
If the current #'s are anything to go by, my first 3 handle 93% with the
first handling 61%, but I'll keep my eye on it. Hopefully
Actually you are right I need this to get to some intranet sites.
These are sites on YOUR intranet? Then you don't need to use Squid.
But anyway it seems to me that all the work with getting squid working was
a waste of time
Not necessarily - use the Squid setup to cache Internet sites. Most
Since upping the # of children I still haven't had any
helperStatefulDefer's, but I am getting invalid callbacks, and since
I've increased NTLM logging I'm seeing a number of "challenge exceeded
max lifetime by xxx seconds" messages.
The challenge exceeded max lifetime messages are probably normal.
Yesterday, I found this in cache.log:
2003/07/01 08:14:59| authenticateDecodeAuth: Unsupported or unconfigured
proxy-auth scheme,
'jcskihfah0jbi|nc% '
FreeBSD 4.7, Squid 2.5S2
I am running Squid 2.5STABLE3 (from source) on RedHat Linux 7.3 with the
Winbind basic and NTLM auth helpers (NTLM
I'm getting a lot of complaints from my users that when in active pages
the connection will just die. This can be duplicated with web mail
clients in MSN, yahoo and hotmail. It usually happens when typing then
sending a message. When sending, the next page will immediately be page
cannot be
I do NOT want M$ to win this one. I'm trying to decommission
2 M$ Proxy servers.
Even from MS vendors I have heard that MS Proxy is crap (although
they do recommend MS ISA).
Funny little anecdote: a college classmate of mine was trying to
get a MS proxy server (don't know if it was MS Proxy or
I'm trying to decommission 2 M$ Proxy servers.
Why? If it works...
Because MS Proxy works only in the marginal sense of the word.
Even MS vendors I have spoken to say it's crap.
Adam
---
Outgoing mail is certified Virus Free.
Checked by AVG anti-virus system (http://www.grisoft.com).
Version:
induced problem.
-Mark
-Original Message-
From: Adam Aube [mailto:[EMAIL PROTECTED]
Sent: Wednesday, July 02, 2003 11:31 AM
To: [EMAIL PROTECTED]
Subject: RE: [squid-users] How to fix active page time-outs? PLEASE HELP
I'm trying to decommission 2 M$ Proxy servers.
Why? If it works
Also make sure to verify that your samba understands
challenge-response authentication. (see the Squid FAQ).
I hadn't thought of that - that can be a big gotcha if you
don't install Samba from source.
I know from experience that the Samba RPMs in RedHat 7.3
won't support challenge-response
I'm still having problems trying to block downloads
Two issues with your config file:
1) Those entries in magic_words2 should be in this format:
^ftp \.exe$ \.mp3$
Otherwise, you'll match in odd places and block URLs you might not
want to block.
2) You never specifically block the
Okay, here are my new settings:
half_closed_clients on
request_timeout 10 minutes
persistent_request_timeout 5 minutes
I opened up a Yahoo account to test. It seems the connection does stay
open up to 5 minutes (Better then before), then dies. So, the answer
would be to up the
Adding such an option to Squid would be trivial, and would greatly improve
setups where multiple ISP lines are available. There could be a
round-robin-bond=[weight]
option, and the 'algorithm' would be to measure the incoming data rate,
multiply it by the number of total weights, and do an ICP query of the
next peer cache
Thank you all! That worked - I configured Squid for transparent proxy
and it is now working like a champ!
That's great.
Also, do I need to forward port 443 to Squid as well? Or will Squid get all
HTTP requests if it's only told to forward port 80?
If you want users to go through Squid for SSL connections, I
I'm sure this question has been answered before on the list (though
not by me), but I'll answer it anyway.
Transparent Proxying is actually a violation of HTTP because the
browser will assume it is directly connected to the remote server
unless specifically configured otherwise. (For more
I came in this morning and tested this config again, and it is not
working. I restarted the Squid service and this did not help. It looks
like the timeout is back to 1 minute, but the conf file has a
persistent_request_timeout of 30 minutes which was working yesterday. I
don't understand WHY
My wb_group dies from time to time. It was built from 2.5.2. Can I just
compile the wb_group external helper from 2.5.3 and replace the old one,
or do I have to re-compile and replace squid also with 2.5.3 for the new
wb_group to work? TIA.
You might as well just fully install 2.5STABLE3,
Is there a command that you can type to find out if Samba RPM
has -with-winbind-auth-challenge compiled in?
Sure - test winbind from the command line by running
wbinfo -a username%password
You should see both a plaintext and challenge-response test result.
If you don't see the
In squid 2.4 I had to disable internal_dns and run dnsserver in order
to check the /etc/hosts file.
It seems squid 2.5 behaves the same. Please advise!
Squid 2.5 has a directive that allows you to set an /etc/hosts file
to check - look in the default config file for it.
I think it's something
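The directive being hinted at is, I believe, hosts_file (a sketch, assuming the standard location):

```
# Have Squid's internal DNS consult the system hosts file first
hosts_file /etc/hosts
```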
Thanks Henrik for the fast reply. I implemented your work-around and
seems to work fine. I will look forward to the fix on this.
Glad to see you finally got that mess sorted out.
Adam
I have several small sites that would like to be able to monitor users'
access. The problem I am running into is that most of them do not want to
have the user enter a password when the browser opens up. Doing it by IP
normally would not be all that bad since it can be tracked down to an
How can i make user credentials to expire on a specified time ?
I've tried credentialsttl with no result.
Depends on what you mean by expire. The credentialsttl setting will make
Squid force the user to reauthenticate after a set time period.
Since that setting didn't give you what you want,
If I change the FULANOS acl to
acl FULANOS proxy_auth REQUIRED
What will happen? Will Squid allow access to anyone it can
authenticate via LDAP, regardless of my user list in Squid?
Correct. REQUIRED will match any user Squid can successfully authenticate.
There is an external_acl
I need squid to query the helper again to any user that has been
inactive for a period of time, let's say 20 minutes.
As Henrik has already noted:
1) credentialsttl will make Squid re-verify the user's credentials
with the auth helper at a given interval.
2) credentialsttl will NOT make the
I'm using the wb_ntlmauth helper, and I've noticed in my access.log there
will often be two TCP_DENIED requests for an object with no user
information, then a successful TCP request for the same object, this time
with the user information.
Example:
1057675581.034 19 192.168.127.100
I'm using the wb_ntlmauth helper, and I've noticed in my access.log there
will often be two TCP_DENIED requests for an object with no user
information, then a successful TCP request for the same object, this time
with the user information.
Now I feel extremely foolish. Google hit a thread on
I'm currently using Squid 2.4 in a Debian Woody, but I would like to
install
Squid 2.5 STABLE3 so I can take advantage of NTLM authentication, without
having to move to Debian testing.
You can get a single package from testing without converting your entire
system.
See the Debian docs for
i am using authenticate_ip_ttl 20 minutes
and max_user_ip -s 1 but the problem persists.
what could be wrong?
The items you include from your squid.conf look good
(though a little over-complex). Could you post your
entire squid.conf (minus comments, of course)?
Adam
I would recommend you rewrite this section:
acl me src 192.168.0.0/24
http_access deny !me
acl authenticated proxy_auth REQUIRED
http_access deny !authenticated
acl onlyonce max_user_ip 1
http_access deny onlyonce
http_access allow authenticated
http_access deny all
to this:
[other acl lines]
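A sketch of where that rewrite seems to be heading (acl names taken from the quoted config; the idea is to drop the redundant deny !authenticated line and let the allow rule do the work):

```
acl me src 192.168.0.0/24
acl authenticated proxy_auth REQUIRED
acl onlyonce max_user_ip 1

http_access deny !me
http_access deny onlyonce
http_access allow authenticated
http_access deny all
```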
I currently have an ACL setup (using regex -i) to block certain
files from
being viewed or downloaded (eg EXE, ZIP etc) which effects everyone
using
the cache. I now have the requirement to allow certain users from
accessing
some websites which require the unblocking of ZIP attachments so
i
I've got a beautifully working Squid server with NTLM then basic auth, so
Windows automatically authenticates and Linux can use basic auth.
Sweet, isn't it?
1. We have a number of users that use Adobe Web Capture to PDF files. With
basic auth only turned on, it prompts for a password like it
I am trying to get squid to prompt me for password before granting
access
to the internet.
The whole point of NTLM auth is not having to enter the password.
If you want the password prompt, you need to use basic auth and
the wb_auth helper.
Adam
At Friday, 11 July 2003, [EMAIL PROTECTED] wrote:
I don't want the password prompt, but I do want people with Linux boxes,
where NTLM won't work, to still use basic. This also works if in the conf
you have NTLM first and then basic. Very, very nice.
just the adobe thing is the pain. that's why
damn. sorry. aarrghhh. It's a Friday here and I'm looking forward to the
w/end.
Don't worry about it - I did the same thing myself, once (though
not on this list).
Enjoy the upcoming weekend - it's only 9 PM Thursday here.
Adam
Please excuse my ignorance. Would passwords be passed in clear text
using
basic auth? Is there an authentication scheme that works without
clear text.
There are 3 types of auth supported in Squid:
1) Basic auth
- Works with virtually any browser
- Password is sent in clear text
-
Mozilla 1.4 claims to support NTLM authentication.
That would rock. I hope it happens.
Adam
Mozilla 1.4 claims to support NTLM authentication.
That would rock. I hope it happens.
Should have checked the Mozilla site before responding - 1.4 has
been out for a week and a half.
Too bad it only works for Windows, but then it would probably be
very difficult to implement under Linux.
A
Digest, per se, doesn't require clear text password storage.
Squid's supplied helper uses cleartext, but that is simply -a-
implementation. Squid itself never needs the cleartext password.
Technically, yes - digest auth does not require the password to be
stored in cleartext. However, as you
NTLM over HTTP is fundamentally broken in its design and should
never have seen the light of day. A classic "do it our way without
regard to standards" invention by Microsoft.
Yes, NTLM is horribly broken - just like almost everything developed by
Microsoft. The only reason I recommend it is
Well, there's a little project then :}. In point of fact, in 3.0 squid
can read pre-digested passwords in the supplied helper.
Well, that's good news.
You completely misunderstand how digest auth works. See RFC 2617 for the
spec..
Based on the info you provide here, I think I did understand
SSO is -not- a property of NTLM. It's a property of the OS and the
browser. It's fully possible to do SSO with basic (bad because of
password leak issues) and Digest (quite easy, using MD5-sess).
As I acknowledged later in the message, it can be done with basic or
digest. However, only NTLM
I'm going to try to summarize the discussion thus far.
NTLM auth is horribly broken, however:
1) It's currently the only auth scheme you can get SSO with
2) It does not send the password in the clear over the wire
Therefore, if you are already running a Windows domain on your
network, you
Actually the reason, that I want the popup is because I want selective
users to able to access the web. Not just anyone that walks up to
someone's workstation and being able to browse the net. Also I would
like to keep track user's authentication for accounting purpose.
I would recommend you
I googled the net for wb_group but can't seem to find a place to
download it. Is it included with Squid-2.5.STABLE1? I take
it I need to compile Squid again if it's not found in
/usr/lib/squid/?
You will need to recompile Squid. Check in the helpers/external_acl
folder of the Squid source for
I am using the RPM that comes with Red Hat 9. I set up NCSA
authentication and I am getting a login and password prompt
from the browser when I hit the proxy, but it does not
authenticate (I did create a passwd file using htpasswd). I
thought it was configured correctly until I saw Too few
the passwd file is chmod 777 at this point and owned by the squid
user. I checked to make sure I had auth_param children 5, and I upped it
to 15.
Not sure what is causing this the output of the ps -ax line was 1
and here is my conf file, it is on an internal lan and I am only using
it to
If I set up delay pools as follows...
users1 = 50%
users2 = 50%
If users2 is not using the system, do users1 get 100% bandwidth or are
they still limited to 50% ?
Depends on your delay pool setup. You should post the actual acls used to
setup your delay pools, then we can give you some
When I hit the Cache Manager Stats link which goes to
/squid/cachemgr.cgi I get :
Cache Host : localhost
Cache Port : 3128
Manager Name :
Password :
I've tried every possible user on the system as well as
cache password - each try results in connect:
(111) Connection refused
Leave
Would it be possible to give a speed limit (to XX Kb/s) for downloading
from http://some.url.com and give a other speed limit to cached files
Check out delay pools in the Squid FAQ.
Adam
Is there any way for me to specify another path for
the directories Squid will use?
Sure. Use configure like this:
./configure --prefix=/some/path [other options]
where /some/path is writeable by you. Note that you
won't be able to bind Squid to a port lower than 1024,
and you may encounter
I have started my Squid, and now I am trying to test it. So
I type this command in the bin directory:
./squidclient -h localhost -p 8080 http://localhost:8000
Because Squid is installed on localhost, I specify
the HTTP port as 8080, and I have Apache running at
port 8000 on localhost too. But it
Now trying to get Squid work with NT domain group authentication.
Any updated docs I can use to make it possible ?.
Search the archive for winbind_group and wb_group.
Another thing that I can't find on Squid's website is how to use
Squid as a SOCKS server. I heard with some tricks it could
Following lines are appearing in the file cache.log.
2003/07/18 20:32:30| Accepting HTTP connections at
0.0.0.0, port 8080, FD 10.
2003/07/18 20:32:30| Accepting ICP messages at
0.0.0.0, port 3130, FD 11.
Does this mean that there is some error.
No, this is normal.
Also access.log
Presently I get access logs only from those clients where
I have changed proxy settings. It is not possible to
go to all machines and change the same. I think I can
use transparent proxy for some.
Transparent proxying is a hack that breaks the HTTP spec
and can cause all sorts of problems. You
How do I block the download of .exe files via the browser?
I want to block .exe, zip, and other potentially executable
files ...
acl progs urlpath_regex -i \.exe$
http_access deny progs
The -i makes the matching case insensitive, and the $ makes it
match at the end of the line.
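To cover the other extensions mentioned (zip, etc.), the same acl can take an alternation - a sketch, with the extension list as an assumption:

```
# Block common downloadable/executable extensions, case-insensitively
acl progs urlpath_regex -i \.(exe|zip|bat|com|scr)$
http_access deny progs
```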
This was
Thanks for the help
You're quite welcome.
(I actually did try to find this answer but couldn't and decided
to
ask the list for its help (thinking that was, in fact, the purpose of
the list.))
Yes, that is the purpose of the list. However, you did not indicate
in your question that you
All my users use the proxy server,
but I have the web page for my intranet,
so I use the file proxy.pac, where
we had configured the local IP address to be
routed directly.
However, not all users use proxy.pac, and the Apache
logs for my web page intranet.domain appear with the
IP address of the
I have squid server running in transparent mode.
The scenario is:
1) As the user sends any http request all the port 80 traffic will
be redirected to squid server.
2) The Squid Server will then throw a page(jsp or cgi) on the users
screen where he will enter the username and password.
3) Now as
I've googled and read the FAQ and can't seem to find a definite answer
to this problem. I have a Linux box running Squid Cache: Version 2.4.STABLE7,
and these are the relevant config options
(Removed for brevity)
What I'm trying to do is allow access to the proxy during the times
08:00 and 23:00 and
Does someone know how mass mailers and spiders can be blocked from
functioning via Squid?
Mass mailers generally use SMTP over port 25, which Squid has nothing
to do with. Spiders, on the other hand, do use HTTP, and Squid can
be part of the solution there.
I actually tried using HTB to
Is there any point in starting testing of 2.5S3 + all patches now, or
should I just wait for STABLE4 to be released shortly?
Squid 3.0 recently entered its release cycle, so there may not be
a 2.5STABLE4, but a 3.0STABLE1 instead. I'm not a developer, so I
can't say for sure.
If you haven't
clients that come to use the systems in the cafe do so; it's used to
do mass telemarketing in the form of scam mails
Your best bet would be to find some unique characteristic of the
spider (such as the User Agent string) and setup a delay pool to
slow it way down. You indicated you tried this before
Please post your entire squid.conf (minus any comment lines)
Here ya go, thanks very much.
Other than the http_port line being commented out, your squid.conf
looks fine. It WAS uncommented when you tried to use the proxy, right?
If so, I can give you two suggestions:
1) When you tried the
Please stop posting your message repeatedly. I can assure you the
list members received all the copies you have sent thus far.
Just because you did not receive an answer yet doesn't mean that
the message didn't go through or that you're being ignored; it just
means that no one who has seen
Put in your squid.conf file...
request_body_max_size 20 KB
He wanted to restrict downloads, not uploads. For downloads use:
reply_body_max_size 20 KB
Adam
I don't think he/she knows anything about squid or apache or anything
for that matter. He/she is just a spammer who annoys all the people
here.
Spammer probably isn't the correct term, because spammers generally
want you to buy something or go to a particular website. This person
probably
Here is what I am trying, but it doesn't seem to work.
Try this:
acl myusers proxy_auth REQUIRED
external_acl_type ip_auth %SRC %LOGIN
/path/ip_user_check -f /path/ip_and_userID
acl ip_auth_acl external ip_auth
http_access allow myusers ip_auth_acl
Adam
Please guys, someone help me. I've already asked this, but nobody
answered.
When did you ask? I don't recall receiving this message before, and a search
of the archives came up blank.
Some people in my organization read e-mails from an outside ISP with
Outlook Express. But when they try to
I use a chrooted Squid-2.5.STABLE3 over djb daemontools.
If I do not use a redirect program, everything is fine.
When i set the directives to:
redirect_program /bin/redir.pl #because of chroot /usr/local/squid
redirect_children 2 # i think it is enough
nothing happens.
Try these:
1) Run
In MS Proxy, we can monitor in real time all users
connected to the server. How can we do this in Squid?
I know the Cache Manager has some per-user info; you
could see if the info there meets your needs.
Adam
Yes, there is a delay pool FAQ:
http://www.squid-cache.org/Doc/FAQ/FAQ-19.html#ss19.8
If you haven't read it already, I recommend you do so.
There's also documentation in the default squid.conf.
Some things to remember:
1) Delay pool settings are in bytes, and your bandwidth
is likely measured
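As an illustration of the bytes-vs-bits point, a minimal class 1 pool capping matched traffic at about 256 kbit/s (the acl name and values are assumptions):

```
# delay_parameters takes bytes/second: 256 kbit/s ~ 32000 bytes/s
acl limited src 192.168.1.0/24
delay_pools 1
delay_class 1 1
delay_parameters 1 32000/32000
delay_access 1 allow limited
delay_access 1 deny all
```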
It is hard to find any worse real life workload for RAID5 than
Squid cache.
If you want to raid your cache then use mirroring, not RAID5.
What about RAID 0, striping the data across multiple drives? Normally
that will improve disk performance.
Lacks redundancy, but data loss isn't much of an
Forwarded to list for sake of archives
On Wed 2003-07-23 at 17:43, Adam Aube wrote:
4) I don't know for sure if you can set the per-user
limits in Class 2 and 3 pools to -1/-1 (never used those
classes), but if you can, then it should divide bandwidth
evenly among connections, up
I defined slow and fast access using delay pools,
but downloads on the slow access machines were much faster than
on the fast access machines.
1) You put slow_users into both the slow and fast
pool, giving it more total bandwidth than fast_users
2) You used bad syntax adding users to the pools
Try this instead:
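A sketch of what the corrected setup probably looks like (acl names and rates are assumptions; the key points are that each group goes into exactly one pool and that delay_access uses allow/deny):

```
delay_pools 2
delay_class 1 1
delay_class 2 1
# Pool 1: fast users
delay_parameters 1 64000/64000
delay_access 1 allow fast_users
delay_access 1 deny all
# Pool 2: slow users (not also in pool 1)
delay_parameters 2 16000/16000
delay_access 2 allow slow_users
delay_access 2 deny all
```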
You asked this question earlier; you don't need to post it again.
Adam
I have implemented ACLs to restrict some users from browsing
at specified times... guess what, some users are changing
their IP addresses to browse anyway. Is there a way Squid can handle
this?
You could try using authentication, and then use the username
instead of the IP address as the basis for
I tried the command killall -0 squid to stop Squid. Then I
tried /usr/local/squid/sbin/squid to start Squid. It
gives the message "Squid already running".
You didn't read Henrik's instructions well enough. killall
-0 doesn't actually kill the process; it's a way to check
if a process is running.
If killall
I have Squid Cache: Version 2.5.STABLE3.
The problem is that it slow resolves addresses on peak times.
Usually it says The following error was encountered: Unable
to determine IP address from host name for .. The dnsserver
returned: No Address records
The cache.log file is filled with
I am under the impression that the original poster
was already on 'internal dns', so to speak. Perhaps
the error message uses an old 'semantic/string'; not
100% sure though.
You are correct - I misread the post. He could try
disabling the internal DNS and see if that helps. He
could also try
How can I set Squid to never store any image
in its cache?
You can use the no_cache directive (see the FAQ).
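A minimal sketch of no_cache matching common image extensions (the extension list is an assumption):

```
# Never cache objects whose URL path ends in an image extension
acl images urlpath_regex -i \.(gif|jpg|jpeg|png)$
no_cache deny images
```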
Squid stores all images, and when a designer
changes these images, he can't see the new image, because
Squid serves the old image.
The designer could just click Refresh
Has anyone implemented, or have any ideas on how to use,
Squid in conjunction with wireless to authenticate
users for internet access?
As long as you have basic TCP/IP connectivity, Squid
will work just fine over wireless. You can just use
the standard basic/digest auth helpers (depending on
the
Can anyone tell me why with the delay_pool settings
below *everyone* is put into delay_pool 2?
How are you verifying everyone is put into delay pool 2?
Unless somehow all your users are matching on the overused
acl, the acls you have setup are fine.
A bit of advice: you can change delay pool
My PC (10.167.211.11) is the ONLY ONE supposed to connect to MSN
Messenger. The fact is that when trying to connect from another
IP it still works!
This means your acl to block MSN Messenger isn't working. There was
a thread in late June that discussed how to block MSN Messenger -
that thread
Now I'm trying to make MSN Messenger, Yahoo and ICQ work with
it, and
I've just managed to make the first two work just right
Just out of curiosity - how did you get Yahoo! and MSN Messenger
to work? There was a recent post asking about this, and I have had
problems as well.
Adam
How do I go about searching through the entire LDAP store for a
username?
As long as they are in separate containers, not separate domains,
the helper can search given the right parameter. I believe it is
-f; check the archives for this month to be sure. It has been answered
a few times; you
FYI, the percentage is Page faults with physical i/o /
Number of HTTP requests received is 0.29%. The physical
memory is 256 DDR, and the swap cache dir is 5000 16 256.
I use aufs, heap lfuda replacement policy.
I assume you mean that Page Faults / HTTP Requests = 0.0029,
which would be
With MS proxy, it used my login credentials for the domain to allow
access to the proxy server, so if I was logged in to the domain I
didn't have to authenticate to the proxy.
Is there a way to do this with Squid?
Yes, by using Squid's NTLM auth support.
See the Squid FAQ for the Winbind