Re: [squid-users] Securing squid3

2013-02-14 Thread Amos Jeffries

On 15/02/2013 10:18 a.m., Andreas Westvik wrote:

So i actually got it working!

Client -> gateway -> havp -> squid -> internets

I actually had blocked myself totally out of squid3, so that was quite the head 
scratcher. It turned out that http_access deny all has to be
at the bottom of the config file.  ;)


:-)

You started this thread with a question on how to make Squid secure. If 
you are using the Squeeze or Wheezy package you are not secure: the 
Squeeze package is missing patches for 3 CVE vulnerabilities, and the Wheezy 
package is currently missing 1.


Also, since you have a good handle on where the traffic is coming from 
you can lock down the proxy listening port.


I would suggest a small variant of the mangle table rule which can be 
found here:

http://wiki.squid-cache.org/ConfigExamples/Intercept/LinuxDnat
By adding a "-s !192.168.*" stanza to exclude your internal clients from 
the port block you can give them service while halting all external access.
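As a sketch, the lockdown could look like the following (the WAN-facing use of the mangle table and the LAN subnet 192.168.0.0/24 are assumptions drawn from this thread; negated-source syntax varies between iptables versions, so this is untested):

```shell
# Hypothetical sketch: drop any packet aimed at the proxy port unless it
# comes from the internal LAN (192.168.0.0/24 assumed from this thread).
iptables -t mangle -A PREROUTING -p tcp --dport 3128 ! -s 192.168.0.0/24 -j DROP
```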



So then I pasted this into squid.conf

cache_peer 192.168.0.24 parent 3127 0 no-query no-digest
And then I reloaded and everything just worked.

Now my second server running Debian wheezy is a first-gen MacBook, so that is 
not a beast, but it works just fine.
The log folder is mounted in RAM for speed.

I made a little screencast of the thing working
Have a look

https://vimeo.com/59687536

Thanks for the help everyone! :)
On Feb 14, 2013, at 17:24 , Andreas Westvik  wrote:


havp supports a parent setup, and as far as I have seen, it should be set up 
before squid.
Now, I can always switch this around and move the squid3 setup to 192.168.0.24 
and set up
havp on 192.168.0.1 of course.
But 192.168.0.1 is running Debian "production", and Debian does not
support havp on squeeze. So I'm using Debian wheezy for havp in the 
meantime. And it's not installed via apt.


HAVP appears to be a neglected project. You may want to update the 
scanner to another AV (clamav with c-icap perhaps).


NP: With ICAP you can plug almost any AV scanner system into Squid 
and have only the MISS traffic scanned; pre-scanned HITs are still 
served out of cache at full speed. ICAP also supports streamed scanning 
from the latest AV systems, where the client gets delivery far faster.
 * Serving from cache without re-scanning is a controversial topic 
though. It is fast on the HITs, but permits any infections in cache to 
be delivered even after scanner signatures are updated.
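For example, a clamav-via-c-icap hookup might look like this in squid.conf (the service name, ICAP port, and the squidclamav service path are assumptions that depend on your c-icap installation):

```
icap_enable on
# Scan response bodies (MISS traffic) before they enter the cache
icap_service svc_clamav respmod_precache bypass=off icap://127.0.0.1:1344/squidclamav
adaptation_access svc_clamav allow all
```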






If squid caches infected files, the local clamav should take care of that 
anyway? Since havp on the other server is
using clamav as well.


Try plugging clamav directly into Squid. c-icap works for most people 
(unless you are one of the lucky ones with trouble).




I really don't think the iptables rules should be that difficult to set up, 
since I intercept the web traffic with this:

iptables -t nat -A PREROUTING -i eth3 -p tcp --dport 80 -j REDIRECT --to-port 
3128

So it's basically the same thing, but kinda like -j REDIRECT -to-destination 
192.168.0.24:3127

But it's not working! grr!


REDIRECT is a special case of the DNAT target which redirects to the host's 
main IP address. You cannot specify a destination IP with the REDIRECT target; 
you can with DNAT. The LinuxDnat wiki page I linked to above has all the 
details you need for this - the iptables rules are the same for any 
proxy which accepts NAT'd traffic.


So...
 * When your box IP is dynamically assigned and not known in advance, 
use REDIRECT.
 * When your box IP is statically assigned, use DNAT to the IP Squid is 
listening on.
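Side by side, the two cases might look like this (eth3 and the addresses are taken from this thread; an untested sketch, not a drop-in ruleset):

```shell
# Case 1: proxy IP dynamic/unknown - redirect to the local box's own port
iptables -t nat -A PREROUTING -i eth3 -p tcp --dport 80 -j REDIRECT --to-port 3128

# Case 2: proxy IP static and known - DNAT to the exact IP:port Squid listens on
iptables -t nat -A PREROUTING -i eth3 -p tcp --dport 80 -j DNAT --to-destination 192.168.0.1:3128
```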


Squid-3.2+ provides protection against the CVE-2009-0801 security 
vulnerability in NAT and TPROXY traffic. I doubt HAVP supplies that, but 
it may.
Note that you cannot receive traffic at a proxy when it was NAT'd on another 
box - NAT erasing the destination IP is a cause of that CVE.


Amos


Re: [squid-users] Netflix+squid

2013-02-14 Thread Amos Jeffries

On 15/02/2013 1:24 p.m., mb...@whywire.com wrote:

Hi all,

A friend of mine has a company outside the U.S. and wants to provide 
Netflix to his customers. Since I can set up a proxy here for him and 
have his clients use my proxy to access Netflix, is there any other 
solution that can optimize it even better?


Better than what? You have not provided any information on what 
configuration settings you are using, so we cannot tell whether you 
configured it for good performance or not.




Can you cache the videos, by the way?


Unknown. You will want to look into the cached object size limits 
(the default maximum_object_size directive is probably too small for large 
videos). Then look into whether the videos are actually cacheable. Paste 
one of their URLs into redbot.org for info on that.
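As a starting point, the relevant squid.conf directives might look like this (the values are illustrative only; the built-in default for maximum_object_size is 4 MB, far too small for video):

```
# Raise the ceiling on cacheable object size before testing video caching
maximum_object_size 512 MB
# Make sure the cache_dir is large enough to hold such objects
cache_dir ufs /var/spool/squid3 20000 16 256
```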


Amos


Re: [squid-users] Help with server-first and mimic server certificate

2013-02-14 Thread Amos Jeffries

On 15/02/2013 2:23 a.m., Prasanna Venkateswaran wrote:

Hi,
   I have been trying to set up squid which can intercept https
traffic without client (read it as browser proxy) changes. I am using
the latest squid 3.3.1. When I actually open a https site I still see
the certificate with the parameters I provided (for myCA.pem) and I
dont see any of the original certificate's properties being mimicked.
I have listed my config below. Please let me know whether I am missing
anything. Pardon me if am overlooking any config. I am relatively new
to squid.

My iptable config:

Chain PREROUTING (policy ACCEPT)
target prot opt source   destination
REDIRECT   tcp  --  anywhere anywheretcp
dpt:www redir ports 3128
REDIRECT   tcp  --  anywhere anywheretcp
dpt:https redir ports 3129


My Squid config:

http_access deny all
always_direct allow all
ssl_bump server-first all

# Squid normally listens to port 3128
http_port 3128 transparent
https_port 3129 intercept cert=/etc/squid/ssl_cert/myCA.pem ssl-bump


Mimicking only works when the certificate is being created by Squid.

The above config line is a _static_ certificate configuration. Whatever 
request arrives at Squid will have its SSL set up using the myCA.pem keys - 
which were created by you in advance and are fixed.

What you need is a _dynamic_ certificate configuration: the CA 
certificate, private key, and generate-* SSL options enabled on this port, 
to allow Squid to create new certificates as needed.
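A dynamic-generation port configuration for squid 3.3 might look like the following (the ssl_crtd helper path and cache sizes are assumptions; check where your build installed the helper):

```
# Generate per-site certificates signed by myCA.pem instead of serving it statically
https_port 3129 intercept ssl-bump generate-host-certificates=on \
    dynamic_cert_mem_cache_size=4MB cert=/etc/squid/ssl_cert/myCA.pem
sslcrtd_program /usr/lib/squid/ssl_crtd -s /var/lib/squid/ssl_db -M 4MB
sslcrtd_children 5
```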



Amos


[squid-users] Netflix+squid

2013-02-14 Thread mbaki

Hi all,

A friend of mine has a company outside the U.S. and wants to provide 
Netflix to his customers. Since I can set up a proxy here for him and 
have his clients use my proxy to access Netflix, is there any other 
solution that can optimize it even better? Can you cache the videos, 
by the way?


Thanks
[squid-users] Re: Securing squid3

2013-02-14 Thread babajaga
So, at least you will need something like
iptables -t nat -A PREROUTING -i eth3 -p tcp --dport 80 -j DNAT --to
192.168.0.24:80
on the squid-box (default gateway).

But then the question arises: does HAVP support transparent proxying, like
squid does?

If it does, then 
iptables -t nat -A PREROUTING -i ethx -p tcp --dport 80 -j REDIRECT
--to-port 3127
should do the trick.

(I do NOT think that
iptables -t nat -A PREROUTING -i eth3 -p tcp --dport 80 -j DNAT --to
192.168.0.24:3127
will work, because that would not be a "standard" transparent proxy setup.)




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Securing-squid3-tp4658495p4658504.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] Securing squid3

2013-02-14 Thread Andreas Westvik
So i actually got it working! 

Client -> gateway -> havp -> squid -> internets

I actually had blocked myself totally out of squid3, so that was quite the head 
scratcher. It turned out that http_access deny all has to be
at the bottom of the config file.  ;)
So then I pasted this into squid.conf 

cache_peer 192.168.0.24 parent 3127 0 no-query no-digest
And then I reloaded and everything just worked.

Now my second server running Debian wheezy is a first-gen MacBook, so that is 
not a beast, but it works just fine. 
The log folder is mounted in RAM for speed. 

I made a little screencast of the thing working
Have a look

https://vimeo.com/59687536

Thanks for the help everyone! :)


On Feb 14, 2013, at 17:24 , Andreas Westvik  wrote:

> havp supports a parent setup, and as far as I have seen, it should be set up 
> before squid.
> Now, I can always switch this around and move the squid3 setup to 
> 192.168.0.24 and set up
> havp on 192.168.0.1 of course. 
> But 192.168.0.1 is running Debian "production", and Debian does not
> support havp on squeeze. So I'm using Debian wheezy for havp in the 
> meantime. And it's not installed via apt. 
> 
> 
> If squid caches infected files, the local clamav should take care of that 
> anyway? Since havp on the other server is
> using clamav as well. 
> 
> I really don't think the iptables rules should be that difficult to set up, 
> since I intercept the web traffic with this:
> 
> iptables -t nat -A PREROUTING -i eth3 -p tcp --dport 80 -j REDIRECT --to-port 
> 3128
> 
> So it's basically the same thing, but kinda like -j REDIRECT -to-destination 
> 192.168.0.24:3127 
> 
> But it's not working! grr!
> 
> -Andreas
> 
> On Feb 14, 2013, at 17:12 , babajaga  wrote:
> 
>> Then it's more a question of how to set up iptables, the clients and HAVP.
>> However, why HAVP first?
>> This has the danger of squid caching infected files. And HAVP will scan
>> cached files over and over again.
>> Then squid will be an upstream proxy of HAVP. IF HAVP supports parent proxies,
>> then squid should have no problem.
>> But this then either needs a proxy.pac for the clients' browsers or explicit
>> proxy config for the clients' browsers.
>> This would be the easier path. When this works, then think about using
>> iptables with explicit routing of all packets to the HAVP box. And back, so you
>> have to consider NAT. I am not fit enough in iptables, so I would keep it simple:
>> 
>> client-PC-squid-HAVP--web
>> 
>> And the transparent setup for squid is well documented.
>> 
>> PS: The graphic is a bit small :-)
>> 
>> 
>> 
>> 
>> 
>> --
>> View this message in context: 
>> http://squid-web-proxy-cache.1019090.n4.nabble.com/Securing-squid3-tp4658495p4658501.html
>> Sent from the Squid - Users mailing list archive at Nabble.com.
> 



Re: [squid-users] Re: Securing squid3

2013-02-14 Thread Andreas Westvik
havp supports a parent setup, and as far as I have seen, it should be set up 
before squid.
Now, I can always switch this around and move the squid3 setup to 192.168.0.24 
and set up
havp on 192.168.0.1 of course. 
But 192.168.0.1 is running Debian "production", and Debian does not
support havp on squeeze. So I'm using Debian wheezy for havp in the 
meantime. And it's not installed via apt. 


If squid caches infected files, the local clamav should take care of that 
anyway? Since havp on the other server is
using clamav as well. 

I really don't think the iptables rules should be that difficult to set up, 
since I intercept the web traffic with this:

iptables -t nat -A PREROUTING -i eth3 -p tcp --dport 80 -j REDIRECT --to-port 
3128

So it's basically the same thing, but kinda like -j REDIRECT -to-destination 
192.168.0.24:3127 

But it's not working! grr!

-Andreas

On Feb 14, 2013, at 17:12 , babajaga  wrote:

> Then it's more a question of how to set up iptables, the clients and HAVP.
> However, why HAVP first?
> This has the danger of squid caching infected files. And HAVP will scan
> cached files over and over again.
> Then squid will be an upstream proxy of HAVP. IF HAVP supports parent proxies,
> then squid should have no problem.
> But this then either needs a proxy.pac for the clients' browsers or explicit
> proxy config for the clients' browsers.
> This would be the easier path. When this works, then think about using
> iptables with explicit routing of all packets to the HAVP box. And back, so you
> have to consider NAT. I am not fit enough in iptables, so I would keep it simple:
> 
> client-PC-squid-HAVP--web
> 
> And the transparent setup for squid is well documented.
> 
> PS: The graphic is a bit small :-)
> 
> 
> 
> 
> 
> --
> View this message in context: 
> http://squid-web-proxy-cache.1019090.n4.nabble.com/Securing-squid3-tp4658495p4658501.html
> Sent from the Squid - Users mailing list archive at Nabble.com.



[squid-users] Re: Securing squid3

2013-02-14 Thread babajaga
Then it's more a question of how to set up iptables, the clients and HAVP.
However, why HAVP first?
This has the danger of squid caching infected files. And HAVP will scan
cached files over and over again.
Then squid will be an upstream proxy of HAVP. IF HAVP supports parent proxies,
then squid should have no problem.
But this then either needs a proxy.pac for the clients' browsers or explicit
proxy config for the clients' browsers.
This would be the easier path. When this works, then think about using
iptables with explicit routing of all packets to the HAVP box. And back, so you
have to consider NAT. I am not fit enough in iptables, so I would keep it simple:

client-PC-squid-HAVP--web

And the transparent setup for squid is well documented.

PS: The graphic is a bit small :-)





--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Securing-squid3-tp4658495p4658501.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] Securing squid3

2013-02-14 Thread Andreas Westvik
heh, try this one

http://bildr.no/view/1389674


On Feb 14, 2013, at 16:49 , Andreas Westvik  wrote:

> Sorry, I have been replying directly to users' email.
> 
> To clear things up, here is an image of the setup:
> 
> http://bildr.no/image/1389674.jpeg
> 
> 
> havp is running on 192.168.0.24:3127 
> squid3 is running on 192.168.0.1:3128
> 
> -Andras
> 
> On Feb 14, 2013, at 16:45 , babajaga  wrote:
> 
>> I think, 2 corrections:
>> 
>> Instead
>>> squid.conf: 
>> cache_peer localhost parent 8899 0 no-query no-digest <
>> 
>> 
>> squid.conf: 
>> cache_peer avp-host parent 8899 0 no-query no-digest
>> never_direct allow all
>> 
>> 
>> Otherwise, uncacheable requests will not go through the parent proxy, but direct,
>> which will result in some files not being scanned by havp.
>> 
>> 
>> 
>> 
>> --
>> View this message in context: 
>> http://squid-web-proxy-cache.1019090.n4.nabble.com/Securing-squid3-tp4658495p4658498.html
>> Sent from the Squid - Users mailing list archive at Nabble.com.
> 



Re: [squid-users] Securing squid3

2013-02-14 Thread Andreas Westvik
Sorry, I have been replying directly to users' email.

To clear things up, here is an image of the setup:

http://bildr.no/image/1389674.jpeg


havp is running on 192.168.0.24:3127 
squid3 is running on 192.168.0.1:3128

-Andras

On Feb 14, 2013, at 16:45 , babajaga  wrote:

> I think, 2 corrections:
> 
> Instead of
>> squid.conf: 
>> cache_peer localhost parent 8899 0 no-query no-digest
> 
> use:
> 
> squid.conf: 
> cache_peer avp-host parent 8899 0 no-query no-digest
> never_direct allow all
> 
> 
> Otherwise, uncacheable requests will not go through the parent proxy, but direct,
> which will result in some files not being scanned by havp.
> 
> 
> 
> 
> --
> View this message in context: 
> http://squid-web-proxy-cache.1019090.n4.nabble.com/Securing-squid3-tp4658495p4658498.html
> Sent from the Squid - Users mailing list archive at Nabble.com.



Re: AW: [squid-users] Securing squid3

2013-02-14 Thread babajaga
I think, 2 corrections:

Instead of

squid.conf: 
cache_peer localhost parent 8899 0 no-query no-digest

use:

squid.conf: 
cache_peer avp-host parent 8899 0 no-query no-digest
never_direct allow all


Otherwise, uncacheable requests will not go through the parent proxy, but direct,
which will result in some files not being scanned by havp.




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Securing-squid3-tp4658495p4658498.html
Sent from the Squid - Users mailing list archive at Nabble.com.


WG: [squid-users] Securing squid3

2013-02-14 Thread Fuhrmann, Marcel
When you use havp and squid on the same server, you don't need iptables.

With 

cache_peer localhost parent 8899 0 no-query no-digest

squid uses a parent proxy (havp). http://www.server-side.de/ideas.htm



Take a look here:
http://www.christianschenk.org/blog/using-a-parent-proxy-with-squid/



-Ursprüngliche Nachricht-
Von: Andreas Westvik [mailto:andr...@spbk.no]
Gesendet: Donnerstag, 14. Februar 2013 16:29
An: Fuhrmann, Marcel
Betreff: Re: [squid-users] Securing squid3 

Thanks for the answers!

Went from:
tcp0  0 *:3128  *:* LISTEN 
to this:
tcp0  0 192.168.0.1:3128*:* LISTEN 

Very good. 
Now, about the havp stuff you mentioned, I really did not understand it.

> cache_peer localhost parent 8899 0 no-query no-digest

How will this redirect traffic to 192.168.0.24? I'm using this command to 
gather traffic and send it to 192.168.0.1:3128

iptables -t nat -A PREROUTING -i eth3 -p tcp --dport 80 -j REDIRECT --to-port 
3128

I have even tried without this command, and it's not working.

-Andreas

On Feb 14, 2013, at 16:00 , "Fuhrmann, Marcel"  wrote:

> Hi Andreas,
> 
> take a look:
> 
> 1. 
> acl LAN src 192.168.0.0/32
> ..
> ..
> http_access allow LAN
> http_access deny all
> 
> 
> 
> 2. http_port SQUID-IP:3128
> 
> 
> 3. Example:
> 
> squid.conf:
> cache_peer localhost parent 8899 0 no-query no-digest
> 
> havp.conf:
> #Port
> PORT 8899
> 
> 
> --
> Marcel
> 
> 
> -Ursprüngliche Nachricht-
> Von: Andreas Westvik [mailto:andr...@spbk.no]
> Gesendet: Donnerstag, 14. Februar 2013 15:43
> An: squid-users
> Betreff: [squid-users] Securing squid3
> 
> Hi everybody
> 
> I have been running squid3 on my Debian squeeze on/off for a few weeks now. 
> And there are a few things I'm not sure of:
> 
> 1. How can I be sure that I'm running it securely? I really only want squid3 
> to serve my local clients (192.168.0.0/32). 
> 2. Can I bind squid3 to listen on only one device/IP?
> 3. Just for fun, I have set up havp on a different server. Is it possible to 
> send my http traffic to that server first? (havp runs on 192.168.0.24) Then 
> back to squid3? 
> 
> As of now, I need to configure my clients to connect to that havp server, 
> then havp will send traffic back to squid. But I would like that to happen 
> with some automatic iptables commands.
> I have tried several iptables setups, but nothing will make this work. 
> I cannot for the life of me intercept the port 80 traffic, then 
> redirect it to 192.168.0.24:3127
> 
> 
> 
> Like this: Client -> Gw 192.168.0.1 -> havp 192.168.0.24:3127 ->
> squid3 192.168.0.1:3128 -> internets
> 
> This is my setup:
> 
> http_port 3128 transparent
> acl LAN src 192.168.0.0/32
> acl localnet src 127.0.0.1/255.255.255.255
> http_access allow LAN
> http_access allow localnet
> cache_dir ufs /var/spool/squid3 5000 16 256
> 
> #Block
> acl ads dstdom_regex -i "/etc/squid3/squid.adservers"
> http_access deny ads
> 
> eth3: 192.168.0.1 (non-dhcp environment)
> eth4: WAN official IP (non-dhcp)
> 
> -Andreas



AW: [squid-users] Securing squid3

2013-02-14 Thread Fuhrmann, Marcel
Hi Andreas,

take a look:

1. 
acl LAN src 192.168.0.0/32
..
..
http_access allow LAN
http_access deny all



2. http_port SQUID-IP:3128


3. Example:

squid.conf:
cache_peer localhost parent 8899 0 no-query no-digest

havp.conf:
#Port
PORT 8899


--
 Marcel


-Ursprüngliche Nachricht-
Von: Andreas Westvik [mailto:andr...@spbk.no] 
Gesendet: Donnerstag, 14. Februar 2013 15:43
An: squid-users
Betreff: [squid-users] Securing squid3 

Hi everybody

I have been running squid3 on my Debian squeeze on/off for a few weeks now. 
And there are a few things I'm not sure of:

1. How can I be sure that I'm running it securely? I really only want squid3 to 
serve my local clients (192.168.0.0/32). 
2. Can I bind squid3 to listen on only one device/IP?
3. Just for fun, I have set up havp on a different server. Is it possible to 
send my http traffic to that server first? (havp runs on 192.168.0.24) Then 
back to squid3? 

As of now, I need to configure my clients to connect to that havp server, then 
havp will send traffic back to squid. But I would like that to happen with some 
automatic iptables commands.
I have tried several iptables setups, but nothing will make this work. I cannot 
for the life of me intercept the port 80 traffic, then redirect it to 
192.168.0.24:3127 



Like this: Client -> Gw 192.168.0.1 -> havp 192.168.0.24:3127 -> squid3 
192.168.0.1:3128 -> internets

This is my setup:

http_port 3128 transparent
acl LAN src 192.168.0.0/32
acl localnet src 127.0.0.1/255.255.255.255
http_access allow LAN
http_access allow localnet
cache_dir ufs /var/spool/squid3 5000 16 256

#Block
acl ads dstdom_regex -i "/etc/squid3/squid.adservers"
http_access deny ads

eth3: 192.168.0.1 (non-dhcp environment)
eth4: WAN official IP (non-dhcp)

-Andreas


[squid-users] Securing squid3

2013-02-14 Thread Andreas Westvik
Hi everybody

I have been running squid3 on my Debian squeeze on/off for a few weeks now. 
And there are a few things I'm not sure of:

1. How can I be sure that I'm running it securely? I really only want squid3 to 
serve my local clients (192.168.0.0/32). 
2. Can I bind squid3 to listen on only one device/IP?
3. Just for fun, I have set up havp on a different server. Is it possible to 
send my http traffic to that server first? (havp runs on 192.168.0.24) Then 
back to squid3? 

As of now, I need to configure my clients to connect to that havp server, then 
havp will send traffic back to squid. But I would like that to happen with some 
automatic iptables commands.
I have tried several iptables setups, but nothing will make this work. I cannot 
for the life of me intercept the port 80 traffic, then redirect it to 
192.168.0.24:3127 



Like this: Client -> Gw 192.168.0.1 -> havp 192.168.0.24:3127 -> squid3 
192.168.0.1:3128 -> internets

This is my setup:

http_port 3128 transparent
acl LAN src 192.168.0.0/32
acl localnet src 127.0.0.1/255.255.255.255
http_access allow LAN
http_access allow localnet
cache_dir ufs /var/spool/squid3 5000 16 256

#Block
acl ads dstdom_regex -i "/etc/squid3/squid.adservers"
http_access deny ads

eth3: 192.168.0.1 (non-dhcp environment)
eth4: WAN official IP (non-dhcp)

-Andreas

[squid-users] Help with server-first and mimic server certificate

2013-02-14 Thread Prasanna Venkateswaran
Hi,
  I have been trying to set up squid to intercept https
traffic without client (read: browser proxy) changes. I am using
the latest squid 3.3.1. When I actually open an https site I still see
the certificate with the parameters I provided (for myCA.pem), and I
don't see any of the original certificate's properties being mimicked.
I have listed my config below. Please let me know whether I am missing
anything. Pardon me if I am overlooking any config. I am relatively new
to squid.

My iptable config:

Chain PREROUTING (policy ACCEPT)
target prot opt source   destination
REDIRECT   tcp  --  anywhere anywheretcp
dpt:www redir ports 3128
REDIRECT   tcp  --  anywhere anywheretcp
dpt:https redir ports 3129


My Squid config:

http_access deny all
always_direct allow all
ssl_bump server-first all

# Squid normally listens to port 3128
http_port 3128 transparent
https_port 3129 intercept cert=/etc/squid/ssl_cert/myCA.pem ssl-bump

#icap settings
icap_service service_url_check reqmod_precache bypass=on icap://127.0.0.1:1344/url_check
icap_enable on
icap_preview_size 128
icap_service_failure_limit -1
icap_preview_enable on
icap_send_client_ip on
icap_send_client_username on
icap_client_username_header X-Authenticated-User
icap_client_username_encode on
adaptation_service_set  class_url_check  service_url_check
adaptation_access  class_url_check  allow all

 Thanks & Regards,
Prasanna


Re: [squid-users] ipv6 support for 3.1.16

2013-02-14 Thread Amos Jeffries

On 14/02/2013 11:47 p.m., anita wrote:

Hi,

I am using squid version 3.1.16 on a red hat linux OS.
 From the release notes, I do find that there is ipv6 support from 3.1.1
release.

What I need to know is:
1. the option to specify dns_nameservers: can this directive hold an ipv6
address and an ipv4 address at the same time - that is, if I have one directive
for each address in the same squid.conf?


Almost anywhere in squid.conf where IPv4 was accepted will also accept IPv6.

The only exception is the WCCP settings, since the protocol version(s) 
implemented by Squid are IPv4-only.
PS. we are still looking for a sponsor to adjust it to the v3 protocol 
with IPv6 support.



2. I understand that if I simply specify a port number for http_port, it should
give me support for both ipv4 & 6 automatically.


Yes.

There is one proviso. If you have a split TCP stack (the 'v4-mapping' 
feature of IPv6 is missing or disabled in your kernel) then 3.1 has 
issues. You will need 3.2 or later for good IPv6 support on those systems.




3. I would need to configure my eth0 interface (on the machine where I am
running my squid) to have both ipv4 & ipv6 addresses. In this case, should I
restart squid? Or rather, should squid be restarted every time the eth
interface is reconfigured? Is it necessary? According to my understanding,
only config setting changes would need a reconfiguration. Please correct me.


If you configure _only_ a port number in http_port there is no binding 
between Squid and any IPs or NIC. You can change them as needed on the 
system without restarting Squid.


If you configure an IP or hostname in the http_port line, you will need 
to 'squid -k reconfigure' after making alterations to _that_ address or 
host name.


Amos


Re: [squid-users] query about --with-filedescriptors and ulimit

2013-02-14 Thread Amm
Ok, I am answering my own question, just in case someone else faces the same issue.

The compile-time option --with-filedescriptors is just a suggestion to squid (as 
clarified by Amos).

Earlier I was assuming that it is enough and there is no need to set ulimit.

But after a few commands and Amos's reply, I realised we must set ulimit.
Even after the WARNING by squid, squid was not actually increasing the limit.


Before ulimit (1024/4096) and --with-filedescriptors=16384

cat /proc/SQUIDPID/limits
Max open files    1024 4096 files 



After ulimit (16384/16384) and --with-filedescriptors=16384

cat /proc/SQUIDPID/limits
Max open files    16384    16384    files 


In short, you still need to set ulimit.


Here is how to do it on Fedora

1) Create file /etc/systemd/system/squid.service
2) Add following 3 lines in it.

.include /lib/systemd/system/squid.service
[Service]
LimitNOFILE=16384

3) systemctl daemon-reload
4) systemctl restart squid.service
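To confirm the new ceiling actually applied, you can read a process's effective limits straight from /proc. A minimal check (shown here against the current shell; for squid, substitute its PID):

```shell
# Print the effective soft/hard "Max open files" limits of a process.
# /proc/self/limits is used for illustration; for squid you would read
# /proc/$(pidof squid)/limits instead.
grep "Max open files" /proc/self/limits
```

The two numeric columns are the soft and hard limits that systemd's LimitNOFILE controls.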

Hope it helps

Amm


- Original Message -
> From: Amm 
> To: "squid-users@squid-cache.org" 
> Cc: 
> Sent: Thursday, 14 February 2013 3:53 PM
> Subject: Re: [squid-users] query about  --with-filedescriptors and ulimit
> 

>>>   I compiled squid using --with-filedescriptors=16384.
>>> 
>>>   So do I still need to set ulimit before starting squid?


Re: [squid-users] Squid negotiate authentication digest/basic

2013-02-14 Thread FredB

Thanks Amos,

I found something strange with the nonce: the nonce seems to never change, 
despite nonce_max_count

auth_param digest nonce_max_count 10
auth_param digest check_nonce_count yes
auth_param digest nonce_strictness on

http://www.squid-cache.org/Doc/config/auth_param/

With wireshark I'm seeing my nonce like nonce="a7qcucileAouwvp6" - ok, no problem, 
but it is still the same after many requests (hundreds).

I also tested with auth_param digest nonce_max_duration 2 minutes; I needed to 
reload my ID/password.

A bug? Or a misunderstanding?

Thanks



[squid-users] ipv6 support for 3.1.16

2013-02-14 Thread anita
Hi,

I am using squid version 3.1.16 on a red hat linux OS.
From the release notes, I do find that there is ipv6 support from the 3.1.1
release.

What I need to know is:
1. the option to specify dns_nameservers: can this directive hold an ipv6
address and an ipv4 address at the same time - that is, if I have one directive
for each address in the same squid.conf?

2. I understand that if I simply specify a port number for http_port, it should
give me support for both ipv4 & 6 automatically.

3. I would need to configure my eth0 interface (on the machine where I am
running my squid) to have both ipv4 & ipv6 addresses. In this case, should I
restart squid? Or rather, should squid be restarted every time the eth
interface is reconfigured? Is it necessary? According to my understanding,
only config setting changes would need a reconfiguration. Please correct me.

Thanks in advance.

-Anita



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/ipv6-support-for-3-1-16-tp4658490.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] query about --with-filedescriptors and ulimit

2013-02-14 Thread Amm
Umm your reply confused me further! :)

Please see below inline.




- Original Message -
> From: Amos Jeffries 
> To: squid-users@squid-cache.org
> 
> On 14/02/2013 10:12 p.m., Amm wrote:
>> 
>>  I compiled squid using --with-filedescriptors=16384.
>> 
>>  So do I still need to set ulimit before starting squid?

> Yes. Squid obeys both limits. The smaller of the two will determine how 
> many are available for active use.

So in my case, is the max limit 4096 or 1024? (for squid)


>>  squidclient gives this:
>> 
>>  [root@localhost ]# squidclient -h 127.0.0.1 mgr:info |grep -i desc
>>  File descriptor usage for squid:
>>           Maximum number of file descriptors:   16384
>>           Largest file desc currently in use:    888
>>           Number of file desc currently in use:  774
>>           Available number of file descriptors: 15610
>>           Reserved number of file descriptors:   100
>> 
>>  ulimit -H -n gives 4096
>>  ulimit -n gives 1024
>> 
>>  These are standard Fedora settings, I have not made any changes.

If squid obeys the smaller limit, shouldn't it report "Available number of file 
descriptors" as at most 4096?
Why is it reporting 15610?

> ... when this proxy reaches the limit for Squid, you will get a message 
> about socket errors and FD reserved will jump from 100 to something just 
> below that limit to prevent running out of FD in future.

I have SELinux disabled.

I just got this:

2013/02/14 15:07:08 kid1| Attempt to open socket for EUI retrieval failed: (24) 
Too many open files
2013/02/14 15:07:08 kid1| comm_open: socket failure: (24) Too many open files
2013/02/14 15:07:08 kid1| Reserved FD adjusted from 100 to 15391 due to failures
2013/02/14 15:07:08 kid1| '/usr/share/squid/errors/en-us/ERR_CONNECT_FAIL': 
(24) Too many open files
2013/02/14 15:07:08 kid1| WARNING: Error Pages Missing Language: en-us
2013/02/14 15:07:08 kid1| WARNING! Your cache is running out of filedescriptors

How do I know the number of FDs open when this error occurred? I want to know if it 
was 1024 or 4096.

Did squid automatically handle it? Why does it say 15391 instead of something 
below 4096?
Or is 15391 right and expected, and I do not have to set ulimit before squid 
starts?


 
>>  So back to my question:
>>  If I am compiling squid with --with-filedescriptors=16384
>>  do I need to set ulimit before starting squid?
>> 
>>  Or does squid automatically set ulimit?
> 
> Yes.

Was the "Yes" for "I have to set ulimit before starting squid",
OR
for "squid automatically sets ulimit and I do not have to do 
anything"?

> Amos

Thanks for your quick response.

Regards

Amm



Re: [squid-users] query about --with-filedescriptors and ulimit

2013-02-14 Thread Amos Jeffries

On 14/02/2013 10:12 p.m., Amm wrote:

Hello,

I have a query about how --with-filedescriptors and ulimit interact.

Every 2-3 days I keep getting a WARNING that the system is running out of descriptors.


I compiled squid using --with-filedescriptors=16384.

So do I still need to set ulimit before starting squid?


Yes. Squid obeys both limits. The smaller of the two will determine how 
many are available for active use.





Or does squid automatically set ulimit? (as it starts as root)


Squid can set its available FD state to something higher. Squid built 
with system support for adjusting rlimit can use it to request higher 
limits. But ulimit (or SELinux and friends) can come along later and 
prevent a number of those sockets being opened...
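The interaction can be seen with plain ulimit: an unprivileged process may lower its soft limit freely, but can only raise it up to the hard ceiling - and that ceiling is what caps Squid regardless of --with-filedescriptors:

```shell
# Show the current soft and hard file-descriptor limits,
# then lower the soft limit inside a subshell (lowering needs no privileges).
ulimit -S -n                          # current soft limit
ulimit -H -n                          # hard ceiling
( ulimit -S -n 64 && ulimit -S -n )   # the subshell now reports 64
```

Raising the soft limit above the hard ceiling fails the same way Squid's rlimit request would.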




I am using Fedora 16 with systemd squid.service (standard fedora file, no 
change)

Cache.log says:

2013/02/14 10:28:52 kid1| With 16384 file descriptors available


which is as expected.


squidclient gives this:

[root@localhost ]# squidclient -h 127.0.0.1 mgr:info |grep -i desc
File descriptor usage for squid:
 Maximum number of file descriptors:   16384
 Largest file desc currently in use:888
 Number of file desc currently in use:  774
 Available number of file descriptors: 15610
 Reserved number of file descriptors:   100

ulimit -H -n gives 4096
ulimit -n gives 1024

These are standard Fedora settings, I have not made any changes.


... when this proxy reaches the limit for Squid, you will get a message 
about socket errors and FD reserved will jump from 100 to something just 
below that limit to prevent running out of FD in future.


NP: there is a bug currently being investigated in squid-dev that Squid 
does not report when it does not have rlimit support available.




So back to my question:
If I am compiling squid with --with-filedescriptors=16384
do I need to set ulimit before starting squid?

Or does squid automatically set ulimit?


Yes.

Amos


Re: [squid-users] query about --with-filedescriptors and ulimit

2013-02-14 Thread Eliezer Croitoru

On 2/14/2013 11:12 AM, Amm wrote:

ulimit -H -n gives 4096
ulimit -n gives 1024

These are standard Fedora settings, I have not made any changes.


So back to my question:
If I am compiling squid with --with-filedescriptors=16384
do I need to set ulimit before starting squid?

Or does squid automatically set ulimit?

This gives squid a default of 16384 as the limit, if available.
In case the system limits it to 1k/4k, the lower limit is enforced 
by the OS.


You need to change the limits at the OS level for this specific 
service/user/process.


Many admins prefer to just add a line to the startup script:
ulimit -n 16384
(or another limit)

It works fine, so feel free to use it, unless you prefer to do it in the 
way the Fedora/Linux structure offers the admin.



Regards,
Eliezer




Thanks


Amm.


--
Eliezer Croitoru
http://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


[squid-users] query about --with-filedescriptors and ulimit

2013-02-14 Thread Amm
Hello,

I have a query about how --with-filedescriptors and ulimit interact.

Every 2-3 days I keep getting a WARNING that the system is running out of descriptors.


I compiled squid using --with-filedescriptors=16384.

So do I still need to set ulimit before starting squid?

Or does squid automatically set ulimit? (as it starts as root)


I am using Fedora 16 with systemd squid.service (standard fedora file, no 
change)

Cache.log says:

2013/02/14 10:28:52 kid1| With 16384 file descriptors available


which is as expected.


squidclient gives this:

[root@localhost ]# squidclient -h 127.0.0.1 mgr:info |grep -i desc
File descriptor usage for squid:
    Maximum number of file descriptors:   16384
    Largest file desc currently in use:    888
    Number of file desc currently in use:  774
    Available number of file descriptors: 15610
    Reserved number of file descriptors:   100

ulimit -H -n gives 4096
ulimit -n gives 1024

These are standard Fedora settings, I have not made any changes.


So back to my question:
If I am compiling squid with --with-filedescriptors=16384
do I need to set ulimit before starting squid?

Or does squid automatically set ulimit?


Thanks 


Amm.