Re: [squid-users] Re: ONLY Cache certain Websites.

2014-08-06 Thread Igor Novgorodov

Well, English is not my native language either, but that doesn't hurt much :)

1. Define an access list (a text file with the domains you want to cache, 
one domain per line):

acl domains_cache dstdomain "/etc/squid/lists/domains_cache.txt"

2. Add rules that allow caching for these domains while denying caching 
for all others:

cache allow domains_cache
cache deny all

That's all; that wasn't so difficult :)

P.S.
The always_direct directive is for something a little different (it is used 
with parent proxies), so just use "cache".
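Putting the two steps together, a minimal sketch (the file path is the one from the example above; the domains listed in the file are only illustrative placeholders):

```
# /etc/squid/lists/domains_cache.txt -- one domain per line;
# a leading dot matches the domain and all of its subdomains
.windowsupdate.com
.kaspersky.com

# squid.conf
acl domains_cache dstdomain "/etc/squid/lists/domains_cache.txt"
cache allow domains_cache
cache deny all
```

Everything matching the list is cacheable; every other response is passed through without being stored.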



[squid-users] Re: ONLY Cache certain Websites.

2014-08-06 Thread nuhll
Thanks for your answer.

I'll try to get it working, but I'm not sure how. I don't understand this
"acl" system. I know there are a lot of tutorials out there, but none in my
mother language, so I'm not able to fully understand such expert material.

Could you maybe show me at least one example of how to get it working? Also,
maybe there are things I can remove?

Here's my current config:

acl localnet src 192.168.0.0
acl all src all
acl localhost src 127.0.0.1

#access_log daemon:/var/log/squid/access.test.log squid

http_port 192.168.0.1:3128 transparent

cache_dir ufs /daten/squid 10 16 256

range_offset_limit 100 MB windowsupdate
maximum_object_size 6000 MB
quick_abort_min -1


# Add one of these lines for each of the websites you want to cache.

refresh_pattern -i
microsoft.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80% 432000
reload-into-ims

refresh_pattern -i
windowsupdate.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80%
432000 reload-into-ims

refresh_pattern -i
windows.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80% 432000
reload-into-ims

#kaspersky update
refresh_pattern -i
geo.kaspersky.com/.*\.(cab|dif|pack|q6v|2fv|49j|tvi|ez5|1nj|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip)
4320 80% 432000 reload-into-ims

#nvidia updates
refresh_pattern -i
download.nvidia.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80%
432000 reload-into-ims

#java updates
refresh_pattern -i
sdlc-esd.sun.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80%
432000 reload-into-ims

# DONT MODIFY THESE LINES
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .               0       20%     4320

#kaspersky update
acl kaspersky dstdomain geo.kaspersky.com

acl windowsupdate dstdomain windowsupdate.microsoft.com
acl windowsupdate dstdomain .update.microsoft.com
acl windowsupdate dstdomain download.windowsupdate.com
acl windowsupdate dstdomain redir.metaservices.microsoft.com
acl windowsupdate dstdomain images.metaservices.microsoft.com
acl windowsupdate dstdomain c.microsoft.com
acl windowsupdate dstdomain www.download.windowsupdate.com
acl windowsupdate dstdomain wustat.windows.com
acl windowsupdate dstdomain crl.microsoft.com
acl windowsupdate dstdomain sls.microsoft.com
acl windowsupdate dstdomain productactivation.one.microsoft.com
acl windowsupdate dstdomain ntservicepack.microsoft.com
acl CONNECT method CONNECT
acl wuCONNECT dstdomain www.update.microsoft.com
acl wuCONNECT dstdomain sls.microsoft.com

http_access allow kaspersky localnet
http_access allow CONNECT wuCONNECT localnet
http_access allow windowsupdate localnet

#test
http_access allow localnet
http_access allow all
http_access allow localhost
 



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/ONLY-Cache-certain-Websites-tp4667121p4667157.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Forwarding loop on squid 3.3.8

2014-08-06 Thread Amos Jeffries
On 7/08/2014 3:28 a.m., James Michels wrote:
> El miércoles, 6 de agosto de 2014, Amos Jeffries 
> escribió:
> 
>> On 7/08/2014 1:26 a.m., Karma sometimes Hurts wrote:
>>> Greetings,
>>>
>>> I'm trying to set up a transparent proxy on Squid 3.3.8, Ubuntu Trusty
>>> 14.04 from the official APT repository. All boxes, including the Squid
>>> box, are under the same router, but the Squid box is on a different
>>> server than the clients. It seems the configuration on the squid3 box
>>> side is missing something, as a forwarding loop is produced.
>>>
>>> This is the configuration of the squid3 box:
>>>
>>>   visible_hostname squidbox.localdomain.com
>>>   acl SSL_ports port 443
>>>   acl Safe_ports port 80  # http
>>>   acl Safe_ports port 21  # ftp
>>>   acl Safe_ports port 443 # https
>>>   acl Safe_ports port 70  # gopher
>>>   acl Safe_ports port 210 # wais
>>>   acl Safe_ports port 1025-65535  # unregistered ports
>>>   acl Safe_ports port 280 # http-mgmt
>>>   acl Safe_ports port 488 # gss-http
>>>   acl Safe_ports port 591 # filemaker
>>>   acl Safe_ports port 777 # multiling http
>>>   acl CONNECT method CONNECT
>>>   http_access allow all
>>>   http_access deny !Safe_ports
>>>   http_access deny CONNECT !SSL_ports
>>>   http_access allow localhost manager
>>>   http_access deny manager
>>>   http_access allow localhost
>>>   http_access allow all
>>>   http_port 3128 intercept
>>>   http_port 0.0.0.0:3127
>>>
>>> This rule has been added to the client's boxes:
>>>
>>>   iptables -t nat -A OUTPUT -p tcp --dport 80 -j DNAT --to-destination
>>> 192.168.1.100:3128
>>
>> That's the problem. NAT is required on the Squid box *only*.
>>
>>
> Ok, but if NAT is required on the Squid box exclusively, how do I redirect
> all outgoing traffic sent to the port 80 over a client to another box
> (concretely the one where Squid runs) without using such NAT?
> 

Covered in the rest of what I wrote earlier.

Policy routing, i.e. make the default gateway for port-80 traffic from each
client be the Squid box.
 The easiest way to do that is to simply make the Squid box the default
gateway for all clients, and have only the Squid box aware of the real
gateway. This requires the Squid box to be able to handle the full network
traffic load.
 The harder way is setting the default gateway for only port-80 traffic to
be the Squid box, with the rest going to the real gateway.

http://wiki.squid-cache.org/ConfigExamples/Intercept/IptablesPolicyRoute
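The approach on that wiki page boils down to something like the following (a sketch only; the interface name, the mark value, the routing table number, and 192.168.1.100 as the Squid box's address are placeholder assumptions):

```
# On the router: mark port-80 TCP packets, except those from the Squid box
# itself (otherwise Squid's own outgoing requests would loop back to it)
iptables -t mangle -A PREROUTING -p tcp --dport 80 ! -s 192.168.1.100 \
         -j MARK --set-mark 2

# ... and route marked packets via the Squid box instead of the real gateway
ip rule add fwmark 2 table 100
ip route add default via 192.168.1.100 table 100

# On the Squid box only: NAT the intercepted port-80 traffic into Squid
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
         -j REDIRECT --to-ports 3128
```

The key point is that the packets reach the Squid box with their original src/dst IPs intact; only the Squid box's own kernel performs NAT.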


> 
>>>
>>> 192.168.1.100 corresponds to the squid3 box. In the log below
>>> 192.168.1.20 is one of the clients.
>>
>>
>> When receiving intercepted traffic, current Squid validates the
>> destination IP address against the claimed Host: header domain's DNS
>> records, to avoid several nasty security vulnerabilities when connecting
>> to that Host domain. If that fails, the traffic is instead relayed to the
>> original IP:port address in the TCP packet. That address, arriving at
>> your Squid box, was 192.168.1.100:3128 ... rinse, repeat ...
>>
>> Use policy routing, or a tunnel (GRE, VPN, etc) that does not alter the
>> packet src/dst IP addresses to get traffic onto the Squid box.
>>
>>
> I thought packets were not mangled over the same network unless
> specifically done via iptables.

Correct. And you have done exactly that mangling with "-j DNAT" on the
client machines. The Squid box does not have access to those client
machines' kernels to un-mangle it.


> Does that mean that the squid3 box
> currently has trouble resolving the Host domain, i.e. google.com and
> therefore tries relaying to the original packet ip? Seems to resolve it via
> the 'host' or 'ping' commands.
> 

Domains do not always resolve to the same IPs. We see a lot of
false-negative results from Host verification for Google- and Akamai-hosted
domains due to the way they rotate, geo-base, and IP-base their DNS
results in real time. Thus the fallback to the original IP.

Amos


Re: [squid-users] Forwarding loop on squid 3.3.8

2014-08-06 Thread James Michels
OK, but if NAT is expected on the Squid box exclusively, how do I
redirect all the outgoing port-80 traffic from a client to another box
(specifically, the one where Squid runs) without using such NAT?

I thought packets were not mangled on the same network unless that was
specifically done via iptables. Does that mean the squid3 box currently
has trouble resolving the Host domain, e.g. google.com, and therefore
tries relaying to the original packet's IP? It seems to resolve it fine
via the 'host' or 'ping' commands.

Thanks

James

2014-08-06 14:52 GMT+01:00 Amos Jeffries :
> On 7/08/2014 1:26 a.m., Karma sometimes Hurts wrote:
>> Greetings,
>>
>> I'm trying to set up a transparent proxy on Squid 3.3.8, Ubuntu Trusty
>> 14.04 from the official APT repository. All boxes, including the Squid
>> box, are under the same router, but the Squid box is on a different
>> server than the clients. It seems the configuration on the squid3 box
>> side is missing something, as a forwarding loop is produced.
>>
>> This is the configuration of the squid3 box:
>>
>>   visible_hostname squidbox.localdomain.com
>>   acl SSL_ports port 443
>>   acl Safe_ports port 80  # http
>>   acl Safe_ports port 21  # ftp
>>   acl Safe_ports port 443 # https
>>   acl Safe_ports port 70  # gopher
>>   acl Safe_ports port 210 # wais
>>   acl Safe_ports port 1025-65535  # unregistered ports
>>   acl Safe_ports port 280 # http-mgmt
>>   acl Safe_ports port 488 # gss-http
>>   acl Safe_ports port 591 # filemaker
>>   acl Safe_ports port 777 # multiling http
>>   acl CONNECT method CONNECT
>>   http_access allow all
>>   http_access deny !Safe_ports
>>   http_access deny CONNECT !SSL_ports
>>   http_access allow localhost manager
>>   http_access deny manager
>>   http_access allow localhost
>>   http_access allow all
>>   http_port 3128 intercept
>>   http_port 0.0.0.0:3127
>>
>> This rule has been added to the client's boxes:
>>
>>   iptables -t nat -A OUTPUT -p tcp --dport 80 -j DNAT --to-destination
>> 192.168.1.100:3128
>
> That's the problem. NAT is required on the Squid box *only*.
>
>>
>> 192.168.1.100 corresponds to the squid3 box. In the log below
>> 192.168.1.20 is one of the clients.
>
>
> When receiving intercepted traffic, current Squid validates the
> destination IP address against the claimed Host: header domain's DNS
> records, to avoid several nasty security vulnerabilities when connecting
> to that Host domain. If that fails, the traffic is instead relayed to the
> original IP:port address in the TCP packet. That address, arriving at
> your Squid box, was 192.168.1.100:3128 ... rinse, repeat ...
>
> Use policy routing, or a tunnel (GRE, VPN, etc) that does not alter the
> packet src/dst IP addresses to get traffic onto the Squid box.
>
> Amos


Re: [squid-users] Squid as internet traffic monitor

2014-08-06 Thread Amos Jeffries
On 6/08/2014 9:30 p.m., Babelo Gmvsdm wrote:
> Hi,
> 
> I would like to use a Squid Server only as an Internet Traffic Monitor.
> To do this I used an Ubuntu 14.04 with Squid 3.3 on it.
> 
> 
> I plugged the squid on a cisco switch port configured as a monitor 
> destination.
> The port connected to the backbone switch is configured as monitor source.
> I configured the IP of the Squid to be the same as real gateway used by users.
> I configured the squid to be in transparent mode with : http_port 3128 
> intercept
> I put an iptable rule that should forward http packets to the squid on port 
> 3128.
> 
> Unfortunately it does not work.

If I'm reading that right, you now have two boxes using the same gateway
IP for themselves.
 Which one do the packets from the client go to?
 Where do the packets from Squid go when it uses the gateway IP as the
source address?
 Where do the TCP SYN-ACK packets go?

Amos


Re: [squid-users] Re: Configuring WCCPv2, Mask Assignment

2014-08-06 Thread Squid user

Hi Amos.

Understood... thanks.

Then I think the names of the flags are a bit misleading:
they all end in "_hash", even when mask assignment is used.

Also, with respect to that fixed mask, 0x1741: I know that is the 
default value, but it then means there is no way to use a different 
mask.


If the number of cache engines is low, one could imagine a mask of 
just 1 or 2 bits, for instance, so that processing time at the 
router is minimized.


What do you think?

Thanks.




On 08/06/2014 11:16 AM, Amos Jeffries wrote:

On 5/08/2014 12:27 a.m., Squid user wrote:

Hi Amos.

Could you please be more specific?

I cannot find any wccp-related directive in Squid named IIRC or similar.


IIRC = "If I Recall Correctly".
I am basing my answer on code knowledge I gained a year or two back.

Just re-checked the code and confirmed. The flag names on
wccp2_service_info are the same for both hash and mask methods. What
they do is different and hard-coded into Squid.

For mask assignment the static mask of 0x1741 is sent from Squid for
each of the fields you configure a flag for.

http://www.squid-cache.org/Doc/config/wccp2_service_info/


Examples of what you need for your earlier requested config (Sorry about
the line wrap):

   wccp2_service_info 80 protocol=tcp flags=src_ip_hash
priority=240 ports=80

with mask assignment method sets the mask to be 0x1741 on the packet
src-IP when protocol is TCP and dst-port 80.


   wccp2_service_info 90 protocol=tcp flags=dst_ip_hash
priority=240 ports=80

with mask assignment method sets the mask to be 0x1741 on the packet
dst-IP when protocol is TCP and dst-port 80.


Amos



RE: [squid-users] Quick question

2014-08-06 Thread Lawrence Pingree
Interesting, so on ext4 (which is what I am using) there are no performance
differences between using different numbers?


"Convert your dreams to achievable and realistic goals, this way the journey
is satisfying and progressive." - LP

Best regards,
The Geek Guy

Lawrence Pingree
http://www.lawrencepingree.com/resume/

Author of "The Manager's Guide to Becoming Great"
http://www.Management-Book.com
 


-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Tuesday, August 5, 2014 6:28 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Quick question

> -Original Message-
> From: Lawrence Pingree
>
> I have a 175 gigabyte cache file system. What would be the optimal L1
and L2
> cache dirs allocated for this cache size to perform well?
>

On 6/08/2014 11:52 a.m., Lawrence Pingree wrote:
> Anyone?

That depends on the OS filesystem underlying the cache, and on the size of
the objects in it.

The L1/L2 settings matter on filesystems which have a per-directory limit on
inode entries, or which need to scan the full directory list on each file
open/stat event (I think that was FAT32, NTFS, maybe ext2, maybe the old
Unix FS). On filesystems which do not do either of those things, they are
just an admin convenience.
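So for a 175 GB cache on ext4 the exact numbers are not performance-critical, and the common defaults are fine. A sketch (the aufs store type and the path are assumptions, not from the thread):

```
# ~175 GB (175000 MB) cache; 16 L1 and 256 L2 subdirectories is the
# widely used default layout and is adequate on ext4
cache_dir aufs /var/spool/squid 175000 16 256
```

On filesystems with per-directory scan costs one would instead raise L1/L2 so each directory holds fewer files.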

Amos




Re: [squid-users] Forwarding loop on squid 3.3.8

2014-08-06 Thread Amos Jeffries
On 7/08/2014 1:26 a.m., Karma sometimes Hurts wrote:
> Greetings,
> 
> I'm trying to set up a transparent proxy on Squid 3.3.8, Ubuntu Trusty
> 14.04 from the official APT repository. All boxes, including the Squid
> box, are under the same router, but the Squid box is on a different
> server than the clients. It seems the configuration on the squid3 box
> side is missing something, as a forwarding loop is produced.
> 
> This is the configuration of the squid3 box:
> 
>   visible_hostname squidbox.localdomain.com
>   acl SSL_ports port 443
>   acl Safe_ports port 80  # http
>   acl Safe_ports port 21  # ftp
>   acl Safe_ports port 443 # https
>   acl Safe_ports port 70  # gopher
>   acl Safe_ports port 210 # wais
>   acl Safe_ports port 1025-65535  # unregistered ports
>   acl Safe_ports port 280 # http-mgmt
>   acl Safe_ports port 488 # gss-http
>   acl Safe_ports port 591 # filemaker
>   acl Safe_ports port 777 # multiling http
>   acl CONNECT method CONNECT
>   http_access allow all
>   http_access deny !Safe_ports
>   http_access deny CONNECT !SSL_ports
>   http_access allow localhost manager
>   http_access deny manager
>   http_access allow localhost
>   http_access allow all
>   http_port 3128 intercept
>   http_port 0.0.0.0:3127
> 
> This rule has been added to the client's boxes:
> 
>   iptables -t nat -A OUTPUT -p tcp --dport 80 -j DNAT --to-destination
> 192.168.1.100:3128

That's the problem. NAT is required on the Squid box *only*.

> 
> 192.168.1.100 corresponds to the squid3 box. In the log below
> 192.168.1.20 is one of the clients.


When receiving intercepted traffic, current Squid validates the
destination IP address against the claimed Host: header domain's DNS
records, to avoid several nasty security vulnerabilities when connecting
to that Host domain. If that fails, the traffic is instead relayed to the
original IP:port address in the TCP packet. That address, arriving at
your Squid box, was 192.168.1.100:3128 ... rinse, repeat ...

Use policy routing, or a tunnel (GRE, VPN, etc) that does not alter the
packet src/dst IP addresses to get traffic onto the Squid box.
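For comparison, the interception NAT rule belongs on the Squid box itself, not the clients; a rough sketch (eth0 is an assumed LAN interface):

```
# On the Squid box only: redirect incoming port-80 traffic into Squid.
# REDIRECT happens in the local kernel, so Squid can still recover the
# packet's original destination for Host-header verification.
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
         -j REDIRECT --to-ports 3128
```

The client-side DNAT rule from the original post should be removed entirely.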

Amos


[squid-users] Forwarding loop on squid 3.3.8

2014-08-06 Thread Karma sometimes Hurts
Greetings,

I'm trying to set up a transparent proxy on Squid 3.3.8, Ubuntu Trusty
14.04 from the official APT repository. All boxes, including the Squid
box, are under the same router, but the Squid box is on a different
server than the clients. It seems the configuration on the squid3 box
side is missing something, as a forwarding loop is produced.

This is the configuration of the squid3 box:

  visible_hostname squidbox.localdomain.com
  acl SSL_ports port 443
  acl Safe_ports port 80  # http
  acl Safe_ports port 21  # ftp
  acl Safe_ports port 443 # https
  acl Safe_ports port 70  # gopher
  acl Safe_ports port 210 # wais
  acl Safe_ports port 1025-65535  # unregistered ports
  acl Safe_ports port 280 # http-mgmt
  acl Safe_ports port 488 # gss-http
  acl Safe_ports port 591 # filemaker
  acl Safe_ports port 777 # multiling http
  acl CONNECT method CONNECT
  http_access allow all
  http_access deny !Safe_ports
  http_access deny CONNECT !SSL_ports
  http_access allow localhost manager
  http_access deny manager
  http_access allow localhost
  http_access allow all
  http_port 3128 intercept
  http_port 0.0.0.0:3127

This rule has been added to the client's boxes:

  iptables -t nat -A OUTPUT -p tcp --dport 80 -j DNAT --to-destination
192.168.1.100:3128

192.168.1.100 corresponds to the squid3 box. In the log below
192.168.1.20 is one of the clients.

2014/08/06 15:13:05| Starting Squid Cache version 3.3.8 for
x86_64-pc-linux-gnu...
2014/08/06 15:13:27.900| client_side.cc(2316) parseHttpRequest: HTTP
Client local=192.168.1.100:3128 remote=192.168.1.20:54341 FD 8
flags=33
2014/08/06 15:13:27.901| client_side.cc(2317) parseHttpRequest: HTTP
Client REQUEST:
-
GET / HTTP/1.1
Host: www.google.com
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:24.0) Gecko/20100101 Firefox/24.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,es;q=0.8,en-US;q=0.5,en;q=0.3
Accept-Encoding: gzip, deflate
Cookie: 
PREF=ID=119a6e25e6eccb3b:U=95e37afd611b606e:FF=0:TM=1404500940:LM=1404513627:S=r7E-Xed2muOOp-ay;
NID=67=M5geOtyDtp5evLidOfam1uzfhl6likehxjXo7KcamK8c5jXptfx9zJc-5L7jhvYvnfTvtXYJ3yza7cE8fRq2x0iyVEHN9Pn2hz9urrC_Qt_xNH6IQCoT-3-eXTwb2h4f;
OGPC=5-25:
Connection: keep-alive
Pragma: no-cache
Cache-Control: no-cache

--
2014/08/06 15:13:27.902| http.cc(2204) sendRequest: HTTP Server
local=192.168.1.100:43140 remote=192.168.1.100:3128 FD 11 flags=1
2014/08/06 15:13:27.902| http.cc(2205) sendRequest: HTTP Server REQUEST:
-
GET / HTTP/1.1
Host: www.google.com
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:24.0) Gecko/20100101 Firefox/24.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,es;q=0.8,en-US;q=0.5,en;q=0.3
Accept-Encoding: gzip, deflate
Cookie: 
PREF=ID=119a6e25e6eccb3b:U=95e37afd611b606e:FF=0:TM=1404500940:LM=1404513627:S=r7E-Xed2muOOp-ay;
NID=67=M5geOtyDtp5evLidOfam1uzfhl6likehxjXo7KcamK8c5jXptfx9zJc-5L7jhvYvnfTvtXYJ3yza7cE8fRq2x0iyVEHN9Pn2hz9urrC_Qt_xNH6IQCoT-3-eXTwb2h4f;
OGPC=5-25:
Via: 1.1 squidbox.localdomain.com (squid/3.3.8)
Connection: keep-alive
Pragma: no-cache
Cache-Control: no-cache

--
2014/08/06 15:13:27.902| client_side.cc(2316) parseHttpRequest: HTTP
Client local=192.168.1.100:3128 remote=192.168.1.100:43140 FD 13
flags=33
2014/08/06 15:13:27.902| client_side.cc(2317) parseHttpRequest: HTTP
Client REQUEST:
-
GET / HTTP/1.1
Host: www.google.com
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:24.0) Gecko/20100101 Firefox/24.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,es;q=0.8,en-US;q=0.5,en;q=0.3
Accept-Encoding: gzip, deflate
Cookie: 
PREF=ID=119a6e25e6eccb3b:U=95e37afd611b606e:FF=0:TM=1404500940:LM=1404513627:S=r7E-Xed2muOOp-ay;
NID=67=M5geOtyDtp5evLidOfam1uzfhl6likehxjXo7KcamK8c5jXptfx9zJc-5L7jhvYvnfTvtXYJ3yza7cE8fRq2x0iyVEHN9Pn2hz9urrC_Qt_xNH6IQCoT-3-eXTwb2h4f;
OGPC=5-25:
Via: 1.1 squidbox.localdomain.com (squid/3.3.8)
Connection: keep-alive
Pragma: no-cache
Cache-Control: no-cache

--
2014/08/06 15:13:27.903| client_side.cc(1377) sendStartOfMessage: HTTP
Client local=192.168.1.100:3128 remote=192.168.1.100:43140 FD 13
flags=33
2014/08/06 15:13:27.903| client_side.cc(1378) sendStartOfMessage: HTTP
Client REPLY:
-
HTTP/1.1 403 Forbidden
Server: squid/3.3.8
Mime-Version: 1.0
Date: Fri, 18 Jul 2014 10:33:27 GMT
Content-Type: text/html
Content-Length: 3932
X-Squid-Error: ERR_ACCESS_DENIED 0
Vary: Accept-Language
Content-Language: en-US
X-Cache: MISS from squidbox.localdomain.com
X-Cache-Lookup: MISS from squidbox.localdomain.com:3127
Via: 1.1 squidbox.localdomain.com (squid/3.3.8)
Connection: keep-alive

--
2014/08/06 15:13:27.903| ctx: enter level  0: 'http://www.google.com/'
2014/08/06 15:13:27.903| http.cc(761) processReplyHeader: HTTP Server
local=192.168.1.100:43140 remote=192.168.1.100:3128 FD 11 fla

[squid-users] Squid as internet traffic monitor

2014-08-06 Thread Babelo Gmvsdm
Hi,

I would like to use a Squid server purely as an Internet traffic monitor.
To do this I used Ubuntu 14.04 with Squid 3.3 on it.


I plugged the Squid box into a Cisco switch port configured as a monitor
destination. The port connected to the backbone switch is configured as the
monitor source. I configured the IP of the Squid box to be the same as the
real gateway used by the users. I configured Squid to be in transparent mode
with: http_port 3128 intercept. I put an iptables rule that should forward
HTTP packets to Squid on port 3128.

Unfortunately it does not work.

The access.log does not populate unless I access the Squid IP on port 3128
directly.

Any ideas? Am I wasting my time? Has anybody already tried this kind of
thing?

Many thanks in advance for your answers.

Bye   

Re: [squid-users] Re: Configuring WCCPv2, Mask Assignment

2014-08-06 Thread Amos Jeffries
On 5/08/2014 12:27 a.m., Squid user wrote:
> Hi Amos.
> 
> Could you please be more specific?
> 
> I cannot find any wccp-related directive in Squid named IIRC or similar.

IIRC = "If I Recall Correctly".
I am basing my answer on code knowledge I gained a year or two back.

Just re-checked the code and confirmed. The flag names on
wccp2_service_info are the same for both hash and mask methods. What
they do is different and hard-coded into Squid.

For mask assignment the static mask of 0x1741 is sent from Squid for
each of the fields you configure a flag for.

http://www.squid-cache.org/Doc/config/wccp2_service_info/


Examples of what you need for your earlier requested config (Sorry about
the line wrap):

  wccp2_service_info 80 protocol=tcp flags=src_ip_hash
priority=240 ports=80

with mask assignment method sets the mask to be 0x1741 on the packet
src-IP when protocol is TCP and dst-port 80.


  wccp2_service_info 90 protocol=tcp flags=dst_ip_hash
priority=240 ports=80

with mask assignment method sets the mask to be 0x1741 on the packet
dst-IP when protocol is TCP and dst-port 80.


Amos


Re: [squid-users] unbound and squid not resolving SSL sites

2014-08-06 Thread Amos Jeffries
On 5/08/2014 1:13 p.m., sq...@proxyplayer.co.uk wrote:
> In my network I have unbound redirecting some sites through the proxy
> server and checking authentication. If I redirect www.thisite.com it
> works correctly. However, as soon as SSL is used (https://www.thissite.com)
> it doesn't resolve at all. Any ideas what I have to do to enable SSL
> redirects in unbound or squid?

Handle port 443 traffic and the encrypted traffic there.
You are only receiving port 80 traffic in this config file.
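If decrypting the intercepted traffic is acceptable, one way to receive port-443 traffic is an https_port with ssl-bump; a very rough sketch for Squid 3.3 (the certificate path is a placeholder, Squid must be built with SSL support, and clients must trust that CA):

```
# Receive intercepted HTTPS and decrypt it so Squid can see the requests
https_port 3129 intercept ssl-bump \
    generate-host-certificates=on cert=/etc/squid/myCA.pem
ssl_bump server-first all
```

Port-443 traffic also has to be steered to 3129 by the same redirection mechanism currently used for port 80.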


There are other problems in the config file displayed. Notes inline.

> 
> squid.conf
> #
> # Recommended minimum configuration:
> #
> acl manager proto cache_object
> acl localhost src 127.0.0.1/32 ::1
> acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
> 
> # Example rule allowing access from your local networks.
> # Adapt to list your (internal) IP networks from where browsing
> # should be allowed
> acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
> acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
> acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
> acl localnet src fc00::/7# RFC 4193 local private network range
> acl localnet src fe80::/10# RFC 4291 link-local (directly
> plugged) machines
> 
> acl SSL_ports port 443
> acl Safe_ports port 80  # http
> acl Safe_ports port 21  # ftp
> acl Safe_ports port 443 # https
> acl Safe_ports port 70  # gopher
> acl Safe_ports port 210 # wais
> acl Safe_ports port 1025-65535  # unregistered ports
> acl Safe_ports port 280 # http-mgmt
> acl Safe_ports port 488 # gss-http
> acl Safe_ports port 591 # filemaker
> acl Safe_ports port 777 # multiling http
> acl CONNECT method CONNECT
> 

You should erase all of the lines above. They are duplicated below.

> #
> # Recommended minimum Access Permission configuration:
> #
> # Only allow cachemgr access from localhost
> http_access allow manager localhost
> http_access deny manager
> 

NOTE: The current best-practice recommendation is to place the manager
access-control lines after the CONNECT one below. That saves a couple of
slow regex evaluations during certain types of DoS attacks.
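In other words, the recommended ordering looks roughly like this:

```
# Cheap port checks first, manager (with its regex-backed proto test) after
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access deny manager
```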

> # Deny requests to certain unsafe ports
> http_access deny !Safe_ports
> 
> # Deny CONNECT to other than secure SSL ports
> http_access deny CONNECT !SSL_ports
> 
> # We strongly recommend the following be uncommented to protect innocent
> acl manager proto cache_object
> acl localhost src 127.0.0.1/32 ::1
> acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
> 
> acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
> acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
> acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
> acl localnet src fc00::/7# RFC 4193 local private network range
> acl localnet src fe80::/10# RFC 4291 link-local (directly
> plugged) machines
> 
> acl SSL_ports port 443
> acl Safe_ports port 80  # http
> acl Safe_ports port 21  # ftp
> acl Safe_ports port 443 # https
> acl Safe_ports port 70  # gopher
> acl Safe_ports port 210 # wais
> acl Safe_ports port 1025-65535  # unregistered ports
> acl Safe_ports port 280 # http-mgmt
> acl Safe_ports port 488 # gss-http
> acl Safe_ports port 591 # filemaker
> acl Safe_ports port 777 # multiling http
> acl CONNECT method CONNECT
> 
> http_access allow manager localhost
> http_access deny manager
> http_access deny !Safe_ports
> http_access deny CONNECT !SSL_ports

NP: these four lines above now occur three times in a row in your
http_access rules. Only the first occurrence has any useful effect;
the rest just waste processing time.

> 
> external_acl_type time_squid_auth ttl=5 %SRC /usr/local/bin/squidauth

What exactly does this helper do to earn the term "authentication"?
A TCP/IP address alone is insufficient to verify the end-user's identity.


> acl interval_auth external time_squid_auth
> http_access allow interval_auth
> http_access deny all
> http_port 80 accel vhost allow-direct
> hierarchy_stoplist cgi-bin ?
> coredump_dir /var/spool/squid
> 
> refresh_pattern ^ftp:           1440    20%     10080
> refresh_pattern ^gopher:        1440    0%      1440
> refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
> refresh_pattern .               0       20%     4320
> 

Amos


Re: [squid-users] https url filter issue

2014-08-06 Thread Amos Jeffries
On 6/08/2014 6:20 p.m., Sucheta Joshi wrote:
> Hi,
> 
> We are using the Facebook share API in our application, for which users
> need to log in via the main site. If I need to allow the following URL,
> without giving users full access to Facebook, how can I do it?
> 
> https://www.facebook.com/dialog/oauth?client_id=206510072861784&response_typ
> e=code&redirect_uri=http://app.ripplehire.com/ripplehire/connect/facebook&sc
> ope=publish_stream
> 
> I don't have the option of dstdom_regex here, as it is the main site.
> 
> I am able to do this filtering in other proxies using a keyword like my
> client id "206510072861784", so that only my API call is allowed and not
> the whole site.
> 
> How do I do this in Squid?

The only way to see any details of the URL path in HTTPS traffic is
to configure ssl-bump and MITM-decrypt the TLS/SSL traffic.
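Once the traffic is bumped and decrypted, the usual acl types can match the full URL. A sketch using the client_id from the question (the acl name is arbitrary, and this only works on already-bumped requests):

```
# Allow only this app's OAuth dialog requests on www.facebook.com;
# other facebook.com requests fall through to later (deny) rules
acl fb_oauth url_regex -i ^https://www\.facebook\.com/dialog/oauth\?.*client_id=206510072861784
http_access allow fb_oauth
```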

Amos