Re: [squid-users] Re: access denied

2014-07-08 Thread Amos Jeffries

On 2014-07-08 16:41, winetbox wrote:

sorry for being out of topic, since my squid configuration is here, and
squid's experts are already here, i'd like to ask about my cache 
config.


NP: this is an email mailing list. All posted mails reach the 
experts. It helps us a lot in managing the flow of requests if they are 
all titled/threaded properly. Worst case, if the question that 
originally started this thread is resolved, my mailer will auto-delete 
the closed thread's side-trails and you then lose all help on this 
hijacked request.




*. how efficient is my cache config?

cache_mem 1024 MB
maximum_object_size_in_memory 2048 KB
memory_replacement_policy heap LFUDA
cache_replacement_policy heap LRU
cache_dir ufs /mnt/cache/cache1 8000 16 256
cache_dir ufs /mnt/cache/cache2 8000 16 256
cache_dir ufs /mnt/cache/cache3 8000 16 256
cache_dir ufs /mnt/cache/cache4 8000 16 256
maximum_object_size 1024 MB


IMO, having the global object limit smaller than the in-memory object 
size limit is not a good idea. It prevents many useful but large 
memory-cached objects from being pushed to disk temporarily when the 
memory cache fills up.


If you have a version of Squid with rock cache type available you will 
see much faster disk hits by using a rock cache for small objects.
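
For example, a rough sketch of that split (paths and sizes here are 
placeholders, and the min-size/max-size options assume a Squid new enough 
to support rock, i.e. 3.2 or later):

  # small objects (up to 32 KB) go to a rock store, larger ones stay on UFS
  cache_dir rock /mnt/cache/rock1 4000 max-size=32768
  cache_dir ufs /mnt/cache/cache1 8000 16 256 min-size=32769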




*. is there anything i should adjust? because i mostly get 
TCP_MEM_HIT/200, and TCP_HIT is extremely rare (which i believe are 
objects cached on disk).


This is kind of good. It means most of your HITs are extremely fast. 
Whether you could improve the HIT ratio based on that alone is 
difficult to answer, because disk objects small enough for the memory 
cache get loaded into the memory cache on their first TCP_HIT and 
further uses become TCP_MEM_HIT.


IMHO, you get better results concentrating more on the HIT/MISS ratio 
than the HIT/MEM_HIT ratio.



*. and even if i do make an adjustment on cache_replacement_policy 
later, does

this mean i have to rebuild the cache dirs?


No, just a restart of Squid is required. The replacement policy data is 
generated when loading the cache_dir indexes. The locations of objects 
on disk are not changed.
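
As a minimal illustration (standard command names; adjust for your init 
system), after editing cache_replacement_policy in squid.conf:

  squid -k parse      # sanity-check the edited configuration
  squid -k shutdown   # stop Squid
  squid               # start again; the existing cache_dir contents are reused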


Amos


Re: [squid-users] Handling client-side request floods

2014-07-08 Thread Amos Jeffries

On 2014-07-08 13:17, Dan Charlesworth wrote:

Hey folks

So I support a bunch of Squid deployments and every so often I’ll get
a call about poor performance, very large access log files, etc.

Oftentimes as soon as I crack open the access log I see there’s a
handful of machines (sometimes just one) practically DoSing the proxy
with failed requests (failing because the client app won’t comply with
proxy authentication).

Here’s a recent example of one of these bugs from Google Chrome:
https://code.google.com/p/chromium/issues/detail?id=373181

So I just wanted to see if anyone had any advice or suggestions for
dealing with this kind of thing. I’m guessing iptables would be the
logical place to try and prevent it, but I wouldn’t know where to
start with rate limiting in iptables…

Anyone care to share?


Andrew Beverley's QoS and traffic shaping documentation 
(http://andybev.com/index.php/Main_Page) is probably the best place to 
look for iptables-based solutions, with the official netfilter 
documentation coming in second.
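
As one rough iptables sketch (the port and thresholds are placeholders, 
assuming the proxy listens on 3128): the 'recent' match can drop further 
new connections from any single source that opens more than ~30 per minute. 
Make sure these come before any rule that generally accepts proxy traffic:

  iptables -A INPUT -p tcp --dport 3128 --syn -m recent --name proxyflood --set
  iptables -A INPUT -p tcp --dport 3128 --syn -m recent --name proxyflood \
           --update --seconds 60 --hitcount 30 -j DROP

Note this blocks at the TCP level, so a flooding client sees connection 
failures rather than proxy authentication errors.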


Squid-3.5 is coming with a new helper (ext_delayer_acl) which can be 
configured to help in this type of situation. For older Squid versions 
you can download the perl script from 
http://bazaar.launchpad.net/~squid/squid/trunk/files/head:/helpers/external_acl/delayer/ 
- documentation for it is inside the script.
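
A rough sketch of wiring the helper in (the install path and the -w 
wait-time flag are assumptions on my part - the script's embedded 
documentation is authoritative):

  external_acl_type delayer children-max=5 concurrency=100 %URI /usr/lib/squid/ext_delayer_acl -w 1000
  acl delayed external delayer
  # reference "delayed" in an http_access line covering the offending clients;
  # each lookup then stalls the request ~1 second before the normal allow/deny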



Amos


RE: [squid-users] Why squid show IP in access log for transparent proxy?‏

2014-07-08 Thread Amos Jeffries

On 2014-07-08 17:36, Nil Nik wrote:
I am NOT looking for client IP or host. I am looking for target server 
IP.

In case of 'ssl_bump none' squid access log shows IP of server instead
of domain.



Nil Nik,
  The answer to your original question is that Squid only has the TCP/IP 
packet details to work with in intercepted traffic, particularly with 
port 443 traffic which has not been decrypted to get the Host header 
details.




log_fqdn on is not useful for me.


For the record, this option is not even supported by Squid-3.2 and later. 
People using it should move to using %>A in a custom log format instead.


The proper way to log rDNS details is with the %>A and %<A log tokens in 
a custom logformat.


The %<A format token is the one needed to log the server rDNS record. 
However, it is important to be aware that the rDNS record is often 
different from the URL domain name being fetched by the client. The 
server IP address is far more accurate and reliable for both debugging 
and reporting.
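
For instance, a custom format based on the default 'squid' format but with 
the server rDNS name (%<A) in place of the server IP (%<a) - a sketch only, 
so check the format codes against your Squid version's documentation:

  logformat squidfqdn %ts.%03tu %6tr %>a %Ss/%03>Hs %<st %rm %ru %un %Sh/%<A %mt
  access_log /var/log/squid/access.log squidfqdn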


Amos





From: antony.st...@squid.open.source.it
To: squid-users@squid-cache.org
Date: Mon, 7 Jul 2014 20:14:40 +0200
Subject: Re: [squid-users] Why squid show IP in access log for 
transparent proxy?‏


On Monday 07 July 2014 at 19:44:34, Mark jensen wrote:


to show the domain name instead of IP:

One method would be to make use of this directive in the squid.conf 
file to

get the log file to show FQDNs instead of the IPs: log_fqdn on


That's for looking up the hostnames of clients connecting to the 
proxy.


i got the impression the original question was about the target server 
IP addresses appearing in the logfiles, instead of their DNS names.


this is a good link which may help you:

http://unix.stackexchange.com/questions/134132/how-can-we-make-squid-do-a-reverse-nslookup



Regards,


Antony.

--
This email was created using 100% recycled electrons.

Please reply to the list;
please don't CC me.


RE: [squid-users] squid: Memory utilization higher than expected since moving from 3.3 to 3.4 and Vary: working

2014-07-08 Thread Martin Sperl
Well - there are 4 points here:
a) this is gradual behavior that takes months to accumulate
b) the extra used memory is much bigger than the cache (approx. 1.5x), which looks 
wrong to me, as the other parameters you have given (disk cache, ...) are not
c) we have not seen this behavior with 3.3; we switched to 3.4 two months ago 
and it has been running since then, showing this increase.
d) if you look at the memory report (mgr:mem) which I have shared, you find: 
900MB for 542k HttpHeaderEntries and 2GB for 791k Short_Strings - and that is 
almost 3GB out of the 4.2GB for this report alone.

So I was guessing there was some sort of memory allocation bug or something - 
maybe something to do with the now-working handling of Vary headers - that 
triggers this memory leak.
To me it looks as if those Vary nodes are not cleaned up properly and that is 
why memory is slowly increasing...

There are a few things in particular in the mgr:info block that made me suspicious:
Memory usage for squid via mallinfo():
Total space in arena:  -2064964 KB
Ordinary blocks:   2047465 KB 161900 blks
Small blocks:   0 KB  0 blks
Holding blocks: 38432 KB  9 blks
Free Small blocks:  0 KB
Free Ordinary blocks:   81875 KB
Total in use:   81875 KB -4%
Total free: 81875 KB -4%
Total size:-2026532 KB
Memory accounted for:
Total accounted:   1269447 KB -63%
memPool accounted: 9658055 KB -477%
memPool unaccounted:   -11684587 KB  -0%
memPoolAlloc calls: 507567155336
memPoolFree calls:  508713005568
Especially those negative numbers and percentages - but maybe that is just a 
reporting bug which, if so, is hindering debugging of the real thing.

Thanks,
Martin

P.s: it would be helpful if the output of mgr:mem would be formatted in a way 
that would allow it to get easily pasted into Excel or similar - no spaces in 
pool names would be sufficient...


-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Monday, 07 July 2014 15:40
To: squid-users@squid-cache.org
Subject: Re: [squid-users] squid: Memory utilization higher than expected since 
moving from 3.3 to 3.4 and Vary: working

On 2014-07-07 20:46, Martin Sperl wrote:
 Hi!
 
 We have found out that since we moved from squid 3.3 to 3.4.3 (and the
 corresponding Vary) the memory utilization of squid has increased.
 It is 10GB right now (9.6G in memory), but we have only configured 4GB
 for in-memory caching.

Yes. 4GB for memory caching *only*, and all your reports confirm the RAM 
cache allocated 4GB worth of memory blocks for caching purposes.

But Squid uses memory for other things as explained by the FAQ.

http://wiki.squid-cache.org/SquidFaq/SquidMemory#I_set_cache_mem_to_XX.2C_but_the_process_grows_beyond_that.21

http://wiki.squid-cache.org/SquidFaq/SquidMemory#Why_does_Squid_use_so_much_memory.21.3F

Amos




Re: [squid-users] TPROXY Squid Error.

2014-07-08 Thread Info OoDoO
Thanks Hassan,

I have covered all the steps except the WCCP configuration, because I don't
use a WCCP router. I tried looking for a routing loop and was unable
to find one. Could you please help me with how to find a routing loop?

Here is my Squid Conf and my TCPdump sample.

http://pastebin.com/aJskfywx -- TCPdump
http://pastebin.com/b9u24rEC -- Squid Conf

Thanks,
Ganesh J


On Tue, Jul 8, 2014 at 2:55 AM, Nyamul Hassan nya...@gmail.com wrote:
 Did you check the possibility of a routing loop as described in the
 troubleshooting section of the TProxy wiki page?  In fact, can you
 check that you have covered all the steps mentioned in that section?

 Regards
 HASSAN

 On Tue, Jul 8, 2014 at 2:37 AM, Info OoDoO i...@oodoo.co.in wrote:
 Thanks Hassan,

 Now the request are passing through Squid but Failing with 110
 Connection Timed Out Error.

 When I use transparent Mode its working fine. Any Idea..!!

 Thanks,
 Ganesh J
 Thanks,
 OodoO Fiber,
 +91 8940808080
 www.oodoo.co.in


 On Tue, Jul 8, 2014 at 1:16 AM, Nyamul Hassan nya...@gmail.com wrote:
 Hi Ganesh,

 In your basic data pastebin, seems like the ip rule and ip route
 rules are missing.

 Please see if running the following commands helps the situation:
 * echo 100 squidtproxy >> /etc/iproute2/rt_tables
 * ip rule add fwmark 1 lookup 100
 * ip route add local default dev lo table 100

 Regards
 HASSAN


 On Tue, Jul 8, 2014 at 1:15 AM, Nyamul Hassan nya...@gmail.com wrote:
 Can you also pastebin your squid.conf?

 Regards
 HASSAN

 On Tue, Jul 8, 2014 at 12:53 AM, collect oodoo coll...@oodoo.co.in wrote:
 I have configured squid with the options in the below paste ..
 http://pastebin.com/jFhzd3qj
 IP packets are being forwarded from the cache box to the internet and i'm
 able to see the client public address instead of the squid box public
 address..
 the Issue here is the requests are not being forwarded by or through 
 Squid..
 I'm unable to view any log for the request on access.log.
 If i use the same squid in transparent mode then I'm able to view the
 requests forwarded and logged on access.log but it shows Squid Box
 Public IP address.
 Can some body Help me on this..
 My basic Data of Machine is

 http://pastebin.com/TdnhnJtx

 Thanks,
 Ganesh J


[squid-users] Re: Handling client-side request floods

2014-07-08 Thread babajaga
Rate limit using iptables
http://thelowedown.wordpress.com/2008/07/03/iptables-how-to-use-the-limits-module/
seems to be the simplest solution for an upper limit of requests/time.

Practically, you want the same as an administrator, who wants to protect his
web server against a DoS attack by means of a flood of incoming
http-requests. So you might also google for apache request limit or
similar.
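
A minimal sketch along those lines (proxy port and rates are placeholders; 
note the limit match is global rather than per client, so a per-source match 
like hashlimit or recent may fit better when one machine is the culprit):

  iptables -A INPUT -p tcp --dport 3128 --syn -m limit --limit 2/second --limit-burst 20 -j ACCEPT
  iptables -A INPUT -p tcp --dport 3128 --syn -j DROP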






Re: [squid-users] TPROXY Squid Error.

2014-07-08 Thread Nyamul Hassan
tcpdump shows traffic flowing both ways, which is good.  We also need
to have the following settings:

#  sysctl.conf
net.ipv4.ip_forward = 1
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.eth0.rp_filter = 0
net.ipv4.conf.eth1.rp_filter = 0

The last two lines are for my specific system where I have two NICs.
Feel free to modify on your own.  After changing the file running
sysctl -p usually works.  To check if it did, please run the
following commands:

find /proc/sys/net/ipv4/ -iname rp_filter
find /proc/sys/net/ipv4/ -iname rp_filter -exec cat {} +

The first shows all the rp_filter in your system.
The second shows if they are indeed set to 0 as needed.

Please do a pastebin for both sysctl.conf and the outputs of the find commands.

Regards
HASSAN


On Tue, Jul 8, 2014 at 2:34 PM, Info OoDoO i...@oodoo.co.in wrote:
 Thanks Hassan,

 I have covered all the steps except the WCCP Configuration, Coz i dont
 use WCCP Router. I tried discovering for Routing loop and was unable
 to find any, Could you please help me How to Find a Routing loop.

 Here is my Squid Conf and my TCPdump sample.

 http://pastebin.com/aJskfywx -- TCPdump
 http://pastebin.com/b9u24rEC -- Squid Conf

 Thanks,
 Ganesh J


 On Tue, Jul 8, 2014 at 2:55 AM, Nyamul Hassan nya...@gmail.com wrote:
 Did you check the possibility of a routing loop as described in the
 troubleshooting section of the TProxy wiki page?  In fact, can you
 check that you have covered all the steps mentioned in that section?

 Regards
 HASSAN

 On Tue, Jul 8, 2014 at 2:37 AM, Info OoDoO i...@oodoo.co.in wrote:
 Thanks Hassan,

 Now the request are passing through Squid but Failing with 110
 Connection Timed Out Error.

 When I use transparent Mode its working fine. Any Idea..!!

 Thanks,
 Ganesh J
 Thanks,
 OodoO Fiber,
 +91 8940808080
 www.oodoo.co.in


 On Tue, Jul 8, 2014 at 1:16 AM, Nyamul Hassan nya...@gmail.com wrote:
 Hi Ganesh,

 In your basic data pastebin, seems like the ip rule and ip route
 rules are missing.

 Please see if running the following commands helps the situation:
 * echo 100 squidtproxy >> /etc/iproute2/rt_tables
 * ip rule add fwmark 1 lookup 100
 * ip route add local default dev lo table 100

 Regards
 HASSAN


 On Tue, Jul 8, 2014 at 1:15 AM, Nyamul Hassan nya...@gmail.com wrote:
 Can you also pastebin your squid.conf?

 Regards
 HASSAN

 On Tue, Jul 8, 2014 at 12:53 AM, collect oodoo coll...@oodoo.co.in 
 wrote:
 I have configured squid with the options in the below paste ..
 http://pastebin.com/jFhzd3qj
 I packets are being forwarded from the cache box to internet and i'm
 able to see the Client Public address instaed of squid Box Public
 Address..
 the Issue here is the requests are not being forwarded by or through 
 Squid..
 I'm unable to view any log for the request on access.log.
 If i use the same squid in transparent mode then I'm able to view the
 requests forwarded and logged on access.log but it shows Squid Box
 Public IP address.
 Can some body Help me on this..
 My basic Data of Machine is

 http://pastebin.com/TdnhnJtx

 Thanks,
 Ganesh J


RE: [squid-users] squid: Memory utilization higher than expected since moving from 3.3 to 3.4 and Vary: working

2014-07-08 Thread Amos Jeffries

On 2014-07-08 19:38, Martin Sperl wrote:

Well - there are 4 Points here:
a) this is gradual behavior that takes month to accumulate
b) extra used memory is much bigger than Cache (1.5x aprox), which
looks wrong to me as the other parameters you have given
(diskcache,...) are not
c) we have not seen this behavior with 3.3 and have just switched to
3.4 two month ago and it is running since then showing this increase.
d) if you look at the memory report (mgr:mem) which I have shared you
find: 900MB for 542k HttpHeaderEntries and 2GB for 791k Short_Strings
- and that is almost 3 out of 4.2G  for this report alone.

So I was guessing there was some sort of memory allocation bug or
something - maybe something to do with the now working handling of
vary headers, that triggers this memory leak.
To me it looks as if those Vary nodes are not clean up properly and
that is why we are slowly increasing memory...



Hmm. Yes that does sound like what we call a pseudo-leak (same effects 
as a memory leak, but not an actual leak since the objects are still 
being reference-counted by something). The headers and HttpHeaderEntries 
are stored as collections of Strings. The HttpHeaderEntries are 
ref-counted objects used by HttpRequest and HttpReply transaction 
objects, which are themselves ref-counted. If there are a lot of those 
being held onto for no reason this needs to be tracked down.


Are you in a position to test the 3.HEAD code to see if it is something 
we fixed already there or a new issue to debug from scratch?



There are specially a few things in the msg:info block that made me 
suspicious:

Memory usage for squid via mallinfo():
Total space in arena:  -2064964 KB
Ordinary blocks:   2047465 KB 161900 blks
Small blocks:   0 KB  0 blks
Holding blocks: 38432 KB  9 blks
Free Small blocks:  0 KB
Free Ordinary blocks:   81875 KB
Total in use:   81875 KB -4%
Total free: 81875 KB -4%
Total size:-2026532 KB
Memory accounted for:
Total accounted:   1269447 KB -63%
memPool accounted: 9658055 KB -477%
memPool unaccounted:   -11684587 KB  -0%
memPoolAlloc calls: 507567155336
memPoolFree calls:  508713005568
Especially those negative Numbers and percentages - but maybe that is
just a reporting bug, which - if so - is hindering debugging the real
thing


64-bit mallinfo() bugs produce corrupt output. The only uncorrupted 
values above are the memPool accounted KB size and probably the calls 
counts.





Thanks,
Martin

P.s: it would be helpful if the output of mgr:mem would be formatted
in a way that would allow it to get easily pasted into Excel or
similar - no spaces in pool names would be sufficient...



The report is a TSV spreadsheet. If you save it directly to a file and 
then import it into Excel, selecting Tab as the separator instead of the 
default comma (CSV), it loads as a nice spreadsheet.
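
For example (assuming squidclient is installed and the cache manager 
answers on localhost:3128):

  squidclient -h 127.0.0.1 -p 3128 mgr:mem > mem-report.tsv
  # then open mem-report.tsv in Excel and pick Tab as the separator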


Amos



[squid-users] Re: split the connexion using Squid

2014-07-08 Thread Yemen SAYOUR



Hello Dear,

Thank you for taking your time to help me with my problem.

I have a DSL modem with a connection of up to 6 MB/s, and 30 PCs are 
connected to the network at the same time via cables and WiFi. The 
problem is that the users cannot share the connection equally.


My question: how can I limit the connection between PCs using Squid, 
given that some PCs are connected via WiFi (so that each PC gets a 
portion of the bandwidth)?


I have a Debian Wheezy as OS for servers and Ubuntu 12.04 for clients.

Thank you




--
Yemen SAYOUR
Responsable Technique Locale
Campus Numérique Francophone (CNF) Tripoli
Agence universitaire de la Francophonie
Tél : +(961) 6 205 280
Fax : +(961) 6 205 281
yemen.say...@auf.org



RE: [squid-users] squid: Memory utilization higher than expected since moving from 3.3 to 3.4 and Vary: working

2014-07-08 Thread Martin Sperl
The problem is that it is a slow leak - it takes some time (months) to find 
it...
Also it only happens on real live traffic with high volume plus high 
utilization of Vary:.
Moving our prod environment to HEAD would be quite a political issue inside our 
organization.
Arguing to go to the latest stable version 3.4.6 would be possible, but I doubt 
it would change a thing.

In the meantime we have not restarted the squids yet, so we still have a bit of 
information available if needed.
But we cannot keep it up in this state much longer.

I created a core dump, but analyzing it is hard.

Here are the top strings from that 10GB core file - taken via: strings corefile | 
sort | uniq -c | sort -rn | head -20.
This may give you some idea:
2071897 =0.7
1353960 Keep-Alive
1343528 image/gif
 877129 HTTP/1.1 200 OK
 855949  GMT
 852122 Content-Type
 851706 HTTP/
 851371 Date
 850485 Server
 848027 IEND
 821956 Content-Length
 776359 Content-Type: image/gif
 768935 Cache-Control
 760741 ETag
 743341 live
 720255 Connection
 677920 Connection: Keep-Alive
 676108 Last-Modified
 662765 Expires
 585139 X-Powered-By: Servlet/2.4 JSP/2.0

Another thing I thought we could do is:
* restart squids
* run mgr:mem every day and compare the daily changes for all the values 
(maybe others?) - e.g. with a daily snapshot like the sketch below
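
A rough snapshot-and-compare sketch for that (assuming squidclient can reach 
the cache manager on localhost:3128; the paths and dates are placeholders):

  squidclient -h 127.0.0.1 -p 3128 mgr:mem > /var/log/squid/mem.$(date +%F).tsv
  # after a few days, diff two snapshots to see which pools keep growing:
  diff /var/log/squid/mem.2014-07-08.tsv /var/log/squid/mem.2014-07-09.tsv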

Any other ideas how to find the issue?

Martin




-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Tuesday, 08 July 2014 11:40
To: squid-users@squid-cache.org
Subject: RE: [squid-users] squid: Memory utilization higher than expected since 
moving from 3.3 to 3.4 and Vary: working

On 2014-07-08 19:38, Martin Sperl wrote:
 Well - there are 4 Points here:
 a) this is gradual behavior that takes month to accumulate
 b) extra used memory is much bigger than Cache (1.5x aprox), which
 looks wrong to me as the other parameters you have given
 (diskcache,...) are not
 c) we have not seen this behavior with 3.3 and have just switched to
 3.4 two month ago and it is running since then showing this increase.
 d) if you look at the memory report (mgr:mem) which I have shared you
 find: 900MB for 542k HttpHeaderEntries and 2GB for 791k Short_Strings
 - and that is almost 3 out of 4.2G  for this report alone.
 
 So I was guessing there was some sort of memory allocation bug or
 something - maybe something to do with the now working handling of
 vary headers, that triggers this memory leak.
 To me it looks as if those Vary nodes are not clean up properly and
 that is why we are slowly increasing memory...
 

Hmm. Yes that does sound like what we call a pseudo-leak (same effects 
as a memory leak, but not an actual leak since the objects are still 
being reference-counted by something). The headers and HttpHeaderEntries 
are stored as collections of String's. The HttpHeaderEntries are 
ref-counted objects used by HttpRequest and HttpReply transaction 
objects, which are themselves ref-counted. If there are a lot of those 
being held onto for no reason this needs to be tracked down.

Are you in a position to test the 3.HEAD code to see if it is something 
we fixed already there or a new issue to debug from scratch?


 There are specially a few things in the msg:info block that made me 
 suspicious:
 Memory usage for squid via mallinfo():
 Total space in arena:  -2064964 KB
 Ordinary blocks:   2047465 KB 161900 blks
 Small blocks:   0 KB  0 blks
 Holding blocks: 38432 KB  9 blks
 Free Small blocks:  0 KB
 Free Ordinary blocks:   81875 KB
 Total in use:   81875 KB -4%
 Total free: 81875 KB -4%
 Total size:-2026532 KB
 Memory accounted for:
 Total accounted:   1269447 KB -63%
 memPool accounted: 9658055 KB -477%
 memPool unaccounted:   -11684587 KB  -0%
 memPoolAlloc calls: 507567155336
 memPoolFree calls:  508713005568
 Especially those negative Numbers and percentages - but maybe that is
 just a reporting bug, which - if so - is hindering debugging the real
 thing

64-bit mallinfo() bugs produce corrupt output. The only uncorrupted 
values above are the memPool accounted KB size and probably the calls 
counts.


 
 Thanks,
   Martin
 
 P.s: it would be helpful if the output of mgr:mem would be formatted
 in a way that would allow it to get easily pasted into Excel or
 similar - no spaces in pool names would be sufficient...
 

The report is a TSV spreadsheet. If you save it directly to a file then 
import into Excel selecting Tab separator instead of the default comma 
(CSV) it loads as a nice spreadsheet.

Amos




Re: [squid-users] Re: split the connexion using Squid

2014-07-08 Thread Antony Stone
On Tuesday 08 July 2014 at 12:18:09, Yemen SAYOUR wrote:

 I have a modem DSL with connexion up to 6 MB/s, 30 Pcs are connected
 at the same time to the network via cables and wifi. The problem is
 the users cannot use equally the connexion.
 
 My question : how can i limit the connexion using squid between Pcs,
 Although there are Pcs connected via wifi (each pc has a portion of
 the bandwidth)?
 
 I have a Debian Wheezy as OS for servers and Ubuntu 12.04 for clients.

You're probably better off looking at http://www.lartc.org/ for an answer to 
this.

Squid will just use whatever routing the underlying operating system provides.


Regards,


Antony.

-- 
The Royal Society for the Prevention of Cruelty to Animals was formed in 1824.
The National Society for the Prevention of Cruelty to Children was not formed 
until 1884.
That says something about the British.

 Please reply to the list;
   please don't CC me.


Re: [squid-users] TPROXY Squid Error.

2014-07-08 Thread Info OoDoO
Thanks Hassan,
Yes I have the following settings done.

Please see the details in the pastebin

http://pastebin.com/YzKDSV7J -- Find Results.

http://pastebin.com/XhZYiDxm --sysctl.conf

Thanks,
Ganesh J


On Tue, Jul 8, 2014 at 2:29 PM, Nyamul Hassan nya...@gmail.com wrote:
 tcpdump shows traffic flowing both ways, which is good.  We also need
 to have the following settings:

 #  sysctl.conf
 net.ipv4.ip_forward = 1
 net.ipv4.conf.default.rp_filter = 0
 net.ipv4.conf.all.rp_filter = 0
 net.ipv4.conf.eth0.rp_filter = 0
 net.ipv4.conf.eth1.rp_filter = 0

 The last two lines are for my specific system where I have two NICs.
 Feel free to modify on your own.  After changing the file running
 sysctl -p usually works.  To check if it did, please run the
 following commands:

 find /proc/sys/net/ipv4/ -iname rp_filter
 find /proc/sys/net/ipv4/ -iname rp_filter -exec cat {} +

 The first shows all the rp_filter in your system.
 The second shows if they are indeed set to 0 as needed.

 Please do a pastebin for both sysctl.conf and the outputs of the find 
 commands.

 Regards
 HASSAN


 On Tue, Jul 8, 2014 at 2:34 PM, Info OoDoO i...@oodoo.co.in wrote:
 Thanks Hassan,

 I have covered all the steps except the WCCP Configuration, Coz i dont
 use WCCP Router. I tried discovering for Routing loop and was unable
 to find any, Could you please help me How to Find a Routing loop.

 Here is my Squid Conf and my TCPdump sample.

 http://pastebin.com/aJskfywx -- TCPdump
 http://pastebin.com/b9u24rEC -- Squid Conf

 Thanks,
 Ganesh J


 On Tue, Jul 8, 2014 at 2:55 AM, Nyamul Hassan nya...@gmail.com wrote:
 Did you check the possibility of a routing loop as described in the
 troubleshooting section of the TProxy wiki page?  In fact, can you
 check that you have covered all the steps mentioned in that section?

 Regards
 HASSAN

 On Tue, Jul 8, 2014 at 2:37 AM, Info OoDoO i...@oodoo.co.in wrote:
 Thanks Hassan,

 Now the request are passing through Squid but Failing with 110
 Connection Timed Out Error.

 When I use transparent Mode its working fine. Any Idea..!!

 Thanks,
 Ganesh J
 Thanks,
 OodoO Fiber,
 +91 8940808080
 www.oodoo.co.in


 On Tue, Jul 8, 2014 at 1:16 AM, Nyamul Hassan nya...@gmail.com wrote:
 Hi Ganesh,

 In your basic data pastebin, seems like the ip rule and ip route
 rules are missing.

 Please see if running the following commands helps the situation:
 * echo 100 squidtproxy >> /etc/iproute2/rt_tables
 * ip rule add fwmark 1 lookup 100
 * ip route add local default dev lo table 100

 Regards
 HASSAN


 On Tue, Jul 8, 2014 at 1:15 AM, Nyamul Hassan nya...@gmail.com wrote:
 Can you also pastebin your squid.conf?

 Regards
 HASSAN

 On Tue, Jul 8, 2014 at 12:53 AM, collect oodoo coll...@oodoo.co.in 
 wrote:
 I have configured squid with the options in the below paste ..
 http://pastebin.com/jFhzd3qj
 I packets are being forwarded from the cache box to internet and i'm
 able to see the Client Public address instaed of squid Box Public
 Address..
 the Issue here is the requests are not being forwarded by or through 
 Squid..
 I'm unable to view any log for the request on access.log.
 If i use the same squid in transparent mode then I'm able to view the
 requests forwarded and logged on access.log but it shows Squid Box
 Public IP address.
 Can some body Help me on this..
 My basic Data of Machine is

 http://pastebin.com/TdnhnJtx

 Thanks,
 Ganesh J


[squid-users] Re: split the connexion using Squid

2014-07-08 Thread babajaga
For a first step, you might look into Squid's delay_pools, to
distribute and limit download speed, at least. They work only for proxied
traffic, of course, so torrents etc. are not throttled.
But they are easy to implement.





Re: [squid-users] Re: split the connexion using Squid

2014-07-08 Thread Yemen SAYOUR

Thank you,

But by limiting the download speed, can I assign each IP a 
portion of the bandwidth (for example 2%)?


Le 08/07/2014 14:29, babajaga a écrit :

For a very first beginning, you might look into the delay_pools of squid, to
distribute and limit download speed, at least. Works only for proxied
traffic, of course, so torrents etc. are not throttled.
But easy to implement.






--
Yemen SAYOUR
Responsable Technique Locale
Campus Numérique Francophone (CNF) Tripoli
Agence universitaire de la Francophonie
Tél : +(961) 6 205 280
Fax : +(961) 6 205 281
yemen.say...@auf.org



[squid-users] Re: split the connexion using Squid

2014-07-08 Thread babajaga
Not percentagewise, only in absolute values.

I had problems myself even vaguely understanding the documentation about
delay_pools; look into the documented squid.conf. So somebody else should
answer your detailed questions, if any.
However,
I use it to put an upper limit of 125 kB/s (delay_parameters values are bytes
per second) on the download speed of every user, with this simple config (squid 2.7):
.
delay_pools 1 # just one pool
delay_class 1 2 # class 2
delay_access 1 allow all # everybody will be throttled; you might set up
                         # another pool allowing higher bandwidth
delay_parameters 1 -1/-1 125000/125000 # 125000 bytes/s (~1 Mbit/s); no bursts. You might allow
#delay_parameters 1 -1/-1 125000/250000 # ... a 250000-byte bucket as burst, for the initial page load

which should be adequate for interactive browsing. As you have a 6 Mbit WAN,
this should also leave quite some spare bandwidth for non-proxied traffic,
as not all of your 30 users will be hitting the Enter key
simultaneously to load another page.






Re: [squid-users] Re: split the connexion using Squid

2014-07-08 Thread Nyamul Hassan
Yemen, what router do you use for going to the internet?  Regular
linux box?  Or something else?

Regards
HASSAN


On Tue, Jul 8, 2014 at 8:18 PM, babajaga augustus_me...@yahoo.de wrote:
 Not percentagewise, only in absolute values.

 I had problems myself to vaguely understand at least the doc about
 delay_pools, look into the documented squid.conf. So somebody else should
 answer your detailed questions, if any.
 However,
 I use it to put an upper limit of 125kbit/s  download speed to every user
 having this simple config (squid2.7):
 .
 delay_pools 1 #Just one pool
 delay_class 1 2 #class 2
 delay_access 1 allow all #everybody will be throttled; you might set up
 another pool allowing higher badwidth
 delay_parameters 1 -1/-1 125000/125000 #125kbit/s; no bursts. You might
 allow
 #delay_parameters 1 -1/-1 125000/25 #... 250kbit/s burst rate, for
 initial page load

 which should be adequate for interactive browsing. As you have 6MBit WAN,
 this should also leave quite some spare bandwidth for non-proxied traffic,
 as not all of your 30 users will be hitting the enter button
 simultaneously to load another page.






[squid-users] special configuration of squid for connections with citrix clients?

2014-07-08 Thread Andreas . Reschke
Hello there,

is there a special configuration of squid needed to allow connections for a 
Citrix ICA client to go through the proxy? We're not able to connect the 
Citrix ICA client (Web) through our squid proxy to a Citrix server outside 
on the internet. With the Microsoft ISA proxy it works.

Our squid.conf:
bgstproxyls01:~ # cat /etc/squid/squid.conf
#
# Recommended minimum configuration:
#

acl snmppublic snmp_community squid
snmp_port 3401
snmp_incoming_address 10.143.153.27
snmp_outgoing_address 10.143.153.27
snmp_access allow all
client_db off
half_closed_clients off
via off
cache_mem 4096 MB
ipcache_size 2028
fqdncache_size 2048

hosts_file /etc/hosts

memory_pools off
maximum_object_size 50 MB
quick_abort_min 0 KB
quick_abort_max 0 KB
log_icp_queries off
buffered_logs on

dns_nameservers 10.20.94.32
# acl manager proto cache_object
# acl localhost src 127.0.0.1 # ::1
# acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 # ::1

# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged) machines

acl SSL_ports port 443 
acl Safe_ports port 80  # http  
acl Safe_ports port 21  # ftp  
acl Safe_ports port 443 # https  
acl Safe_ports port 70  # gopher   
acl Safe_ports port 210 # wais  
acl Safe_ports port 1025-65535  # unregistered ports  
acl Safe_ports port 280 # http-mgmt   
acl Safe_ports port 488 # gss-http   
acl Safe_ports port 591 # filemaker   
acl Safe_ports port 777 # multiling http 
acl CONNECT method CONNECT 
# new
acl SSL method CONNECT 
acl CONNECT method CONNECT 


# allowed sites without internet permission
acl open-sites dstdomain /etc/squid/open-sites.txt
# allowed sites without internet permission

# forbidden sites
acl denied-sites url_regex /etc/squid/denied-sites.txt
acl selling-sites url_regex /etc/squid/selling-sites.txt

acl social-sites url_regex /etc/squid/social-sites.txt
# forbidden sites
acl allowedurls dstdomain /etc/squid/bypass.txt

external_acl_type LDAPLookup children-startup=10 children-idle=30 
children-max=80 ttl=600 negative_ttl=30 %LOGIN 
/usr/sbin/ext_ldap_group_acl -d -b dc=behrgroup,dc=net -D 
CN=BGST-S-SQUID,OU=Service Accounts,OU=bgst,OU=de,DC=behrgroup,DC=net -W 
/etc/squid/ppp -f 
(&(objectclass=user)(sAMAccountName=%v)(memberof:1.2.840.113556.1.4.1941:=CN=%a,OU=groups,OU=Proxy,OU=Global Groups,DC=behrgroup,dc=net)) -h 10.20.94.32


## DEBUGGING

#debug_options 28,9
#debug_options ALL,5 33,2 28,9 44,3

# local & manager
http_access allow manager localhost 
http_access deny manager

# only Safe_ports & SSL_ports from here
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports


deny_info http://bgstproxyls01/denied.html denied-sites


# Squid normally listens to port 3128
http_port 3128


# Leave coredumps in the first cache dir
coredump_dir /var/cache/squid

# Add any of your own refresh_pattern entries above these.
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .               0       20%     4320


### pure ntlm authentication
auth_param ntlm program /usr/bin/ntlm_auth 
--helper-protocol=squid-2.5-ntlmssp --domain=BEHRGROUP.NET
auth_param ntlm children 128
auth_param ntlm keep_alive off


# time-based control for India
acl indien proxy_auth external LDAPLookup GGPY-LO-Web-Time-Limited
acl DAY time 05:30-13:30
# time-based control for India
acl chkglwebhttp external LDAPLookup GGPY-LO-Web-Http
acl sellingUser external LDAPLookup GGPY-LO-Web-Allowed-Selling
acl socialUser external LDAPLookup GGPY-LO-Web-Allowed-Social
acl allforbUser external LDAPLookup GGPY-LO-Web-Allowed-All
acl ftpputUser external LDAPLookup GGPY-LO-Web-Ftp-Put
acl loggingUser external LDAPLookup GGPY-LO-Web-Log-User
acl auth proxy_auth REQUIRED
# allow certain IP addresses
acl permitt_ips src 10.143.10.247/32
acl FTP proto FTP
acl PUT method PUT

# whitelists
http_access allow open-sites all
http_access allow localhost
http_access allow permitt_ips !denied-sites !social-sites
http_access allow indien DAY
http_access deny indien
http_access deny !chkglwebhttp
http_access allow selling-sites sellingUser
http_access allow social-sites socialUser

# throw out denied sites, unless the user is also in allforbUser
http_access allow denied-sites allforbUser
http_access deny denied-sites all # 

Re: [squid-users] special configuration of squid for connections with citrix clients?

2014-07-08 Thread Stephen Borrill
On 08/07/2014 16:13, andreas.resc...@mahle.com wrote:
 Hello there,
 
 is there a special configuration of squid to allow connections for a 
 Citrix ICA-Client to go through to proxy? We're not able to connect the 
 Citrix ICA-Client (Web) through our squid proxy to a Citrix server outside 
 in the internet. With the Microsoft ISA-Proxy it does.
[snip]

What are you connecting to at the other end? Access Gateway, NetScaler,
Secure Gateway or just Web Interface?

The first 3 will tend to just tunnel all ICA and CGP traffic over port
443. The latter could use any ports as defined within its secure access
section.
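
If it does turn out to be a plain Web Interface using the native ports rather 
than tunnelling over 443, those ports would also need allowing in squid.conf 
(a speculative sketch only; 1494 is ICA, 2598 is CGP/session reliability):

  acl Safe_ports port 1494 2598   # Citrix ICA / CGP
  acl SSL_ports port 1494 2598    # only if the client wraps them in CONNECT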

-- 
Stephen


Re: [squid-users] TPROXY Squid Error.

2014-07-08 Thread Nyamul Hassan
Ok.  Good so far.  I saw you opened another email about this.  Please
keep related discussions in one single thread.  We had similar TProxy
issues around 7-8 days ago.  From your emails, it seems you are
running CentOS 6.5, just like we are.  The difference is that you are
using Squid 3.1 which is available in CentOS yum.  We installed the
same on our CentOS, and confirmed that Squid 3.1 is working with
TProxy.  So, I think this is a routing / iptables issue.
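
Before digging further, a few quick checks against the routing pieces set up 
earlier in this thread (table 100 and fwmark 1 as in the commands already 
posted):

  ip rule show                  # expect a line like: from all fwmark 0x1 lookup 100
  ip route show table 100       # expect: local default dev lo scope host
  iptables -t mangle -L -n -v   # the DIVERT/TPROXY counters should rise while browsing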

In that email, you mentioned that Squid is receiving the packets?  How
are you determining this?

Also, can you enable:
debug_options ALL,1 89,9 17,3
in your squid.conf?  This will print a bunch of debug messages in
cache.log when you try to browse through proxy.

Also, before you start browsing, run this command:
tcpdump -n -nn -e -i any dst port 80
That should allow you to see some packet header data.

Now, try to browse from the client, and pastebin the output of both
cache.log & tcpdump.

Regards
HASSAN


On Tue, Jul 8, 2014 at 4:54 PM, Info OoDoO i...@oodoo.co.in wrote:
 Thanks Hassan,
 Yes I have the following settings done.

 Please see the details in the pastebin

 http://pastebin.com/YzKDSV7J -- Find Results.

 http://pastebin.com/XhZYiDxm --sysctl.conf

 Thanks,
 Ganesh J


 On Tue, Jul 8, 2014 at 2:29 PM, Nyamul Hassan nya...@gmail.com wrote:
 tcpdump shows traffic flowing both ways, which is good.  We also need
 to have the following settings:

 #  sysctl.conf
 net.ipv4.ip_forward = 1
 net.ipv4.conf.default.rp_filter = 0
 net.ipv4.conf.all.rp_filter = 0
 net.ipv4.conf.eth0.rp_filter = 0
 net.ipv4.conf.eth1.rp_filter = 0

 The last two lines are for my specific system where I have two NICs.
 Feel free to modify on your own.  After changing the file running
 sysctl -p usually works.  To check if it did, please run the
 following commands:

 find /proc/sys/net/ipv4/ -iname rp_filter
 find /proc/sys/net/ipv4/ -iname rp_filter -exec cat {} +

 The first shows all the rp_filter in your system.
 The second shows if they are indeed set to 0 as needed.

 Please do a pastebin for both sysctl.conf and the outputs of the find 
 commands.

 Regards
 HASSAN


 On Tue, Jul 8, 2014 at 2:34 PM, Info OoDoO i...@oodoo.co.in wrote:
 Thanks Hassan,

 I have covered all the steps except the WCCP Configuration, Coz i dont
 use WCCP Router. I tried discovering for Routing loop and was unable
 to find any, Could you please help me How to Find a Routing loop.

 Here is my Squid Conf and my TCPdump sample.

 http://pastebin.com/aJskfywx -- TCPdump
 http://pastebin.com/b9u24rEC -- Squid Conf

 Thanks,
 Ganesh J


 On Tue, Jul 8, 2014 at 2:55 AM, Nyamul Hassan nya...@gmail.com wrote:
 Did you check the possibility of a routing loop as described in the
 troubleshooting section of the TProxy wiki page?  In fact, can you
 check that you have covered all the steps mentioned in that section?

 Regards
 HASSAN

 On Tue, Jul 8, 2014 at 2:37 AM, Info OoDoO i...@oodoo.co.in wrote:
 Thanks Hassan,

 Now the request are passing through Squid but Failing with 110
 Connection Timed Out Error.

 When I use transparent Mode its working fine. Any Idea..!!

 Thanks,
 Ganesh J
 Thanks,
 OodoO Fiber,
 +91 8940808080
 www.oodoo.co.in


 On Tue, Jul 8, 2014 at 1:16 AM, Nyamul Hassan nya...@gmail.com wrote:
 Hi Ganesh,

 In your basic data pastebin, seems like the ip rule and ip route
 rules are missing.

 Please see if running the following commands helps the situation:
 * echo 100 squidtproxy >> /etc/iproute2/rt_tables
 * ip rule add fwmark 1 lookup 100
 * ip route add local default dev lo table 100

 Regards
 HASSAN


 On Tue, Jul 8, 2014 at 1:15 AM, Nyamul Hassan nya...@gmail.com wrote:
 Can you also pastebin your squid.conf?

 Regards
 HASSAN

 On Tue, Jul 8, 2014 at 12:53 AM, collect oodoo coll...@oodoo.co.in 
 wrote:
 I have configured squid with the options in the below paste ..
 http://pastebin.com/jFhzd3qj
 I packets are being forwarded from the cache box to internet and i'm
 able to see the Client Public address instaed of squid Box Public
 Address..
 the Issue here is the requests are not being forwarded by or through 
 Squid..
 I'm unable to view any log for the request on access.log.
 If i use the same squid in transparent mode then I'm able to view the
 requests forwarded and logged on access.log but it shows Squid Box
 Public IP address.
 Can some body Help me on this..
 My basic Data of Machine is

 http://pastebin.com/TdnhnJtx

 Thanks,
 Ganesh J


Re: [squid-users] SQUID 3.10 TProxy Issues

2014-07-08 Thread Eliezer Croitoru

Hey There,

Can you run the next script?
http://www1.ngtech.co.il/squid/basic_data.sh
(use curl to download the file; with the default wget you might get the wrong 
line endings, matching the Windows ones)


Eliezer

On 07/07/2014 08:47 PM, Info OoDoO wrote:

Hi,

I configured Squid in Tproxy mode and Mangled the request, Now i am
able to see my client public address, but i'm unable to see any
request on squid access log.
seems the request is not forwarded to squid or some thing spoofy..

My iptables rules

-A PREROUTING -p tcp -m socket -j DIVERT
-A PREROUTING -p tcp -m tcp --dport 80 -j TPROXY --on-port 4xx5
--on-ip 10.x.x.x --tproxy-mark 0x1/0x1
-A DIVERT -j MARK --set-xmark 0x1/0xffffffff
-A DIVERT -j ACCEPT




Re: [squid-users] TPROXY Squid Error.

2014-07-08 Thread Info OoDoO
Sorry for the other mail chain. it was opened accidentally yesterday.

Thanks for the response.

please find the required data below.

http://pastebin.com/Abs3QmMe -- cache.log

http://pastebin.com/eS94BHHu -- TCP Dump.

I was able to see the site logged in access.log with HTTP code 504,
Gateway Timeout, so i thought the packets are being sent to squid.

For your kind attention, i have not installed Squid 3.1.10 from YUM. I
have Compiled and installed from the source with the following
options.

http://pastebin.com/jFhzd3qj


Thanks,
Ganesh J



On Tue, Jul 8, 2014 at 10:44 PM, Nyamul Hassan nya...@gmail.com wrote:
 Ok.  Good so far.  I saw you opened another email about this.  Please
 keep related discussions in one single thread.  We had similar TProxy
 issues around 7-8 days ago.  From your emails, it seems you are
 running CentOS 6.5, just like we are.  The difference is that you are
 using Squid 3.1 which is available in CentOS yum.  We installed the
 same on our CentOS, and confirmed that Squid 3.1 is working with
 TProxy.  So, I think this is a routing / iptables issue.

 In that email, you mentioned that Squid is receiving the packets?  How
 are you determining this?

 Also, can you enable:
 debug_options ALL,1 89,9 17,3
 in your squid.conf?  This will print a bunch of debug messages in
 cache.log when you try to browse through proxy.

 Also, before you start browsing, run this command:
 tcpdump -n -nn -e -i any dst port 80
 That should allow you to see some packet header data.

 Now, try to browse from client, and pastebin the output of both
 cache.log  tcpdump.

 Regards
 HASSAN


 On Tue, Jul 8, 2014 at 4:54 PM, Info OoDoO i...@oodoo.co.in wrote:
 Thanks Hassan,
 Yes I have the following settings done.

 Please see the details in the pastebin

 http://pastebin.com/YzKDSV7J -- Find Results.

 http://pastebin.com/XhZYiDxm --sysctl.conf

 Thanks,
 Ganesh J


 On Tue, Jul 8, 2014 at 2:29 PM, Nyamul Hassan nya...@gmail.com wrote:
 tcpdump shows traffic flowing both ways, which is good.  We also need
 to have the following settings:

 #  sysctl.conf
 net.ipv4.ip_forward = 1
 net.ipv4.conf.default.rp_filter = 0
 net.ipv4.conf.all.rp_filter = 0
 net.ipv4.conf.eth0.rp_filter = 0
 net.ipv4.conf.eth1.rp_filter = 0

 The last two lines are for my specific system where I have two NICs.
 Feel free to modify on your own.  After changing the file running
 sysctl -p usually works.  To check if it did, please run the
 following commands:

 find /proc/sys/net/ipv4/ -iname rp_filter
 find /proc/sys/net/ipv4/ -iname rp_filter -exec cat {} +

 The first shows all the rp_filter in your system.
 The second shows if they are indeed set to 0 as needed.

 Please do a pastebin for both sysctl.conf and the outputs of the find 
 commands.

 Regards
 HASSAN


 On Tue, Jul 8, 2014 at 2:34 PM, Info OoDoO i...@oodoo.co.in wrote:
 Thanks Hassan,

 I have covered all the steps except the WCCP Configuration, Coz i dont
 use WCCP Router. I tried discovering for Routing loop and was unable
 to find any, Could you please help me How to Find a Routing loop.

 Here is my Squid Conf and my TCPdump sample.

 http://pastebin.com/aJskfywx -- TCPdump
 http://pastebin.com/b9u24rEC -- Squid Conf

 Thanks,
 Ganesh J


 On Tue, Jul 8, 2014 at 2:55 AM, Nyamul Hassan nya...@gmail.com wrote:
 Did you check the possibility of a routing loop as described in the
 troubleshooting section of the TProxy wiki page?  In fact, can you
 check that you have covered all the steps mentioned in that section?

 Regards
 HASSAN

 On Tue, Jul 8, 2014 at 2:37 AM, Info OoDoO i...@oodoo.co.in wrote:
 Thanks Hassan,

 Now the request are passing through Squid but Failing with 110
 Connection Timed Out Error.

 When I use transparent Mode its working fine. Any Idea..!!

 Thanks,
 Ganesh J
 Thanks,
 OodoO Fiber,
 +91 8940808080
 www.oodoo.co.in


 On Tue, Jul 8, 2014 at 1:16 AM, Nyamul Hassan nya...@gmail.com wrote:
 Hi Ganesh,

 In your basic data pastebin, seems like the ip rule and ip route
 rules are missing.

 Please see if running the following commands helps the situation:
 * echo 100 squidtproxy >> /etc/iproute2/rt_tables
 * ip rule add fwmark 1 lookup 100
 * ip route add local default dev lo table 100

 Regards
 HASSAN


 On Tue, Jul 8, 2014 at 1:15 AM, Nyamul Hassan nya...@gmail.com wrote:
 Can you also pastebin your squid.conf?

 Regards
 HASSAN

 On Tue, Jul 8, 2014 at 12:53 AM, collect oodoo coll...@oodoo.co.in 
 wrote:
 I have configured squid with the options in the below paste ..
 http://pastebin.com/jFhzd3qj
 I packets are being forwarded from the cache box to internet and i'm
 able to see the Client Public address instaed of squid Box Public
 Address..
 the Issue here is the requests are not being forwarded by or through 
 Squid..
 I'm unable to view any log for the request on access.log.
 If i use the same squid in transparent mode then I'm able to view the
 requests forwarded and logged on access.log but it shows Squid Box
 Public IP address.
 Can some 

Re: [squid-users] TPROXY Squid Error.

2014-07-08 Thread Info OoDoO
+Eliezer

Thanks,
Ganesh J


On Tue, Jul 8, 2014 at 11:46 PM, Info OoDoO i...@oodoo.co.in wrote:
 Sorry for the other mail chain. it was opened accidentally yesterday.

 Thanks for the response.

 please find the required data below.

 http://pastebin.com/Abs3QmMe -- cache.log

 http://pastebin.com/eS94BHHu -- TCP Dump.

 I was able to see the site logged in access.log with http code 504,
 Gateway Timed Out. so i thought the packets are sent to squid.

 For your kind attention, i have not installed Squid 3.1.10 from YUM. I
 have Compiled and installed from the source with the following
 options.

 http://pastebin.com/jFhzd3qj


 Thanks,
 Ganesh J



 On Tue, Jul 8, 2014 at 10:44 PM, Nyamul Hassan nya...@gmail.com wrote:
 Ok.  Good so far.  I saw you opened another email about this.  Please
 keep related discussions in one single thread.  We had similar TProxy
 issues around 7-8 days ago.  From your emails, it seems you are
 running CentOS 6.5, just like we are.  The difference is that you are
 using Squid 3.1 which is available in CentOS yum.  We installed the
 same on our CentOS, and confirmed that Squid 3.1 is working with
 TProxy.  So, I think this is a routing / iptables issue.

 In that email, you mentioned that Squid is receiving the packets?  How
 are you determining this?

 Also, can you enable:
 debug_options ALL,1 89,9 17,3
 in your squid.conf?  This will print a bunch of debug messages in
 cache.log when you try to browse through proxy.

 Also, before you start browsing, run this command:
 tcpdump -n -nn -e -i any dst port 80
 That should allow you to see some packet header data.

 Now, try to browse from client, and pastebin the output of both
 cache.log  tcpdump.

 Regards
 HASSAN


 On Tue, Jul 8, 2014 at 4:54 PM, Info OoDoO i...@oodoo.co.in wrote:
 Thanks Hassan,
 Yes I have the following settings done.

 Please see the details in the pastebin

 http://pastebin.com/YzKDSV7J -- Find Results.

 http://pastebin.com/XhZYiDxm --sysctl.conf

 Thanks,
 Ganesh J


 On Tue, Jul 8, 2014 at 2:29 PM, Nyamul Hassan nya...@gmail.com wrote:
 tcpdump shows traffic flowing both ways, which is good.  We also need
 to have the following settings:

 #  sysctl.conf
 net.ipv4.ip_forward = 1
 net.ipv4.conf.default.rp_filter = 0
 net.ipv4.conf.all.rp_filter = 0
 net.ipv4.conf.eth0.rp_filter = 0
 net.ipv4.conf.eth1.rp_filter = 0

 The last two lines are for my specific system where I have two NICs.
 Feel free to modify on your own.  After changing the file running
 sysctl -p usually works.  To check if it did, please run the
 following commands:

 find /proc/sys/net/ipv4/ -iname rp_filter
 find /proc/sys/net/ipv4/ -iname rp_filter -exec cat {} +

 The first shows all the rp_filter in your system.
 The second shows if they are indeed set to 0 as needed.

 Please do a pastebin for both sysctl.conf and the outputs of the find 
 commands.

 Regards
 HASSAN


 On Tue, Jul 8, 2014 at 2:34 PM, Info OoDoO i...@oodoo.co.in wrote:
 Thanks Hassan,

 I have covered all the steps except the WCCP Configuration, Coz i dont
 use WCCP Router. I tried discovering for Routing loop and was unable
 to find any, Could you please help me How to Find a Routing loop.

 Here is my Squid Conf and my TCPdump sample.

 http://pastebin.com/aJskfywx -- TCPdump
 http://pastebin.com/b9u24rEC -- Squid Conf

 Thanks,
 Ganesh J


 On Tue, Jul 8, 2014 at 2:55 AM, Nyamul Hassan nya...@gmail.com wrote:
 Did you check the possibility of a routing loop as described in the
 troubleshooting section of the TProxy wiki page?  In fact, can you
 check that you have covered all the steps mentioned in that section?

 Regards
 HASSAN

 On Tue, Jul 8, 2014 at 2:37 AM, Info OoDoO i...@oodoo.co.in wrote:
 Thanks Hassan,

 Now the request are passing through Squid but Failing with 110
 Connection Timed Out Error.

 When I use transparent Mode its working fine. Any Idea..!!

 Thanks,
 Ganesh J
 Thanks,
 OodoO Fiber,
 +91 8940808080
 www.oodoo.co.in


 On Tue, Jul 8, 2014 at 1:16 AM, Nyamul Hassan nya...@gmail.com wrote:
 Hi Ganesh,

 In your basic data pastebin, seems like the ip rule and ip route
 rules are missing.

 Please see if running the following commands helps the situation:
 * echo 100 squidtproxy >> /etc/iproute2/rt_tables
 * ip rule add fwmark 1 lookup 100
 * ip route add local default dev lo table 100

 Regards
 HASSAN


 On Tue, Jul 8, 2014 at 1:15 AM, Nyamul Hassan nya...@gmail.com wrote:
 Can you also pastebin your squid.conf?

 Regards
 HASSAN

 On Tue, Jul 8, 2014 at 12:53 AM, collect oodoo coll...@oodoo.co.in 
 wrote:
 I have configured squid with the options in the below paste ..
 http://pastebin.com/jFhzd3qj
 I packets are being forwarded from the cache box to internet and i'm
 able to see the Client Public address instaed of squid Box Public
 Address..
 the Issue here is the requests are not being forwarded by or through 
 Squid..
 I'm unable to view any log for the request on access.log.
 If i use the same squid in transparent mode then I'm able to 

Re: [squid-users] TPROXY Squid Error.

2014-07-08 Thread Nyamul Hassan
 For your kind attention, i have not installed Squid 3.1.10 from YUM. I
 have Compiled and installed from the source with the following
 options.

 http://pastebin.com/jFhzd3qj


Oh!  If you did compile it, then can you check if you have
libcap-devel installed?

Regards
HASSAN


Re: [squid-users] TPROXY Squid Error.

2014-07-08 Thread Info OoDoO
Yes.. it is installed..

libcap-devel.x86_64  2.16-5.5.el6  @base

Thanks,
Ganesh J


On Tue, Jul 8, 2014 at 11:49 PM, Nyamul Hassan nya...@gmail.com wrote:
 For your kind attention, i have not installed Squid 3.1.10 from YUM. I
 have Compiled and installed from the source with the following
 options.

 http://pastebin.com/jFhzd3qj


 Oh!  If you did compile it, then can you check if you have
 libcap-devel installed?

 Regards
 HASSAN


Re: [squid-users] TPROXY Squid Error.

2014-07-08 Thread Info OoDoO
Sorry, I installed it recently and it was not there when i compiled
and configured squid from source.

Thanks,
Ganesh J


On Tue, Jul 8, 2014 at 11:52 PM, Info OoDoO i...@oodoo.co.in wrote:
 Yes.. it is installed..

 libcap-devel.x86_64  2.16-5.5.el6  @base

 Thanks,
 Ganesh J


 On Tue, Jul 8, 2014 at 11:49 PM, Nyamul Hassan nya...@gmail.com wrote:
 For your kind attention, i have not installed Squid 3.1.10 from YUM. I
 have Compiled and installed from the source with the following
 options.

 http://pastebin.com/jFhzd3qj


 Oh!  If you did compile it, then can you check if you have
 libcap-devel installed?

 Regards
 HASSAN


Re: [squid-users] TPROXY Squid Error.

2014-07-08 Thread Nyamul Hassan
We had the same problem just a few days ago.  Can you recompile and check?

Also, since you are compiling, then can you also try the latest stable
version 3.4.6?

Regards
HASSAN


On Wed, Jul 9, 2014 at 12:24 AM, Info OoDoO i...@oodoo.co.in wrote:
 Sorry, I installed it recently and it was not there when i compiled
 and configured squid from source.

 Thanks,
 Ganesh J


 On Tue, Jul 8, 2014 at 11:52 PM, Info OoDoO i...@oodoo.co.in wrote:
 Yes.. it is installed..

 libcap-devel.x86_64  2.16-5.5.el6  @base

 Thanks,
 Ganesh J


 On Tue, Jul 8, 2014 at 11:49 PM, Nyamul Hassan nya...@gmail.com wrote:
 For your kind attention, i have not installed Squid 3.1.10 from YUM. I
 have Compiled and installed from the source with the following
 options.

 http://pastebin.com/jFhzd3qj


 Oh!  If you did compile it, then can you check if you have
 libcap-devel installed?

 Regards
 HASSAN


Re: [squid-users] TPROXY Squid Error.

2014-07-08 Thread Info OoDoO
Configured Squid 3.4.6 again with all the options, still facing the same issue.

Thanks,
Ganesh J


On Tue, Jul 8, 2014 at 11:55 PM, Nyamul Hassan nya...@gmail.com wrote:
 We were in the same problem just a few days ago.  Can you recompile and check?

 Also, since you are compiling, then can you also try the latest stable
 version 3.4.6?

 Regards
 HASSAN


 On Wed, Jul 9, 2014 at 12:24 AM, Info OoDoO i...@oodoo.co.in wrote:
 Sorry, I installed it recently and it was not there when i compiled
 and configured squid from source.

 Thanks,
 Ganesh J


 On Tue, Jul 8, 2014 at 11:52 PM, Info OoDoO i...@oodoo.co.in wrote:
 Yes.. it is installed..

 libcap-devel.x86_64  2.16-5.5.el6  @base

 Thanks,
 Ganesh J


 On Tue, Jul 8, 2014 at 11:49 PM, Nyamul Hassan nya...@gmail.com wrote:
 For your kind attention, i have not installed Squid 3.1.10 from YUM. I
 have Compiled and installed from the source with the following
 options.

 http://pastebin.com/jFhzd3qj


 Oh!  If you did compile it, then can you check if you have
 libcap-devel installed?

 Regards
 HASSAN


Re: [squid-users] TPROXY Squid Error.

2014-07-08 Thread Eliezer Croitoru

What router are you using??

Eliezer

P.S. I will be at the squid IRC channel for about a couple of hours
http://webchat.freenode.net/?channels=squid

On 07/08/2014 10:19 PM, Info OoDoO wrote:

Configured Squid 3.4.6 again with all the options, still facing the same issue.

Thanks,
Ganesh J






Re: [squid-users] SQUID 3.10 TProxy Issues

2014-07-08 Thread Eliezer Croitoru

Note that I have changed the script to cover a couple of new aspects.
I will probably add Debian/Ubuntu (and maybe other distro) support for
the script later on.


Eliezer

On 07/08/2014 08:40 PM, Eliezer Croitoru wrote:

Hey There,

Can you run the following script?
http://www1.ngtech.co.il/squid/basic_data.sh
(use curl to download the file; with wget's defaults you might end up
with the wrong, Windows-style line endings)
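For example, something like this should do it, assuming curl and a
Bourne-compatible shell are available on the box:

curl -O http://www1.ngtech.co.il/squid/basic_data.sh
sh basic_data.sh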

Eliezer

On 07/07/2014 08:47 PM, Info OoDoO wrote:

Hi,

I configured Squid in TPROXY mode and mangled the requests. Now I am
able to see my client's public address, but I'm unable to see any
requests in the Squid access log.
It seems the requests are not being forwarded to Squid, or something
spoofy is going on.

My iptables rules

-A PREROUTING -p tcp -m socket -j DIVERT
-A PREROUTING -p tcp -m tcp --dport 80 -j TPROXY --on-port 4xx5
--on-ip 10.x.x.x --tproxy-mark 0x1/0x1
-A DIVERT -j MARK --set-xmark 0x1/0x
-A DIVERT -j ACCEPT
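For comparison, the TPROXY example on the Squid wiki pairs those mangle
rules with policy routing so the marked packets actually reach the local
stack; a minimal sketch, assuming Squid's tproxy port is 3129
(substitute your own port), run on the Squid box:

# mangle table: accept packets that already belong to a local socket,
# and redirect new port-80 flows to the local Squid tproxy port
iptables -t mangle -N DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY \
  --tproxy-mark 0x1/0x1 --on-port 3129

# policy routing: deliver fwmark-1 packets locally so Squid can accept them
ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100

The matching squid.conf side would then be an "http_port 3129 tproxy" line.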






Re: [squid-users] special configuration of squid for connections with citrix clients?

2014-07-08 Thread Eliezer Croitoru

Hey Andreas,

What do you see in the access.log when you try to access the website?
Also try to change the behavior of:
http://www.squid-cache.org/Doc/config/forwarded_for/
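For example, any of the documented values can be tried (one at a time)
in squid.conf:

forwarded_for on           # default: append the client IP to X-Forwarded-For
forwarded_for off          # append "unknown" instead of the client IP
forwarded_for delete       # strip the X-Forwarded-For header completely
forwarded_for transparent  # pass the header through untouched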

Eliezer

On 07/08/2014 06:13 PM, andreas.resc...@mahle.com wrote:

Hello there,

is there a special configuration of squid needed to allow connections from a
Citrix ICA client to go through the proxy? We're not able to connect the
Citrix ICA client (Web) through our squid proxy to a Citrix server out on
the internet. With the Microsoft ISA proxy it works.

Our squid.conf:
bgstproxyls01:~ # cat /etc/squid/squid.conf
#
# Recommended minimum configuration:
#

acl snmppublic snmp_community squid
snmp_port 3401
snmp_incoming_address 10.143.153.27
snmp_outgoing_address 10.143.153.27
snmp_access allow all
client_db off
half_closed_clients off
via off
cache_mem 4096 MB
ipcache_size 2028
fqdncache_size 2048

hosts_file /etc/hosts

memory_pools off
maximum_object_size 50 MB
quick_abort_min 0 KB
quick_abort_max 0 KB
log_icp_queries off
buffered_logs on

dns_nameservers 10.20.94.32
# acl manager proto cache_object
# acl localhost src 127.0.0.1 # ::1
# acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 # ::1

# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range

acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged) machines

acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
# new
acl SSL method CONNECT
acl CONNECT method CONNECT


# sites allowed without internet permission
acl open-sites dstdomain /etc/squid/open-sites.txt
# sites allowed without internet permission

# forbidden sites
acl denied-sites url_regex /etc/squid/denied-sites.txt
acl selling-sites url_regex /etc/squid/selling-sites.txt

acl social-sites url_regex /etc/squid/social-sites.txt
# forbidden sites
acl allowedurls dstdomain /etc/squid/bypass.txt

external_acl_type LDAPLookup children-startup=10 children-idle=30
children-max=80 ttl=600 negative_ttl=30 %LOGIN
/usr/sbin/ext_ldap_group_acl -d  -b dc=behrgroup,dc=net -D
CN=BGST-S-SQUID,OU=Service Accounts,OU=bgst,OU=de,DC=behrgroup,DC=net -W
/etc/squid/ppp -f
(&(objectclass=user)(sAMAccountName=%v)(memberof:1.2.840.113556.1.4.1941:=CN=%a,OU=groups,OU=Proxy,OU=Global
Groups,DC=behrgroup,dc=net)) -h 10.20.94.32


## DEBUGGING

#debug_options 28,9
#debug_options ALL,5 33,2 28,9 44,3

# local  manager
http_access allow manager localhost
http_access deny manager

# only safe & SSL ports from here on
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports


deny_info http://bgstproxyls01/denied.html denied-sites


# Squid normally listens to port 3128
http_port 3128


# Leave coredumps in the first cache dir
coredump_dir /var/cache/squid

# Add any of your own refresh_pattern entries above these.
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .               0       20%     4320


### pure ntlm authentication
auth_param ntlm program /usr/bin/ntlm_auth
--helper-protocol=squid-2.5-ntlmssp --domain=BEHRGROUP.NET
auth_param ntlm children 128
auth_param ntlm keep_alive off


# time control for India
acl indien proxy_auth external LDAPLookup GGPY-LO-Web-Time-Limited
acl DAY time 05:30-13:30
# time control for India
acl chkglwebhttp external LDAPLookup GGPY-LO-Web-Http
acl sellingUser external LDAPLookup GGPY-LO-Web-Allowed-Selling
acl socialUser external LDAPLookup GGPY-LO-Web-Allowed-Social
acl allforbUser external LDAPLookup GGPY-LO-Web-Allowed-All
acl ftpputUser external LDAPLookup GGPY-LO-Web-Ftp-Put
acl loggingUser external LDAPLookup GGPY-LO-Web-Log-User
acl auth proxy_auth REQUIRED
# allow specific IP addresses
acl permitt_ips src 10.143.10.247/32
acl FTP proto FTP
acl PUT method PUT

# whitelists
http_access allow open-sites all
http_access allow localhost
http_access allow permitt_ips !denied-sites !social-sites
http_access allow indien DAY
http_access deny indien
http_access deny !chkglwebhttp
http_access allow selling-sites sellingUser
http_access allow social-sites socialUser

# drop denied-sites unless the user is also in allforbUser
http_access allow denied-sites allforbUser

[squid-users] Re: access denied

2014-07-08 Thread winetbox
I see, thanks for the answer.

BTW, I am also using nabble.com to view the thread, which makes this
look kind of like a forum.
We can set this back to [solved] again.






Re: [squid-users] special configuration of squid for connections with citrix clients?

2014-07-08 Thread Amos Jeffries

On 2014-07-09 03:13, andreas.resc...@mahle.com wrote:

Hello there,

is there a special configuration of squid needed to allow connections from a
Citrix ICA client to go through the proxy? We're not able to connect the
Citrix ICA client (Web) through our squid proxy to a Citrix server out on
the internet. With the Microsoft ISA proxy it works.


The key things to know are whether the client software supports HTTP
proxies, whether it transfers either over plain HTTP requests (i.e. some
form of REST protocol) or via CONNECT tunnels, and what ports are
involved. Once you know those you can tune the Squid ACLs to check for
just about anything in the client traffic and permit/deny as you please.
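For example, if the ICA client turns out to use CONNECT tunnels on the
customary Citrix ports (1494 for ICA, 2598 for CGP session reliability;
worth verifying against your own deployment), a sketch of the squid.conf
additions would be to extend the port ACLs so the usual
"deny CONNECT !SSL_ports" rule in your config stops rejecting it:

# hypothetical additions; 1494/2598 are the customary Citrix ICA/CGP ports
acl Safe_ports port 1494 2598
acl SSL_ports  port 1494 2598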


Amos



Re: [squid-users] Squid exiting on its own at sys startup

2014-07-08 Thread Eliezer Croitoru

Hey Mike,

I was wondering if you have these SELinux rules in binary or another
format (src) that I could try to use and package in an RPM?


Thanks,
Eliezer

On 06/27/2014 12:08 AM, Mike wrote:

After some deeper digging, it seems selinux was only temporarily
disabled (via "echo 0 > /selinux/enforce"), not disabled in the primary
config file.
using selinux (which we definitely need for server security). I am going
to add it here for others that may run into the same problem (in RedHat,
CentOS and Scientific Linux) and how to fix it. This allows us to use
ssl-bump with selinux. I had one where pinger was also having an issue
so I am including it here.
Scientific Linux 6.5 (would also work for RedHat and CentOS 6)
squid 3.4.5 and 3.4.6

Edit /etc/selinux/config and change to “permissive”. Then cycle the
audit logs:
cd /var/log/audit/
mv audit.log audit.log.0
touch audit.log

Then reboot the system and let selinux come back up and catch the items
in its log (usually ssl_crtd and pinger) located at
/var/log/audit/audit.log. Many times squid will try to start but end up
with “the ssl_crtd helpers are crashing too quickly” which will shut the
squid service down.

* Install the needed tool for selinux: yum install policycoreutils-python
  (which will also install a few other needed dependencies).

ssl_crtd: Start in the /tmp/ folder since we will not need these files for long.

* grep ssl_crtd /var/log/audit/audit.log | audit2allow -m ssl_crtdlocal > ssl_crtdlocal.te
  (outputs the suggested settings into the file ssl_crtdlocal.te, which we
  will review below with "cat")

* cat ssl_crtdlocal.te   # to review the created file and show what will be done

* grep ssl_crtd /var/log/audit/audit.log | audit2allow -M ssl_crtdlocal
  (Note the capital M; this builds the module file ready for selinux to
  import, and the next command below actually enables it.)

* semodule -i ssl_crtdlocal.pp

Now for pinger (if needed):

* grep pinger /var/log/audit/audit.log | audit2allow -m pingerlocal > pingerlocal.te

* cat pingerlocal.te   # to review the created file and show what will be done

* grep pinger /var/log/audit/audit.log | audit2allow -M pingerlocal

* semodule -i pingerlocal.pp

After those are entered, go back in and edit /etc/selinux/config and
change to “enforcing”. Reboot the system one more time and watch the
logs for any other entries relating to squid like “ssl_crtd” or “pinger”
(look at the comm=ssl_crtd aspect) to see if any other squid based
items need an allowance:

* type=AVC msg=audit(1403808338.272:24): avc: denied { read } for
  pid=1457 comm=ssl_crtd name=index.txt dev=dm-0 ino=5376378
  scontext=system_u:system_r:squid_t:s0
  tcontext=unconfined_u:object_r:var_lib_t:s0 tclass=file

  -OR-

* type=SYSCALL msg=audit(1403808338.272:24): arch=c03e syscall=2
  success=yes exit=3 a0=cfe2e8 a1=0 a2=1b6 a3=0 items=0 ppid=1454
  pid=1457 auid=4294967295 uid=500 gid=500 euid=500 suid=500 fsuid=500
  egid=500 sgid=500 fsgid=500 tty=(none) ses=4294967295
  comm=ssl_crtd exe=/usr/lib64/squid/ssl_crtd
  subj=system_u:system_r:squid_t:s0 key=(null)
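If a new denial like those does show up for a Squid component, the same
audit2allow cycle from above can simply be repeated for it; for example
(the module name squidlocal is just an illustration):

grep squid /var/log/audit/audit.log | audit2allow -m squidlocal > squidlocal.te
cat squidlocal.te                                    # review before enabling
grep squid /var/log/audit/audit.log | audit2allow -M squidlocal
semodule -i squidlocal.pp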



Thanks all
Mike




[squid-users] Squid v3.3.8 SSL Bumping Issues

2014-07-08 Thread David Marcos
Hi,

I have been attempting to configure SSL bumping with Squid v3.3.8.  I
have a well configured Squid proxy for HTTP and HTTP Intercept
proxying.  I am now trying to expand the configuration to bump SSL
connections.  I believe I have the basics of the configuration correct
for both direct HTTPS proxying as well as intercepted HTTPS, but am
having a few issues that I would appreciate some input on.
Specifically:

 a. HTTPS Page Rendering: Some HTTPS pages load fine.  However, I
have found that if I try to login to online banking or other secure
pages that either (1) the page does not render properly (I get flat,
unorganized text) or (2) the page simply does not load.  With respect
to the latter, some pages simply bring me right back to the login
page; there seems to be some kind of behind-the-scenes redirection
that is being rejected and preventing logging in.  What
recommendations might anyone have to tweak my configuration to address
these issues?

 b. HTTP Strict Transport Security (HSTS): Some pages flat-out
reject any SSL bumping due to HSTS.  I am using Chrome, which I'm sure
aggravates the issue.  Is there a way to configure Squid to get around
HSTS?  (Yes, I know this may be a dumb question given how HSTS works,
but would appreciate any insight.)
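For what it's worth, in 3.3 the bumping policy itself is controlled by
ssl_bump rules; if none are set further down in my config, a minimal
sketch (with .examplebank.com purely as a placeholder for sites to leave
un-bumped) would be:

acl no_bump dstdomain .examplebank.com   # placeholder list of sites to leave alone
ssl_bump none no_bump                    # do not bump these
ssl_bump server-first all                # bump everything else (3.3 syntax)

Note that on the intercepted https_port, 3.3 only has the destination IP
to match against at that point, so domain-based exceptions mainly help
the explicit-proxy (CONNECT) traffic.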

Fundamentally, my intent is to set up Squid for home use to block
advertising, malware, and in particular, perform content adaptation.
One of my specific goals is to modify search URL paths to restrict
explicit search returns (e.g. affixing safe=active to any Google
search path).  I have additionally configured ICAP with SquidClamav,
multiple ACLs for blocking of ads and malware, and SquidGuard for
additional domain and url blocking.  SquidGuard is also successfully
manipulating *unencrypted* Google, Yahoo, and Bing URL paths to insert
commands to suppress explicit search returns.  (I should note that
when I tested out SSL bumping, I disabled ICAP, Squidguard, and ACLs
for blocking of ads and malware; the issues described above
persisted.)

Below is my squid.conf file to help out.

Thanks in advance,

Dave

#BEGIN FILE#
hosts_file /etc/hosts
visible_hostname proxyserver
shutdown_lifetime 5 seconds
coredump_dir /tmp


dns_nameservers 192.168.1.1 208.67.222.222 208.67.220.220
half_closed_clients off
negative_ttl 0
negative_dns_ttl 2 minutes

http_port 127.0.0.1:3128

http_port 192.168.1.1:3128 ssl-bump generate-host-certificates=on
dynamic_cert_mem_cache_size=4MB cert=/etc/squid3/certs/cert.crt
key=/etc/squid3/certs/cert.key

http_port 192.168.1.1:3129 intercept

https_port 192.168.1.1:3130 intercept ssl-bump
generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
cert=/etc/squid3/certs/cert.crt key=/etc/squid3/certs/cert.key

sslcrtd_program /usr/lib/squid3/ssl_crtd -s /disk/dyn-certs/sslcrtd_db -M 4MB
sslcrtd_children 5

udp_incoming_address 192.168.1.1
pinger_enable off
forwarded_for delete
via off

memory_replacement_policy heap GDSF
cache_replacement_policy heap LFUDA
maximum_object_size_in_memory 1 MB
minimum_object_size 0 KB
maximum_object_size 64 MB
memory_pools off
cache_mem 256 MB
cache_dir aufs /disk/squid-cache 25000 32 512
cache_swap_low 95
cache_swap_high 97
ipcache_size 10240
fqdncache_size 2048
quick_abort_min 0 KB
quick_abort_max 0 KB
max_filedescriptors 4096
read_ahead_gap 512 KB

client_lifetime 6 hours
connect_timeout 10 seconds

log_icp_queries off
buffered_logs on
debug_options ALL,1
logformat squid %tg %6tr %A %Ss/%03Hs UA=%{User-Agent}h
XFF=%{X-Forwarded-For}h CKE=- %rm %ru %un %Sh/%A %mt BYTES=%st
access_log stdio:/var/log/squid/access.log squid
cache_log /var/log/squid/cache.log
cache_store_log none #/var/log/squid/store.log

icap_enable on
icap_send_client_ip on
icap_send_client_username on
icap_client_username_encode off
icap_client_username_header X-Authenticated-User
icap_preview_enable on
icap_preview_size 1024
icap_service sqclamav_req reqmod_precache bypass=1
icap://127.0.0.1:1344/squidclamav
adaptation_access sqclamav_req allow all
icap_service sqclamav_resp respmod_precache bypass=1
icap://127.0.0.1:1344/squidclamav
adaptation_access sqclamav_resp allow all

refresh_pattern -i \.(gif|png|jpg|jpeg|ico|bmp)$ 10080 90% 43200
override-expire ignore-no-store ignore-private
refresh_pattern -i \.(iso|avi|wav|mp3|m3u|mp4|mpeg|swf|flv|x-flv)$
43200 90% 259200 override-expire ignore-no-store ignore-no-cache
ignore-private
refresh_pattern -i
\.(deb|rpm|exe|zip|tar|tgz|ram|rar|bin|ppt|pptx|doc|docx|xls|xlsx|tiff)$
10080 90% 43200 override-expire ignore-no-store ignore-private
refresh_pattern -i \.index.(html|htm)$ 0 40% 10080
refresh_pattern -i \.(html|htm|css|js)$ 1440 40% 40320
refresh_pattern -i exe$ 0 50% 259200
refresh_pattern -i zip$ 0 50% 259200
refresh_pattern -i tar\.gz$ 0 50% 259200
refresh_pattern -i tgz$ 0 50% 259200
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (\.cgi$|/cgi-bin/|\?) 0 0% 0
refresh_pattern . 0 20% 4320

acl SSL_ports port