Re: [squid-users] Squid 4.x and Peek and Splice - Host Header Forgery

2016-10-18 Thread John Wright
Also, here is an example showing the issue when pushing to S3, as well as the
same error with some Google URLs.

2016/10/17 18:33:32 kid1| SECURITY ALERT: Host header forgery detected on
local=209.85.144.113:443 remote=x.x.x.x:62402 FD 49 flags=33 (local IP does
not match any domain IP)
2016/10/17 18:33:32 kid1| SECURITY ALERT: on URL: tools.google.com:443
2016/10/17 18:34:04 kid1| SECURITY ALERT: Host header forgery detected on
local=209.85.144.113:443 remote=x.x.x.x:62405 FD 110 flags=33 (local IP
does not match any domain IP)
2016/10/17 18:34:04 kid1| SECURITY ALERT: on URL: tools.google.com:443
2016/10/17 18:34:45 kid1| SECURITY ALERT: Host header forgery detected on
local=209.85.144.113:443 remote=x.x.x.x:62409 FD 56 flags=33 (local IP does
not match any domain IP)
2016/10/17 18:34:45 kid1| SECURITY ALERT: on URL: tools.google.com:443
2016/10/17 18:35:16 kid1| SECURITY ALERT: Host header forgery detected on
local=209.85.144.113:443 remote=x.x.x.x:62412 FD 65 flags=33 (local IP does
not match any domain IP)
2016/10/17 18:35:16 kid1| SECURITY ALERT: on URL: tools.google.com:443
2016/10/17 18:57:11 kid1| SECURITY ALERT: Host header forgery detected on
local=172.217.17.78:443 remote=x.x.x.x:52958 FD 66 flags=33 (local IP does
not match any domain IP)
2016/10/17 18:57:11 kid1| SECURITY ALERT: on URL:
alt2-safebrowsing.google.com:443
2016/10/17 18:58:00 kid1| SECURITY ALERT: Host header forgery detected on
local=172.217.17.78:443 remote=x.x.x.x:52965 FD 42 flags=33 (local IP does
not match any domain IP)
2016/10/17 18:58:00 kid1| SECURITY ALERT: on URL:
alt2-safebrowsing.google.com:443



Also, please note my dig response time:

;; Query time: 1 msec


And from my DNS server itself:

;; Query time: 2 msec


My BIND server is set up as a simple forwarder which always returns
responses in about 1-2 msec.

So again: big transfers, big files, and time-sensitive requests to some
API all appear to hit the host header forgery problems that Squid reports
and then drops, whenever the request takes longer to process than the TTL
of the DNS entry associated with the traffic.

I have many examples. If I don't use Squid, everything works fine; with
Squid it breaks. My simple point is that Squid is seeing an issue the app
and client themselves don't, and that's OK, but with no way to "disable" or
"work around" the errors for, say, S3 on AWS,

how do I keep using Squid?
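For the record, the check itself cannot be whitelisted per destination (as Amos's note later in the thread explains); what can be configured is making Squid and every client share one caching resolver so both sides see identical answers inside a TTL window. A minimal squid.conf sketch, assuming a local caching BIND/dnsmasq on 127.0.0.1 that the clients also point at; the directive names are real squid.conf options, but verify the defaults against your version's documentation:

```
dns_nameservers 127.0.0.1   # assumption: local caching resolver, shared with clients
host_verify_strict off      # default: log the SECURITY ALERT rather than reject outright
client_dst_passthru on      # default: relay to the IP the client originally dialed
```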


Re: [squid-users] Squid 4.x and Peek and Splice - Host Header Forgery

2016-10-18 Thread John Wright
In response to it not being a false positive: maybe it's not specifically
the TTL, but in this other thread on the mailing list someone else had the
same issue.


Here is the response Amos gave. This is a known issue, and apparently there
is no way to "ignore host header forgery issues" or bypass them in the
squid config.
My understanding is that the short TTL by itself may be OK, but it is small
enough that when a cloud-based client is connecting through server A and
server B to Amazon S3, etc., the request can take a few seconds.
Thus that 5-second TTL (which again is often 2-3 seconds) is small enough
to hurt.

Specifically, some of these providers (AWS, Google) are, in some DNS
situations, doing things that Squid has been known to identify as host
header forgery just because it doesn't understand what's happening.
Also, if I'm doing an S3 call pulling or pushing a big file, which is very
common in cloud environments, it can take 10-20 seconds for the request to
process; if the TTL expires mid-stream, Squid for some reason flags it as
forgery, and it hangs until it either returns to the same IP in DNS by
chance or the connection is dropped.

http://lists.squid-cache.org/pipermail/squid-users/2016-August/012261.html
Here is the note from Amos

>> The cases where Squid still gets it wrong are where the popular CDN
>> service(s) in question are performing DNS actions indistinguishable to
>> those malware attacks. If Squid can't tell the difference between an
>> attack and normal DNS behaviour the only code change possible is to
>> disable the check (see above about the risk level).





-- 
Thank you for your time,

John Wright
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Error DiskThreadsDiskFile::openDone: (2) No such file or directory

2016-10-18 Thread erdosain9
Yes.

cache_dir aufs /var/spool/squid 10 16 256
cache_mem 256 MB




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Error-DiskThreadsDiskFile-openDone-2-No-such-file-or-directory-tp4680142p4680149.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] Squid 4.x and Peek and Splice - Host Header Forgery

2016-10-18 Thread garryd

On 2016-10-18 22:42, John Wright wrote:



The time interval between the client's and Squid's name lookups is measured
in milliseconds. So, in most cases, there would be no false positives in
environments where the same caching DNS server is used.


What specific issue do you encounter, apart from the alert messages and
Squid's inability to cache HTTP responses for "forged" HTTP requests?



Re: [squid-users] Squid 4.x and Peek and Splice - Host Header Forgery

2016-10-18 Thread John Wright
Hi

Replying to the list

Yes, I get that error on many different sites, the same exact error about
host headers.
Also, if you watch the TTL on the amazonaws URL I provided, it changes from
3 to 5 to 10 seconds to 60 to 10, back and forth.
If I go to an online DNS lookup site like kloth, I see a 5-second TTL via
kloth.

I get a different TTL value at different times; it appears they don't have
a set TTL, but change it often and it varies.
Right now it appears to be a TTL of 60 seconds, as you found, but earlier
and over the weekend it has shown 5 seconds, and even AWS support verified
it can vary as low as 5 seconds.
That being said, when it is changing every 3-5 seconds, which comes and
goes, Squid gives the header forgery errors as shown before.








-- 
Thank you for your time,

John Wright


Re: [squid-users] squid change "method patch" to "method other"

2016-10-18 Thread Alex Rousskov
On 10/18/2016 09:42 AM, magali isnard wrote:

> I have Squid running, version 3.4.12. We have a piece of software that
> tries to send a PATCH request to the OCS server, but when Squid intercepts
> the packet it changes the method into "METHOD_OTHER". So I get an error
> message: {"status":405,"type":"about:blank","title":"Method Not
> Allowed","detail":"No route found for \"METHOD_OTHER \/users\/144\":
> Method Not Allowed (Allow: GET, HEAD, PUT, PATCH)"}.
> 
> I have found no resources on this problem; can you help me understand it?

I recommend trying Squid v3.5. IIRC, support for custom methods has
improved in v3.5, but I have not checked the change log to confirm.

Alex.
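A side note, not from the thread: PATCH is an ordinary request-line token in HTTP, so a proxy that forwards unknown methods verbatim would not provoke this 405. The stdlib sketch below demonstrates that on localhost with no proxy involved; the path /users/144 is borrowed from the error message, everything else is illustrative.

```python
# Minimal PATCH round trip: http.server dispatches on the literal method
# token ("do_" + method), so a do_PATCH handler is all the server needs.
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class PatchHandler(BaseHTTPRequestHandler):
    def do_PATCH(self):
        length = int(self.headers.get("Content-Length", 0))
        self.rfile.read(length)          # consume the request body
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):        # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), PatchHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("PATCH", "/users/144", body=b'{"name": "x"}',
             headers={"Content-Type": "application/json"})
resp = conn.getresponse()
status, text = resp.status, resp.read().decode()
server.shutdown()
print(status, text)   # 200 ok
```

If the same request goes through an intercepting Squid old enough to map unknown methods to METHOD_OTHER, the backend sees the mangled method and answers 405, which matches the error quoted above.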




Re: [squid-users] Squid 4.x and Peek and Splice - Host Header Forgery

2016-10-18 Thread garryd

On 2016-10-18 18:32, John Wright wrote:


Hi,

Are you sure that Squid and all your clients use the same _caching_ DNS
server? For example, here are results from my server for the name
sls.update.microsoft.com:


$ dig sls.update.microsoft.com
...
sls.update.microsoft.com. 3345  IN  CNAME  sls.update.microsoft.com.nsatc.net.
sls.update.microsoft.com.nsatc.net. 215  IN  A  157.56.77.141
...


Second request after 3 seconds:

$ dig sls.update.microsoft.com
...
sls.update.microsoft.com. 3342  IN  CNAME  sls.update.microsoft.com.nsatc.net.
sls.update.microsoft.com.nsatc.net. 212  IN  A  157.56.77.141
...


Here I see that the TTL for the target A record is 300 seconds (not 5
seconds), and a _caching_ DNS server will serve the same A record to all
clients for at least 5 minutes. That behaviour will not introduce false
positives for host forgery detection.




On the other hand, if the DNS server is not a _caching_ one, you would get
different A records for each request. For example, below are results from
an authoritative DNS server for the zone nsatc.net:



$ dig @e.ns.nsatc.net sls.update.microsoft.com.nsatc.net
...
sls.update.microsoft.com.nsatc.net. 300  IN  A  157.55.240.220
...


Second request after 5 seconds:

$ dig @e.ns.nsatc.net sls.update.microsoft.com.nsatc.net
...
sls.update.microsoft.com.nsatc.net. 300  IN  A  157.56.96.54
...


Here I see that the DNS server serves exactly one A record, in round-robin
fashion. The same is true for Google's public DNS service. That behavior
can cause trouble for host forgery detection.
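The rule behind the alerts condenses to a few lines. This is my own sketch of the check's essence, not Squid source, reusing the IPs from the dig output above:

```python
# Sketch of Squid's host-verify rule for intercepted/spliced traffic:
# Squid re-resolves the Host/SNI name itself and requires the TCP
# destination IP the client actually connected to in the answer set.
def host_verify(client_dst_ip: str, squid_resolved_ips: set) -> bool:
    """True when the connection passes the check (no forgery alert)."""
    return client_dst_ip in squid_resolved_ips

# Shared caching resolver: client and Squid see the same record -> passes.
shared = host_verify("157.56.77.141", {"157.56.77.141"})

# Round-robin authoritative answers: the client got one A record, Squid got
# another a few seconds later -> fails, although nothing was forged.
round_robin = host_verify("157.55.240.220", {"157.56.96.54"})

print(shared, round_robin)   # True False
```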


HTH

Garri


[squid-users] squid change "method patch" to "method other"

2016-10-18 Thread magali isnard
Hello,
I have Squid running, version 3.4.12. We have a piece of software that
tries to send a PATCH request to the OCS server, but when Squid intercepts
the packet it changes the method into "METHOD_OTHER". So I get an error
message:
{"status":405,"type":"about:blank","title":"Method Not Allowed","detail":"No 
route found for \"METHOD_OTHER \/users\/144\": Method Not Allowed (Allow: GET, 
HEAD, PUT, PATCH)"}
I have found no resources on this problem; can you help me understand it?
Thank you



Re: [squid-users] Error DiskThreadsDiskFile::openDone: (2) No such file or directory

2016-10-18 Thread FredB
Aufs ?

Fred


[squid-users] Error DiskThreadsDiskFile::openDone: (2) No such file or directory

2016-10-18 Thread erdosain9
Hi.
squid 3.5.20

I'm seeing a lot of these in cache.log:

2016/10/18 10:36:11 kid1| DiskThreadsDiskFile::openDone: (2) No such file or
directory
2016/10/18 10:36:11 kid1|   /var/spool/squid/00/92/92E9
2016/10/18 10:36:14 kid1| DiskThreadsDiskFile::openDone: (2) No such file or
directory
2016/10/18 10:36:14 kid1|   /var/spool/squid/00/AA/AA46
2016/10/18 10:36:16 kid1| DiskThreadsDiskFile::openDone: (2) No such file or
directory
2016/10/18 10:36:16 kid1|   /var/spool/squid/00/AA/AA48
2016/10/18 10:36:16 kid1| DiskThreadsDiskFile::openDone: (2) No such file or
directory
2016/10/18 10:36:16 kid1|   /var/spool/squid/00/AA/AA49
2016/10/18 10:36:16 kid1| DiskThreadsDiskFile::openDone: (2) No such file or
directory
2016/10/18 10:36:16 kid1|   /var/spool/squid/00/AA/AA4B
2016/10/18 10:36:16 kid1| DiskThreadsDiskFile::openDone: (2) No such file or
directory
2016/10/18 10:36:16 kid1|   /var/spool/squid/00/AA/AA4C
2016/10/18 10:36:20 kid1| DiskThreadsDiskFile::openDone: (2) No such file or
directory
2016/10/18 10:36:20 kid1|   /var/spool/squid/00/AA/AA60
2016/10/18 10:36:21 kid1| DiskThreadsDiskFile::openDone: (2) No such file or
directory
2016/10/18 10:36:21 kid1|   /var/spool/squid/00/AA/AA67
2016/10/18 10:36:21 kid1| DiskThreadsDiskFile::openDone: (2) No such file or
directory
2016/10/18 10:36:21 kid1|   /var/spool/squid/00/AA/AA66
2016/10/18 10:36:21 kid1| DiskThreadsDiskFile::openDone: (2) No such file or
directory
2016/10/18 10:36:21 kid1|   /var/spool/squid/00/AA/AA65
2016/10/18 10:36:33 kid1| DiskThreadsDiskFile::openDone: (2) No such file or
directory
2016/10/18 10:36:33 kid1|   /var/spool/squid/00/AA/AA10
2016/10/18 10:36:33 kid1| DiskThreadsDiskFile::openDone: (2) No such file or
directory
2016/10/18 10:36:33 kid1|   /var/spool/squid/00/AA/AA8C
2016/10/18 10:36:33 kid1| DiskThreadsDiskFile::openDone: (2) No such file or
directory
2016/10/18 10:36:33 kid1|   /var/spool/squid/00/AA/AA98
2016/10/18 10:36:33 kid1| DiskThreadsDiskFile::openDone: (2) No such file or
directory
2016/10/18 10:36:33 kid1|   /var/spool/squid/00/AA/AA18
2016/10/18 10:36:33 kid1| DiskThreadsDiskFile::openDone: (2) No such file or
directory
2016/10/18 10:36:33 kid1|   /var/spool/squid/00/AA/AA93
2016/10/18 10:36:33 kid1| DiskThreadsDiskFile::openDone: (2) No such file or
directory
2016/10/18 10:36:33 kid1|   /var/spool/squid/00/AA/AA9A
2016/10/18 10:36:34 kid1| DiskThreadsDiskFile::openDone: (2) No such file or
directory
2016/10/18 10:36:34 kid1|   /var/spool/squid/00/70/704B

What can I do? Thanks.



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Error-DiskThreadsDiskFile-openDone-2-No-such-file-or-directory-tp4680142.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] CentOS 6.x and SELinux enforcing with Squid 3.5.x (thanks to Eliezer Croitoru for the RPM)

2016-10-18 Thread Garri Djavadyan
On Tue, 2016-10-18 at 14:56 +0200, Walter H. wrote:

Hi Walter,

Have you tried to move helpers to '/usr/lib64/squid/' and ensure that
the label for them is 'lib_t'?

Garri


[squid-users] Squid 4.x and Peek and Splice - Host Header Forgery

2016-10-18 Thread John Wright
Hi,

I have a constant problem with Host header forgery detection on Squid doing
peek and splice.

I see this most commonly with CDNs, Amazon, and Microsoft, due to the fact
that the TTL is only 5 seconds on certain DNS entries I'm connecting to. So
when my client connects through my Squid, I get host header issues due to
the constant DNS changes at these destinations.

I have read many things online, but how do I get around this? I basically
want to allow certain domains or IP subnets to not hit the host header
error (as things break at this point for me).

Any ideas?

One example is

sls.update.microsoft.com

Yes, my client and Squid use the same DNS server; I have even set up my
Squid host as a BIND server and tried that just for fun, with the same
issue. The fact is that the DNS at these places changes so fast (5 seconds)
that the DNS response keeps changing.


I just need these approved destinations to make it through.


Re: [squid-users] Squid is not responding when the number of connection exceeds

2016-10-18 Thread georgej
Hi Eliezer,

Thanks for your reply.

I made the changes as per your suggestion, but I faced the same issue
again. Then I used another ISP link to test the load, and now it seems to
be working fine. I will put it live later and let you know the status.

ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 7369
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65535
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 65535
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
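As a side note: if the stalls turn out to be descriptor exhaustion, the shell limit above also has to be granted inside Squid. `max_filedescriptors` is a real squid.conf directive; the value below is only an illustration and should not exceed the `open files` ulimit shown:

```
# squid.conf fragment (sketch): let Squid use up to the OS limit above
max_filedescriptors 65535
```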

Thanks,
George



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-is-not-responding-when-the-number-of-connection-exceeds-tp4680091p4680139.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] CentOS 6.x and SELinux enforcing with Squid 3.5.x (thanks to Eliezer Croitoru for the RPM)

2016-10-18 Thread Walter H.
On Tue, October 18, 2016 13:31, Garri Djavadyan wrote:
> Hi,
>
> Have you tried to use default policy and relabel target dirs/files
> using types dedicated for squid? For example:
>
> # semanage fcontext -l | grep squid
> ...

my output differs a little bit; and yes, the target files/dirs are labeled
as dedicated;

don't ask me why, but I have two CentOS 6.x VMs (each fully up to date):
one with the official package (release 3.1.23) and one with this 3.5.20
RPM package;

with 3.1.x there is no problem with

url_rewrite_program /etc/squid/url-rewrite-program.pl
url_rewrite_children 8
url_rewrite_host_header on
url_rewrite_access allow all

but with 3.5.x there is an access denial (shown in
/var/log/audit/audit.log) and squid doesn't start;

specific to the 3.5.x release, I added a certificate validator helper,
which also has problems ...

with this semodule package everything works fine ...

so there must be something different between these two releases;

with SELinux disabled or permissive there is no need for this semodule
package;

Greetings,
Walter




Re: [squid-users] CentOS 6.x and SELinux enforcing with Squid 3.5.x (thanks to Eliezer Croitoru for the RPM)

2016-10-18 Thread Garri Djavadyan
On Tue, 2016-10-18 at 13:02 +0200, Walter H. wrote:
> Hello,
> 
> just in case anybody wants to run Squid 3.5.x on CentOS
> with SELinux enforcing,
> 
> here is the semodule
> 
> 
> module squid_update 1.0;
> 
> require {
> type squid_conf_t;
> type squid_t;
> type var_t;
> class file { append open read write getattr lock
> execute_no_trans };
> }
> 
> #= squid_t ==
> allow squid_t squid_conf_t:file execute_no_trans;
> allow squid_t var_t:file { append open read write getattr lock };
> 
> 
> and do the following:
> 
> checkmodule -M -m -o squid_update.mod squid_update.tt
> semodule_package -o squid_update.pp -m squid_update.mod
> semodule -i squid_update.pp

Hi,

Have you tried to use default policy and relabel target dirs/files
using types dedicated for squid? For example:

# semanage fcontext -l | grep squid
/etc/squid(/.*)?                all files     system_u:object_r:squid_conf_t:s0
/var/run/squid.*                all files     system_u:object_r:squid_var_run_t:s0
/var/log/squid(/.*)?            all files     system_u:object_r:squid_log_t:s0
/usr/share/squid(/.*)?          all files     system_u:object_r:squid_conf_t:s0
/var/cache/squid(/.*)?          all files     system_u:object_r:squid_cache_t:s0
/var/spool/squid(/.*)?          all files     system_u:object_r:squid_cache_t:s0
/usr/sbin/squid                 regular file  system_u:object_r:squid_exec_t:s0
/etc/rc\.d/init\.d/squid        regular file  system_u:object_r:squid_initrc_exec_t:s0
/usr/lib/squid/cachemgr\.cgi    regular file  system_u:object_r:httpd_squid_script_exec_t:s0
/usr/lib64/squid/cachemgr\.cgi  regular file  system_u:object_r:httpd_squid_script_exec_t:s0
