[squid-users] Re: YouTube Resolution Locker

2014-07-25 Thread Stakres
Hi All,

Feel free to modify the script (client side) so it does not send all requests.
As Cassiano said, only the YouTube URLs need to be rewritten...

Bye Fred



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/YouTube-Resolution-Locker-tp4667042p4667054.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] Re: Sibling cache peer for a HTTPS reverse proxy

2014-07-25 Thread Amos Jeffries
On 26/07/2014 11:44 a.m., Makson wrote:
> Thanks for your reminder. I think the HTML RAW tag caused the problem;
> sending the log again.
> 
> Some records found in access.log in server b, 
> 
> 1406185920.441   1282 172.17.210.5 TCP_MISS/200 814 GET
> https://serverb.domain:9443/ccm/service/com.ibm.team.scm.common.IVersionedContentService/content/com.ibm.team.filesystem/FileItem/_houAAK2yEeOvOJ84krOqLg/_EPGIsq20EeOEJLtkkn17bg/h2LjUv8WJVDwJ3rcbA6_u3fNuJylQ0sQlSZdRL_IMkA
> - FIRSTUP_PARENT/172.17.96.148 application/octet-stream
> 1406185921.151  46349 172.17.210.5 TCP_MISS/200 219202 GET
> https://serverb.domain:9443/ccm/service/com.ibm.team.scm.common.IVersionedContentService/content/com.ibm.team.filesystem/FileItem/_hpCwIK2yEeOvOJ84krOqLg/_EN-HVK20EeOEJLtkkn17bg/rnslrsXloPXpudCIXRFjShexoc97mr7-2RxWPs7pVnI
> - FIRSTUP_PARENT/172.17.96.148 application/octet-stream
> 
> 
> All records found in access.log in server a, 
> 
> 1406185543.094  0 172.17.192.145 UDP_MISS/000 124 ICP_QUERY
> https://serverb.domain:9443/ccm/authenticated/identity?redirectPath=%2Fccm%2Fjauth-issue-token
> - HIER_NONE/- -
> 1406185544.871  0 172.17.192.145 UDP_MISS/000 79 ICP_QUERY
> https://serverb.domain:9443/ccm/auth/authrequired - HIER_NONE/- -
> 1406185565.202  0 172.17.192.145 UDP_MISS/000 124 ICP_QUERY
> https://serverb.domain:9443/ccm/authenticated/identity?redirectPath=%2Fccm%2Fjauth-issue-token
> - HIER_NONE/- -
> 1406185566.732  0 172.17.192.145 UDP_MISS/000 79 ICP_QUERY
> https://serverb.domain:9443/ccm/auth/authrequired - HIER_NONE/- -
> 1406185615.090  0 172.17.192.145 UDP_MISS/000 124 ICP_QUERY
> https://serverb.domain:9443/ccm/authenticated/identity?redirectPath=%2Fccm%2Fjauth-issue-token
> - HIER_NONE/- -
> 

Showing that server B is in fact querying server A for the objects. But
it would seem that server A did not have them cached.

It may be that these responses use the Vary: header. ICP does not handle
that type of response properly. You may get better behaviour using HTCP
instead of ICP between the siblings.
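For reference, the switch is made on the cache_peer line; a hedged sketch (the hostname and ports are placeholders, 4827 being the conventional HTCP port):

```
# squid.conf on server B - query the sibling via HTCP instead of ICP
# (hostname and ports are examples; adjust to your deployment)
cache_peer servera.domain sibling 3128 4827 htcp
```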


I also note that you have 40GB of RAM allocated to each of these Squid
instances. Do you actually have over 100GB of RAM on those machines
(*excluding* swap space)?

Amos



Re: [squid-users] YouTube Resolution Locker

2014-07-25 Thread Cassiano Martin
Yes, as the YouTube accelerator cache does.

Only YouTube URLs need to be rewritten, so you don't need to forward
all URLs to storeid.
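A minimal squid.conf sketch of that restriction, assuming the Squid 3.4+ StoreID directives (the domain list here is an assumption):

```
# Only hand YouTube/Google-video URLs to the StoreID helper
acl youtube dstdomain .youtube.com .googlevideo.com
store_id_access allow youtube
store_id_access deny all
```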

2014-07-25 23:25 GMT-03:00 Amm :
> On 07/25/2014 09:03 PM, Stakres wrote:
>>
>> Hi All,
>>
>> Free API to lock resolution in YouTube players via your preferred Squid
>> Cache.
>> https://sourceforge.net/projects/youtuberesolutionlocker/
>
>
> BIG WARNING:
>
> I looked at the script out of curiosity. It sends all queries to
> storeid.unveiltech.com in background.
>
> Amm


Re: [squid-users] Change Protocol of Squid Error Pages

2014-07-25 Thread Amos Jeffries
On 26/07/2014 5:42 a.m., max wrote:
> On 25.07.2014 13:38, Amos Jeffries wrote:
>> On 25/07/2014 9:09 p.m., max wrote:
>>> Hey there,
>>> i'm wondering is it possible to change the protocol of Squid error
>>> Pages?
>>>
>>> For Example:
>>>
>>> When squid redirects to "deny_info 307:ERR_BLOCK" the request is made in
>>> http but i want to use https.
>>> Is that possible?
>>> I am not able to use https://somedomain because of dynamic content on
>>> the Error Page.
>> You answered your own question right there.
>>
>> The 307 code is just an instruction for the client to fetch a different
>> URL - the one following the ':' in deny_info parameter. That can be any
>> valid URI. Including https:// ones.
>>
>> Dynamic content in the page that deny_info URL presents has nothing to
>> do with Squid.
>>
>> Amos
>>
>>
> Well yes, in my case it does.
> I use Squid to load the dynamic content. My ERR_BLOCK calls a page with
> an iframe, which loads the content.
> So I would need to call the URI with some kind of variable: a
> token to fetch the iframe data,
> like
> https://somepage.tld/?=randomtokenhere
> But I don't know if there is a way I can do that within squid.conf.
> 
> Cheers
> Max


  "deny_info 307:ERR_BLOCK"

causes Squid to generate the HTTP response message:

 HTTP/1.1 307 Temporary Redirect\r\n
 Location: ERR_BLOCK\r\n
 \r\n

Please see  for the
available macro codes. This may require you to upgrade your Squid if it
is older than 3.2.
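As a hedged illustration of the kind of redirect being discussed (assuming %u expands to the requested URL, and using a hypothetical blocked_sites ACL; a per-request random token is not something squid.conf macros provide):

```
# squid.conf - redirect denied requests to an HTTPS page, passing
# the originally requested URL along (%u is assumed to expand to it)
deny_info 307:https://somepage.tld/blocked?url=%u blocked_sites
```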

Amos


Re: [squid-users] YouTube Resolution Locker

2014-07-25 Thread Amm

On 07/25/2014 09:03 PM, Stakres wrote:

Hi All,

Free API to lock resolution in YouTube players via your preferred Squid
Cache.
https://sourceforge.net/projects/youtuberesolutionlocker/


BIG WARNING:

I looked at the script out of curiosity. It sends all queries to 
storeid.unveiltech.com in background.


Amm


Re: [squid-users] Re: Never used Squid, need to access it

2014-07-25 Thread Cassiano Martin
If you don't know where squid.conf is, you can locate it by running:

find / -name squid.conf 2>/dev/null

It will print the full path to the config file. Once found, you will need
to know how to manage it.

Your server might be running an old Squid version, so pay attention to
newer ACL types that your Squid may not accept.

2014-07-25 7:59 GMT-03:00 babajaga :
>>how to actually access the software itself. <
>
> Pls, be more specific. What do you want to know or achieve?
>
> (Usually the config files are found either in /etc or in
> /usr/local/squid/etc.)
> Search for squid.conf. That's the entry point for the features used.
>
> Depending on, whether squid has been installed from a binary package, or
> not, you also might find sources.
>
>
>
>
> --
> View this message in context: 
> http://squid-web-proxy-cache.1019090.n4.nabble.com/Never-used-Squid-need-to-access-it-tp4667025p4667026.html
> Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Sibling cache peer for a HTTPS reverse proxy

2014-07-25 Thread Makson
Thanks for your reminder. I think the HTML RAW tag caused the problem;
sending the log again.

Some records found in access.log in server b, 

1406185920.441   1282 172.17.210.5 TCP_MISS/200 814 GET
https://serverb.domain:9443/ccm/service/com.ibm.team.scm.common.IVersionedContentService/content/com.ibm.team.filesystem/FileItem/_houAAK2yEeOvOJ84krOqLg/_EPGIsq20EeOEJLtkkn17bg/h2LjUv8WJVDwJ3rcbA6_u3fNuJylQ0sQlSZdRL_IMkA
- FIRSTUP_PARENT/172.17.96.148 application/octet-stream
1406185921.151  46349 172.17.210.5 TCP_MISS/200 219202 GET
https://serverb.domain:9443/ccm/service/com.ibm.team.scm.common.IVersionedContentService/content/com.ibm.team.filesystem/FileItem/_hpCwIK2yEeOvOJ84krOqLg/_EN-HVK20EeOEJLtkkn17bg/rnslrsXloPXpudCIXRFjShexoc97mr7-2RxWPs7pVnI
- FIRSTUP_PARENT/172.17.96.148 application/octet-stream


All records found in access.log in server a, 

1406185543.094  0 172.17.192.145 UDP_MISS/000 124 ICP_QUERY
https://serverb.domain:9443/ccm/authenticated/identity?redirectPath=%2Fccm%2Fjauth-issue-token
- HIER_NONE/- -
1406185544.871  0 172.17.192.145 UDP_MISS/000 79 ICP_QUERY
https://serverb.domain:9443/ccm/auth/authrequired - HIER_NONE/- -
1406185565.202  0 172.17.192.145 UDP_MISS/000 124 ICP_QUERY
https://serverb.domain:9443/ccm/authenticated/identity?redirectPath=%2Fccm%2Fjauth-issue-token
- HIER_NONE/- -
1406185566.732  0 172.17.192.145 UDP_MISS/000 79 ICP_QUERY
https://serverb.domain:9443/ccm/auth/authrequired - HIER_NONE/- -
1406185615.090  0 172.17.192.145 UDP_MISS/000 124 ICP_QUERY
https://serverb.domain:9443/ccm/authenticated/identity?redirectPath=%2Fccm%2Fjauth-issue-token
- HIER_NONE/- -



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Sibling-cache-peer-for-a-HTTPS-reverse-proxy-tp4667011p4667048.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] Trouble with Session Handler

2014-07-25 Thread Cemil Browne
Hi Amos,

Thanks so much for the prompt reply. I've got it working, but
please see inline below:

On 25 July 2014 21:30, Amos Jeffries  wrote:
> On 25/07/2014 7:13 p.m., Cemil Browne wrote:
>> Hi all, I'm trying to set up a situation as follows:  I have a web
>> server at [server]:80   .  I've got squid installed on [server]:3000 .
>
> This is back to front.
>
> Squid should be the gateway listening on [server]:80, with the web
> server listening on a private IP of the machine, also port 80 if
> possible (ie localhost:80).

Agreed - for testing purposes at this point, final IPs/Ports TBD.
Thank you for the advice.
>
>
>> The requirement is to ensure that any request to web server protected
>> content (/FP/*) is redirected to a splash page (terms and conditions),
>> accepted, then allowed.  I've got most of the way, but the last bit
>> doesn't work.  This is on a private network.
>>
>> Squid config:
>>
>> http_port 3000 accel defaultsite=192.168.56.101
>> cache_peer 127.0.0.1 parent 80 0 no-query originserver
>>
>>
>> external_acl_type session ttl=3 concurrency=100 %SRC
>> /usr/lib/squid/ext_session_acl -a -T 60
>>
>> acl session_login external session LOGIN
>>
>> external_acl_type session_active_def ttl=3 concurrency=100 %SRC
>> /usr/lib/squid/ext_session_acl -a -T 60
>>
>
> Each of the above two external_acl_type definitions runs different
> helper instances. Since you have not defined an on-disk database that
> they share, the session data will be stored in memory by whichever one
> is starting the sessions, but is inaccessible to the one checking whether
> a session exists.

Interesting - I've changed this and it works, however, I was following
the instructions at:

http://wiki.squid-cache.org/ConfigExamples/Portal/Splash

which has two different external_acl_type definitions - agreed that
the example on the wiki stores to disk, but I tried that as well.
Perhaps I pointed it at a file rather than a directory (/tmp/session.db)
and that's the issue?
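For what it's worth, the helper's persistent store is normally selected with its -b option; a sketch under that assumption (paths are examples):

```
# squid.conf - one shared helper definition backed by an on-disk database
external_acl_type session ttl=3 concurrency=100 %SRC \
    /usr/lib/squid/ext_session_acl -a -b /var/lib/squid/session.db -T 60

acl session_login external session LOGIN
acl session_is_active external session
```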

>
>
>> acl session_is_active external session_active_def
>>
>
> What you should have is exactly *1* external_acl_type directive, used by
> two different acl directives.
>
> Like so:
>   external_acl_type session ttl=3 concurrency=100 %SRC
> /usr/lib/squid/ext_session_acl -a -T 60
>
>   acl session_login external session LOGIN
>   acl session_is_active external session
>
>> acl accepted_url url_regex -i accepted.html.*
>> acl splash_url url_regex -i ^http://192.168.56.101:3000/splash.html$
>> acl protected url_regex FP.*
>
> Regex has an implicit .* before and after every pattern unless a ^ or $
> anchor is specified. You do not have to write the .*

Thanks again - good to know.

>
> Also, according to your policy description that last pattern should be
> matching path prefix "/FP" not any URL containing "FP".
>
>>
>> http_access allow splash_url
>> http_access allow accepted_url session_login
>>
>> http_access deny protected !session_is_active
>>
>> deny_info http://192.168.56.101:3000/splash.html session_is_active
>
> It is best to use splash.html as a static page delivered in place of the
> access denied page:
>  deny_info splash.html session_is_active
>
> then have the ToC accept button URL be the one which begins the session.
>
> So stitching the above changes into your squid.conf you should have this:
>
>   http_port 192.168.56.101:80 accel defaultsite=192.168.56.101
>   cache_peer 127.0.0.1 parent 80 0 no-query originserver
>
>   external_acl_type session ttl=3 concurrency=100 %SRC
> /usr/lib/squid/ext_session_acl -a -T 60
>
>   acl session_login external session LOGIN
>   acl session_is_active external session
>   deny_info /etc/squid/splash.html session_is_active
>
>   acl accepted_url urlpath_regex -i accepted.html$
>   acl splash_url url_regex -i ^http://192.168.56.101/splash.html$
>   acl protected urlpath_regex ^/FP
>
>   http_access allow splash_url
>   http_access allow accepted_url session_login
>   http_access deny protected !session_is_active
>
>
> Amos

Thanks again - I've made some minor tweaks to what you've put above
and this is now working.  I really appreciate the help on this one -
got me over a serious hump!

Thanks,
Cemil


Re: [squid-users] Re: Sibling cache peer for a HTTPS reverse proxy

2014-07-25 Thread Alex Rousskov
On 07/24/2014 02:57 AM, Makson wrote:
> And here are some records found in access.log in server b,
> 
> 
> 
> here are ALL records found in access.log in server a,
> 
> 
> 
> 
> 
> --
> Sent from the Squid - Users mailing list archive at Nabble.com.

Just FYI: The above is how Nabble sends your email to the mailing list.
Notice that all the log lines are gone.


HTH,

Alex.



Re: [squid-users] Set up squid as a transparent proxy

2014-07-25 Thread Israel Brewster
Ok, I think I finally got this working. It took a combination of using 
divert-to in the pf.conf, and intercept (rather than tproxy or transparent) in 
squid.conf. At any rate, basic functionality appears to be restored. So now I 
just need to expand the system to the full level of functionality that I need. 
Thanks for bearing with me!
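For the record, the working combination described above would look roughly like this (a sketch assembled from values quoted earlier in the thread; not a verified config):

```
# pf.conf - divert port-80 traffic from the inside network to Squid
pass in quick inet proto tcp from 192.168.10.0/24 to any port 80 divert-to 127.0.0.1 port 3129

# squid.conf - intercept (rather than tproxy) paired with divert-to
http_port 3129 intercept
http_port 3128
```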

---
Israel Brewster
Systems Analyst II
Ravn Alaska
5245 Airport Industrial Rd
Fairbanks, AK 99709
(907) 450-7293
---



On Jul 25, 2014, at 8:38 AM, Israel Brewster  wrote:

> On Jul 25, 2014, at 3:32 AM, Amos Jeffries  wrote:
> 
>> On 25/07/2014 10:15 a.m., Israel Brewster wrote:
>>> I have been using Squid 2.9 on OpenBSD 5.0 for a while as a transparent 
>>> proxy. PF on the proxy box rdr-to redirects all web requests not destined 
>>> for the box itself to squid running on port 3128. Squid then processes the 
>>> request based on a series of ACLs, and either allows the request or 
>>> redirects (deny_info ... all) the request to a page on the proxy box.
>>> 
>> 
>> There are some big changes in OpenBSD between those versions. Have you
>> tried divert-to in the PF rules and tproxy option on the Squid http_port ?
>> 
>> Amos
> 
> I figured as much. Thus the reason I am going back to just trying to get a 
> basic setup working. So I have now gone back to the default config files for 
> pf and squid. 
> 
> First, I set up PF to just do basic routing (no squid) and made sure that 
> worked by adding the single line (along with some macros):
> 
> match out on $outsideIF from !(outsideIF:network) nat-to $OutsideIP
> 
> I was then able to properly access webpages through the box. So far so good. 
> I then followed this guide: 
> http://wiki.squid-cache.org/ConfigExamples/Intercept/OpenBsdPf, which uses 
> tproxy and divert-to, as you suggested. Other than the changes listed in the 
> guide, I also stripped down the squid http_access rules to the basic "block 
> all but a few" set I listed earlier, and added an extra http_port line (with 
> no modifiers) to avoid errors on startup. The only set skip rule I have in PF 
> is set skip on lo, which should be fine (I think).
> 
> At this point, from what I can tell, everything was broken. Attempting to 
> connect to a website through the box now returns (using firefox) "Unable to 
> connect. Firefox can't establish a connection to the server at ..." 
> regardless of the site I attempt to connect to. Perhaps more to the point, 
> squid running in debug mode shows no indication of an attempted connection. 
> 
> looking at the PF.log shows the following when I attempt to connect to a 
> webpage:
> 
> 08:28:50.954386 rule 0/(match) match in on em0: 192.168.10.51.49635 > 
> 96.30.50.156.80: S 2366946536:2366946536(0) win 65535  4,nop,nop,timestamp 721039242 0,sackOK,eol> (DF)
> 08:28:50.954393 rule 2/(match) pass in on em0: 192.168.10.51.49635 > 
> 96.30.50.156.80: S 2366946536:2366946536(0) win 65535  4,nop,nop,timestamp 721039242 0,sackOK,eol> (DF)
> 08:28:50.954398 rule 2/(match) pass in on em0: 192.168.10.51.49635 > 
> 96.30.50.156.80: S 2366946536:2366946536(0) win 65535  4,nop,nop,timestamp 721039242 0,sackOK,eol> (DF)
> 
> Where rule 0 is the logging rule (match log (matches) inet from 
> 192.168.10.0/24 to any) and rule 2 is the divert-to rule (pass in quick inet 
> proto tcp from 192.168.10.0/24 to any port = 80 flags S/SA divert-to 
> 127.0.0.1 port 3129)
> 
> Squid debugging output shows nothing, as I mentioned - no attempted 
> connection, no activity of any kind, although the startup sequence does show 
> "Accepting TPROXY intercepted HTTP Socket connections at local=127.0.0.1:3129 
> remote=[::] FD 9 flags=25", which would appear to indicate that it IS 
> listening on port 3129, which is what PF is (supposedly) diverting to. Using 
> rdr-to in pf, at least I saw the attempted connection in squid, and got a 
> return page from squid, although it never let anything through (perhaps due 
> to the redirection loop?). 
> 
> So to summarize, at this point I have added the following three lines to 
> pf.conf (my inside network is 192.168.10.0/24, and the interface IP on the 
> inside NIC is 192.168.10.1):
> 
> match out on $outsideIF from !(outsideIF:network) nat-to $OutsideIP
> pass in quick inet proto tcp from 192.168.10.0/24 to port www divert-to 
> 127.0.0.1 port 3129
> pass out quick inet from 192.168.10.0/24 divert-reply
> 
> And my squid.conf contains the following:
> 
> acl authorized_hosts dstdomain .google.com
> acl authorized_hosts dstd

Re: [squid-users] Change Protocol of Squid Error Pages

2014-07-25 Thread max

On 25.07.2014 13:38, Amos Jeffries wrote:

On 25/07/2014 9:09 p.m., max wrote:

Hey there,
I'm wondering, is it possible to change the protocol of Squid error pages?

For example:

When Squid redirects to "deny_info 307:ERR_BLOCK" the request is made in
HTTP, but I want to use HTTPS.
Is that possible?
I am not able to use https://somedomain because of dynamic content on
the error page.

You answered your own question right there.

The 307 code is just an instruction for the client to fetch a different
URL - the one following the ':' in deny_info parameter. That can be any
valid URI. Including https:// ones.

Dynamic content in the page that deny_info URL presents has nothing to
do with Squid.

Amos



Well yes, in my case it does.
I use Squid to load the dynamic content. My ERR_BLOCK calls a page with
an iframe, which loads the content.
So I would need to call the URI with some kind of variable: a
token to fetch the iframe data,

like
https://somepage.tld/?=randomtokenhere
But I don't know if there is a way I can do that within squid.conf.

Cheers
Max


Re: [squid-users] Set up squid as a transparent proxy

2014-07-25 Thread Israel Brewster
On Jul 25, 2014, at 3:32 AM, Amos Jeffries  wrote:

> On 25/07/2014 10:15 a.m., Israel Brewster wrote:
>> I have been using Squid 2.9 on OpenBSD 5.0 for a while as a transparent 
>> proxy. PF on the proxy box rdr-to redirects all web requests not destined 
>> for the box itself to squid running on port 3128. Squid then processes the 
>> request based on a series of ACLs, and either allows the request or 
>> redirects (deny_info ... all) the request to a page on the proxy box.
>> 
> 
> There are some big changes in OpenBSD between those versions. Have you
> tried divert-to in the PF rules and tproxy option on the Squid http_port ?
> 
> Amos

I figured as much. Thus the reason I am going back to just trying to get a 
basic setup working. So I have now gone back to the default config files for pf 
and squid. 

First, I set up PF to just do basic routing (no squid) and made sure that 
worked by adding the single line (along with some macros):

match out on $outsideIF from !(outsideIF:network) nat-to $OutsideIP

I was then able to properly access webpages through the box. So far so good. I 
then followed this guide: 
http://wiki.squid-cache.org/ConfigExamples/Intercept/OpenBsdPf, which uses 
tproxy and divert-to, as you suggested. Other than the changes listed in the 
guide, I also stripped down the squid http_access rules to the basic "block all 
but a few" set I listed earlier, and added an extra http_port line (with no 
modifiers) to avoid errors on startup. The only set skip rule I have in PF is 
set skip on lo, which should be fine (I think).

At this point, from what I can tell, everything was broken. Attempting to 
connect to a website through the box now returns (using firefox) "Unable to 
connect. Firefox can't establish a connection to the server at ..." regardless 
of the site I attempt to connect to. Perhaps more to the point, squid running 
in debug mode shows no indication of an attempted connection. 

Looking at the pf.log shows the following when I attempt to connect to a
webpage:

08:28:50.954386 rule 0/(match) match in on em0: 192.168.10.51.49635 > 
96.30.50.156.80: S 2366946536:2366946536(0) win 65535  (DF)
08:28:50.954393 rule 2/(match) pass in on em0: 192.168.10.51.49635 > 
96.30.50.156.80: S 2366946536:2366946536(0) win 65535  (DF)
08:28:50.954398 rule 2/(match) pass in on em0: 192.168.10.51.49635 > 
96.30.50.156.80: S 2366946536:2366946536(0) win 65535  (DF)

Where rule 0 is the logging rule (match log (matches) inet from 192.168.10.0/24 
to any) and rule 2 is the divert-to rule (pass in quick inet proto tcp from 
192.168.10.0/24 to any port = 80 flags S/SA divert-to 127.0.0.1 port 3129)

Squid debugging output shows nothing, as I mentioned - no attempted connection, 
no activity of any kind, although the startup sequence does show "Accepting 
TPROXY intercepted HTTP Socket connections at local=127.0.0.1:3129 remote=[::] 
FD 9 flags=25", which would appear to indicate that it IS listening on port 
3129, which is what PF is (supposedly) diverting to. Using rdr-to in pf, at 
least I saw the attempted connection in squid, and got a return page from 
squid, although it never let anything through (perhaps due to the redirection 
loop?). 

So to summarize, at this point I have added the following three lines to 
pf.conf (my inside network is 192.168.10.0/24, and the interface IP on the 
inside NIC is 192.168.10.1):

match out on $outsideIF from !(outsideIF:network) nat-to $OutsideIP
pass in quick inet proto tcp from 192.168.10.0/24 to port www divert-to 
127.0.0.1 port 3129
pass out quick inet from 192.168.10.0/24 divert-reply

And my squid.conf contains the following:

acl authorized_hosts dstdomain .google.com
acl authorized_hosts dstdomain .wunderground.com
acl authorized_hosts dstdomain .noaa.gov

http_access allow authorized_hosts
http_access deny to_localhost
http_access deny all

http_port 3129 tproxy
http_port 3128

coredump_dir /var/squid/cache

refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern .   0   20% 4320

deny_info http://192.168.10.1/login.py all

Although as I said it doesn't appear to me that squid is getting the traffic at 
all. When running squid in debug mode, I see the following:

# squid -d8 -N 
2014/07/25 08:10:58| Set Current Directory to /var/squid/cache
2014/07/25 08:10:58| Starting Squid Cache version 3.4.2 for 
i386-unknown-openbsd5.5...
2014/07/25 08:10:58| Process ID 28065
2014/07/25 08:10:58| Process Roles: master worker
2014/07/25 08:10:58| With 128 file descriptors available
2014/07/25 08:10:58| Initializing IP Cache...
2014/07/25 08:10:58| DNS Socket created at [::], FD 5
2014/07/25 08:10:58| DNS Socket created at 0.0.0.0, FD 6
2014/07/25 08:10:58| Adding nameserver 8.8.8.8 from /etc/resolv.conf
2014/07/25 08:10:58| Adding nameserver 8.8.4.4 from /etc/resolv.conf
2014/07/25 08:10:58| Logfile: opening log daemon:/var/s

[squid-users] YouTube Resolution Locker

2014-07-25 Thread Stakres
Hi All,

Free API to lock resolution in YouTube players via your preferred Squid
Cache.
https://sourceforge.net/projects/youtuberesolutionlocker/

Very easy to use 

Bye Fred



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/YouTube-Resolution-Locker-tp4667042.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] How to install squid 3.4.6 in freebsd

2014-07-25 Thread Amos Jeffries
On 26/07/2014 12:57 a.m., Soporte Técnico wrote:
> Does anyone have an idea how I can download/install Squid 3.4.6 on FreeBSD 9?
> 
> Is there any tutorial, instructions, download site or similar?

http://wiki.squid-cache.org/KnowledgeBase/FreeBSD

Amos



[squid-users] Re: 3.HEAD and delay pools

2014-07-25 Thread masterx81
Sorry, I posted using the web interface, which allows "special" quoting...
I've edited the messages...

With OS-level QoS, can I shape traffic based on AD group membership? And how
does it do the authentication?

With delay pools, it outputs that error on reconfigure, and with
client_delay_pool the Squid process crashes.




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/3-HEAD-and-delay-pools-tp4667023p4667040.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] How to install squid 3.4.6 in freebsd

2014-07-25 Thread Soporte Técnico
Does anyone have an idea how I can download/install Squid 3.4.6 on FreeBSD 9?

Is there any tutorial, instructions, download site or similar?

I'm looking on the net and can't find anything that really works...

Jorge.




Re: [squid-users] Web/URL categorisation list

2014-07-25 Thread Marcus Kool

Hi Alan,

On http://www.squid-cache.org/Misc/redirectors.html
you can find a list of URL redirectors.

ufdbGuard is a free URL redirector that supports free databases
and a commercial database from www.urlfilterdb.com

Marcus


On 07/25/2014 08:33 AM, Alan Dawson wrote:

Hi,

Apologies if this is not completely on topic, but it does concern squid use!

I'm working with a UK academic institution that is researching whether Squid
can provide a usable web filtering solution.

While they are pretty confident that Squid will be able to perform at the
required level, they are wondering where they can purchase a subscription to
a maintained list of categorised web sites and URLs that could be used to
develop a set of allow/deny ACLs.

Does anyone on this list use Squid in this way and know of such a service?

Please reply off list, thanks


Alan Dawson



Re: [squid-users] FW: Problem with server IO resource, need to reduce logging level by excluding specific sites from being logged

2014-07-25 Thread Amos Jeffries
On 25/07/2014 11:28 p.m., RYAN Justin wrote:
> Cheers Marcus,
> I did see via googling a rule-of-thumb quote "cache_mem = total physical
> memory / 3" - ref
> http://forums.justlinux.com/showthread.php?126396-Squid-cache-tuning - there
> is a more complex formula quoted there too.
> 
> Money and access constraints negate the move to faster storage :)
> 
> I will look into your recommendations.
> 
> The question of removing noise from being logged still exists - would be a 
> nice to have option

Depends on what you mean by noise.

I assume you mean entries in access.log ...

The relevant directive is in your config file as "cache_access_log".
Nowadays that should be configured as:

  access_log /squid/logs/access.log squid

the line can be followed by a list of ACL names, all of which must match
for a transaction to be recorded in the log file.


For example; in order to log only requests for example.com

  acl example1 dstdomain example.com
  access_log /squid/logs/access.log squid example1


... or in order to omit all CONNECT requests:


  # The CONNECT ACL is already defined by default.
  access_log /squid/logs/access.log squid !CONNECT
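The ACL filters can also be combined; a sketch (the "noisy" domain list is hypothetical) that keeps both CONNECT entries and a set of chatty domains out of the log:

```
# squid.conf - log everything except CONNECTs and selected noisy domains
acl noisy dstdomain .example-cdn.com .adservice.example
access_log /squid/logs/access.log squid !CONNECT !noisy
```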


Amos



Re: [squid-users] 3.HEAD and delay pools

2014-07-25 Thread Amos Jeffries
On 25/07/2014 10:25 p.m., masterx81 wrote:
> Hi!
> I'm trying to limit the bandwidth of Squid and I have a problem.
> I'm using the following directives:
> 
> But on reconfigure I get the error:
> 
> squid -v lists the "--enable-delay-pools" compile option, so all seems OK...
> 
> What am I doing wrong?
> 

Using Nabble to send graphical quotations to a text-only mailing list.
Please try again without the fancy quoting.

> And also, what's the best way to limit upload bandwidth of squid?

Using operating system QoS controls. They work far better than Squid
delay pools do.
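As one hedged illustration of OS-level shaping (Linux tc with a token-bucket filter; the interface name and rate are assumptions, and this caps the whole interface rather than individual users):

```
# Cap egress on the WAN-facing interface to roughly 1 Mbit/s
tc qdisc add dev eth0 root tbf rate 1mbit burst 32kbit latency 400ms
```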

> client_delay_pools?

If need be, yes.

Amos


Re: [squid-users] Change Protocol of Squid Error Pages

2014-07-25 Thread Amos Jeffries
On 25/07/2014 9:09 p.m., max wrote:
> Hey there,
> I'm wondering, is it possible to change the protocol of Squid error pages?
> 
> For example:
> 
> When Squid redirects to "deny_info 307:ERR_BLOCK" the request is made in
> HTTP, but I want to use HTTPS.
> Is that possible?
> I am not able to use https://somedomain because of dynamic content on
> the error page.

You answered your own question right there.

The 307 code is just an instruction for the client to fetch a different
URL - the one following the ':' in deny_info parameter. That can be any
valid URI. Including https:// ones.

Dynamic content in the page that deny_info URL presents has nothing to
do with Squid.

Amos



[squid-users] Re: 3.HEAD and delay pools

2014-07-25 Thread masterx81
For now I'm only playing with the limits, but in the end I want to limit
some classes of users to save some bandwidth, both upload and download...
I don't want the whole network to have slow internet access just because
someone is downloading/uploading a big file...
I'll read your post - thanks for the link!



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/3-HEAD-and-delay-pools-tp4667023p4667034.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Web/URL categorisation list

2014-07-25 Thread Alan Dawson
Hi, 

Apologies if this is not completely on topic, but it does concern squid use!

I'm working with a UK academic institution that is researching whether Squid
can provide a usable web filtering solution.

While they are pretty confident that Squid will be able to perform at the
required level, they are wondering where they can purchase a subscription to
a maintained list of categorised web sites and URLs that could be used to
develop a set of allow/deny ACLs.

Does anyone on this list use Squid in this way and know of such a service?

Please reply off list, thanks


Alan Dawson
-- 
"The introduction of a coordinate system to geometry is an act of violence"




Re: [squid-users] Set up squid as a transparent proxy

2014-07-25 Thread Amos Jeffries
On 25/07/2014 10:15 a.m., Israel Brewster wrote:
> I have been using Squid 2.9 on OpenBSD 5.0 for a while as a transparent 
> proxy. PF on the proxy box rdr-to redirects all web requests not destined for 
> the box itself to squid running on port 3128. Squid then processes the 
> request based on a series of ACLs, and either allows the request or redirects 
> (deny_info ... all) the request to a page on the proxy box.
> 

There are some big changes in OpenBSD between those versions. Have you
tried divert-to in the PF rules and tproxy option on the Squid http_port ?

Amos


Re: [squid-users] Trouble with Session Handler

2014-07-25 Thread Amos Jeffries
On 25/07/2014 7:13 p.m., Cemil Browne wrote:
> Hi all, I'm trying to set up a situation as follows:  I have a web
> server at [server]:80   .  I've got squid installed on [server]:3000 .

This is back to front.

Squid should be the gateway listening on [server]:80, with the web
server listening on a private IP of the machine, also port 80 if
possible (ie localhost:80).


> The requirement is to ensure that any request to web server protected
> content (/FP/*) is redirected to a splash page (terms and conditions),
> accepted, then allowed.  I've got most of the way, but the last bit
> doesn't work.  This is on a private network.
> 
> Squid config:
> 
> http_port 3000 accel defaultsite=192.168.56.101
> cache_peer 127.0.0.1 parent 80 0 no-query originserver
> 
> 
> external_acl_type session ttl=3 concurrency=100 %SRC
> /usr/lib/squid/ext_session_acl -a -T 60
> 
> acl session_login external session LOGIN
> 
> external_acl_type session_active_def ttl=3 concurrency=100 %SRC
> /usr/lib/squid/ext_session_acl -a -T 60
> 

Each of the above two external_acl_type definitions runs separate
helper instances. Since you have not defined an on-disk database for
them to share, the session data will be stored in memory by whichever
one is starting the sessions, but will be inaccessible to the one
checking whether a session exists.
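If two separate definitions really were needed, they could share state by
pointing both helpers at the same on-disk database - a sketch, assuming this
build's ext_session_acl supports the -b option and the path is writable by
the Squid user:

```
external_acl_type session ttl=3 concurrency=100 %SRC /usr/lib/squid/ext_session_acl -a -T 60 -b /var/lib/squid/session.db

external_acl_type session_active_def ttl=3 concurrency=100 %SRC /usr/lib/squid/ext_session_acl -a -T 60 -b /var/lib/squid/session.db
```

The single-definition layout shown next is simpler, though, and avoids the
shared-state problem entirely.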


> acl session_is_active external session_active_def
> 

What you should have is exactly *1* external_acl_type directive, used by
two different acl directives.

Like so:
  external_acl_type session ttl=3 concurrency=100 %SRC
/usr/lib/squid/ext_session_acl -a -T 60

  acl session_login external session LOGIN
  acl session_is_active external session

> acl accepted_url url_regex -i accepted.html.*
> acl splash_url url_regex -i ^http://192.168.56.101:3000/splash.html$
> acl protected url_regex FP.*

Regex has implicit .* before and after every pattern unless an ^ or $
anchor is specified. You do not have to write the .*

Also, according to your policy description that last pattern should be
matching path prefix "/FP" not any URL containing "FP".

> 
> http_access allow splash_url
> http_access allow accepted_url session_login
> 
> http_access deny protected !session_is_active
> 
> deny_info http://192.168.56.101:3000/splash.html session_is_active

It is best to serve splash.html as a static page delivered in place of the
access denied page:
 deny_info splash.html session_is_active

then have the ToC accept button URL be the one which begins the session.

So stitching the above changes into your squid.conf you should have this:

  http_port 192.168.56.101:80 accel defaultsite=192.168.56.101
  cache_peer 127.0.0.1 parent 80 0 no-query originserver

  external_acl_type session ttl=3 concurrency=100 %SRC
/usr/lib/squid/ext_session_acl -a -T 60

  acl session_login external session LOGIN
  acl session_is_active external session
  deny_info /etc/squid/splash.html session_is_active

  acl accepted_url urlpath_regex -i accepted.html$
  acl splash_url url_regex -i ^http://192.168.56.101/splash.html$
  acl protected urlpath_regex ^/FP

  http_access allow splash_url
  http_access allow accepted_url session_login
  http_access deny protected !session_is_active
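On the splash page itself, the ToC accept button only needs to link to a URL
matching the accepted_url ACL, so that the "http_access allow accepted_url
session_login" line fires and the helper records the LOGIN. A minimal,
hypothetical splash.html (the filename and link target are assumptions):

```html
<!-- hypothetical splash.html served via deny_info -->
<html>
  <body>
    <h1>Terms and Conditions</h1>
    <p>...terms text here...</p>
    <!-- hitting this URL matches accepted_url and starts the session -->
    <a href="/accepted.html">I accept</a>
  </body>
</html>
```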


Amos


RE: [squid-users] FW: Problem with server IO resource, need to reduce logging level by excluding specific sites from being logged

2014-07-25 Thread RYAN Justin
Cheers Marcus,
While googling I did see a rule-of-thumb quote "cache_mem = total physical 
memory / 3" (ref: 
http://forums.justlinux.com/showthread.php?126396-Squid-cache-tuning); a more 
complex formula is quoted there too.

Money and access constraints negate the move to faster storage :)

I will look into your recommendations.

The question of excluding noise from the logs still stands - it would be a 
nice option to have.


-Original Message-
From: Marcus Kool [mailto:marcus.k...@urlfilterdb.com]
Sent: 25 July 2014 12:11
To: RYAN Justin
Cc: 'squid-users@squid-cache.org'
Subject: Re: [squid-users] FW: Problem with server IO resource, need to reduce 
logging level by excluding specific sites from being logged

Juz,

The mount options rw,noatime reduce I/O a little for ext4 so they are 
recommended for /squid.

Since the system has 4 GB of memory it is advisable to increase cache_mem from 
32 MB to 512 MB and to change maximum_object_size_in_memory from 20 KB to 128 
KB.
Both options help to cache more in memory instead of on disk and hence reduce 
disk reads.

But only increase the parameters if the system has enough free memory to give 
to Squid.
Note that 512 MB memory cache 'translates' into roughly 1.4 GB total memory 
requirement for Squid.

And last but not least, since the disk is virtual, it is worth checking 
whether it can be allocated on a device with more I/O capacity.

Marcus


On 07/25/2014 05:52 AM, RYAN Justin wrote:
> Sorry Marcus, I'm new to this mailing-list support process.
>
> You mention cache_mem is small - excuse my noobness, can you explain the
> impact?
> The memory allocation to the VM is 4 GB, and it has 4 vCPUs at present
> (it doesn't look like it is being stressed at all).
>
> Version = Squid Cache: Version 3.2.5
>
> Disk structure is as follows
>
> 20GB VMDK = System
> 40GB VMDK = SQUID only
>
> #
> # /etc/fstab
> # Created by anaconda on Mon Apr 23 16:24:28 2012 # # Accessible
> filesystems, by reference, are maintained under '/dev/disk'
> # See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more
> info #
> /dev/mapper/vg_008-lv_root                  /       ext4  defaults  1 1
> UUID=c13ba480-17e3-4df3-b6d3-9a2eb9cea766   /boot   ext4  defaults  1 2
> # UUID=08301dc8-4e84-4cd9-a402-f4e71a461098 /squid  ext4  defaults  1 2
> /dev/mapper/vg_008-lv_swap                  swap    swap  defaults  0 0
> /dev/sdb                                    /squid  ext4  defaults  1 2
>
> -Original Message-
> From: Marcus Kool [mailto:marcus.k...@urlfilterdb.com]
> Sent: 25 July 2014 00:37
> To: RYAN Justin
> Subject: Re: [squid-users] FW: Problem with server IO resource, need
> to reduce logging level by excluding specific sites from being logged
>
> Juz,
>
> The system seems to have a very small config.
> 32 MB for cache_mem is very small indeed. Do you have room/RAM to extend the
> in-memory cache of Squid?
>
>   From the data that you posted it is not clear if /squid shares its disk 
> with /.
>
> What version of Squid do you have (output of squid -v) ?
>
> What file system type and mount options are used for /squid ?
>
> You did not reply to the squid list.
> I suggest including the squid list in the CC: and replacing the 
> cachemgr_passwd with XXX in the post.
>
> Marcus
>
>
>
> On 07/24/2014 10:39 AM, RYAN Justin wrote:
>> Sorry Marcus, was a little light on background. Storage on 2
>> partitions
>>
>> [root@ ]# df -k
>> Filesystem                  1K-blocks      Used Available Use% Mounted on
>> devtmpfs                      2057264         0   2057264   0% /dev
>> tmpfs                         2066040         0   2066040   0% /dev/shm
>> tmpfs                         2066040       504   2065536   1% /run
>> /dev/mapper/vg_008-lv_root   16062384   3864120  11382344  26% /
>> tmpfs                         2066040         0   2066040   0% /sys/fs/cgroup
>> tmpfs                         2066040         0   2066040   0% /media
>> /dev/sdb                     41284928  14322924  24864852  37% /squid
>> /dev/sda2                      495844     65891    404353  15% /boot
>>
>> Below is the config
>>
>> http_port 3128
>> dns_nameservers 8.8.8.8
>> icp_port 0
>> acl QUERY urlpath_regex cgi-bin \?
>> no_cache deny QUERY
>> append_domain .phoenix.loc
>>
>> cache_mgr i...@pms.co.uk
>> cachemgr_passwd * all
>>
>> buffered_logs on
>> coredump_dir /squid/cache
>>
>> cache_access_log /squid/logs/access.log
>>
>> cache_log /squid/logs/cache.log
>> logfile_rotate 60
>>
>> cache_dir aufs /squid/cache 4096 16 256
>> cache_mem 32 MB
>> maximum_object_size 64 MB
>> maximum_object_size_in_memory 20 KB
>> c

Re: [squid-users] FW: Problem with server IO resource, need to reduce logging level by excluding specific sites from being logged

2014-07-25 Thread Marcus Kool

Juz,

The mount options rw,noatime reduce I/O a little for ext4 so they are 
recommended for /squid.

Since the system has 4 GB of memory it is advisable to increase
cache_mem from 32 MB to 512 MB and to change
maximum_object_size_in_memory from 20 KB to 128 KB.
Both options help to cache more in memory instead of on disk and hence reduce 
disk reads.

But only increase the parameters if the system has enough free memory to give 
to Squid.
Note that 512 MB memory cache 'translates' into roughly 1.4 GB total memory 
requirement for Squid.

And last but not least, since the disk is virtual, it is worth checking
whether it can be allocated on a device with more I/O capacity.
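The ~1.4 GB figure can be sanity-checked with a back-of-envelope calculation.
The multipliers below are common rules of thumb, not exact Squid internals:
the in-memory cache roughly doubles once index and allocator overhead are
counted, and the disk-cache index costs on the order of 10-15 MB of RAM per
GB of cache_dir.

```python
# Rough RAM estimate for a Squid instance; all multipliers are
# rule-of-thumb assumptions, not exact figures.
def estimate_squid_ram_mb(cache_mem_mb, cache_dir_gb,
                          mem_overhead=2.0,    # cache_mem roughly doubles with overhead
                          index_mb_per_gb=14,  # ~10-15 MB RAM per GB of disk cache
                          baseline_mb=100):    # process baseline
    return (cache_mem_mb * mem_overhead
            + cache_dir_gb * index_mb_per_gb
            + baseline_mb)

# The config under discussion: cache_mem 512 MB, 4 GB aufs cache_dir.
print(estimate_squid_ram_mb(512, 4))  # -> 1180.0, i.e. on the order of 1.1-1.4 GB
```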

Marcus


On 07/25/2014 05:52 AM, RYAN Justin wrote:

Sorry Marcus, I'm new to this mailing-list support process.

You mention cache_mem is small - excuse my noobness, can you explain the impact?
The memory allocation to the VM is 4 GB, and it has 4 vCPUs at present (it 
doesn't look like it is being stressed at all).

Version = Squid Cache: Version 3.2.5

Disk structure is as follows

20GB VMDK = System
40GB VMDK = SQUID only

#
# /etc/fstab
# Created by anaconda on Mon Apr 23 16:24:28 2012
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/vg_008-lv_root                  /       ext4  defaults  1 1
UUID=c13ba480-17e3-4df3-b6d3-9a2eb9cea766   /boot   ext4  defaults  1 2
# UUID=08301dc8-4e84-4cd9-a402-f4e71a461098 /squid  ext4  defaults  1 2
/dev/mapper/vg_008-lv_swap                  swap    swap  defaults  0 0
/dev/sdb                                    /squid  ext4  defaults  1 2

-Original Message-
From: Marcus Kool [mailto:marcus.k...@urlfilterdb.com]
Sent: 25 July 2014 00:37
To: RYAN Justin
Subject: Re: [squid-users] FW: Problem with server IO resource, need to reduce 
logging level by excluding specific sites from being logged

Juz,

The system seems to have a very small config.
32 MB for cache_mem is very small indeed. Do you have room/RAM to extend the 
in-memory cache of Squid?

  From the data that you posted it is not clear if /squid shares its disk with 
/.

What version of Squid do you have (output of squid -v) ?

What file system type and mount options are used for /squid ?

You did not reply to the squid list.
I suggest including the squid list in the CC: and replacing the cachemgr_passwd 
with XXX in the post.

Marcus



On 07/24/2014 10:39 AM, RYAN Justin wrote:

Sorry Marcus, was a little light on background. Storage on 2
partitions

[root@ ]# df -k
Filesystem                  1K-blocks      Used Available Use% Mounted on
devtmpfs                      2057264         0   2057264   0% /dev
tmpfs                         2066040         0   2066040   0% /dev/shm
tmpfs                         2066040       504   2065536   1% /run
/dev/mapper/vg_008-lv_root   16062384   3864120  11382344  26% /
tmpfs                         2066040         0   2066040   0% /sys/fs/cgroup
tmpfs                         2066040         0   2066040   0% /media
/dev/sdb                     41284928  14322924  24864852  37% /squid
/dev/sda2                      495844     65891    404353  15% /boot

Below is the config

http_port 3128
dns_nameservers 8.8.8.8
icp_port 0
acl QUERY urlpath_regex cgi-bin \?
no_cache deny QUERY
append_domain .phoenix.loc

cache_mgr i...@pms.co.uk
cachemgr_passwd * all

buffered_logs on
coredump_dir /squid/cache

cache_access_log /squid/logs/access.log

cache_log /squid/logs/cache.log
logfile_rotate 60

cache_dir aufs /squid/cache 4096 16 256
cache_mem 32 MB
maximum_object_size 64 MB


maximum_object_size_in_memory 20 KB

cache_effective_user squid
max_filedesc 4096


# acl all src all
# acl manager proto cache_object
acl localhost src 127.0.0.1
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443  # https
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
#acl SSL method CONNECT
acl CONNECT method CONNECT

acl webserver src 192.168.100.0/24
http_access allow manager webserver

http_access allow manager localhost
http_access deny manager
http_access deny CONNECT !SSL_ports
http_access deny !Safe_ports
http_access allow localhost

# 

[squid-users] Change Protocol of Squid Error Pages

2014-07-25 Thread max

Hey there,
I'm wondering: is it possible to change the protocol of Squid error 
pages?


For Example:

When Squid redirects via "deny_info 307:ERR_BLOCK" the request is made over 
HTTP, but I want to use HTTPS.

Is that possible?
I am not able to use a fixed https://somedomain URL because of dynamic 
content on the error page.


Regards,
Max


[squid-users] Re: 3.HEAD and delay pools

2014-07-25 Thread babajaga
What do you want to achieve ?
You might also refer to my responses here:
http://squid-web-proxy-cache.1019090.n4.nabble.com/Re-split-the-connexion-using-Squid-td4666739.html#a4666742



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/3-HEAD-and-delay-pools-tp4667023p4667027.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Never used Squid, need to access it

2014-07-25 Thread babajaga
>how to actually access the software itself. <

Please be more specific. What do you want to know or achieve?

(Usually the config files are found either in /etc or in 
/usr/local/squid/etc.)
Search for squid.conf; that is the entry point for the features in use.

Depending on whether Squid was installed from a binary package, you might 
also find the sources.




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Never-used-Squid-need-to-access-it-tp4667025p4667026.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: 3.HEAD and delay pools

2014-07-25 Thread masterx81
A little addition... If I use the following lines:


The Squid process terminates itself...



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/3-HEAD-and-delay-pools-tp4667023p4667024.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] 3.HEAD and delay pools

2014-07-25 Thread masterx81
Hi!
I'm trying to limit the bandwidth of Squid and I have a problem.
I'm using the following directives:

But on reconfigure I get the error:

squid -v lists the "--enable-delay-pools" compile option, so all seems OK...

What am I doing wrong?

And also, what is the best way to limit Squid's upload bandwidth?
client_delay_pools?

Thanks!



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/3-HEAD-and-delay-pools-tp4667023.html
Sent from the Squid - Users mailing list archive at Nabble.com.


RE: [squid-users] FW: Problem with server IO resource, need to reduce logging level by excluding specific sites from being logged

2014-07-25 Thread RYAN Justin
Sorry Marcus, I'm new to this mailing-list support process.

You mention cache_mem is small - excuse my noobness, can you explain the impact?
The memory allocation to the VM is 4 GB, and it has 4 vCPUs at present (it 
doesn't look like it is being stressed at all).

Version = Squid Cache: Version 3.2.5

Disk structure is as follows

20GB VMDK = System
40GB VMDK = SQUID only

#
# /etc/fstab
# Created by anaconda on Mon Apr 23 16:24:28 2012
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/vg_008-lv_root                  /       ext4  defaults  1 1
UUID=c13ba480-17e3-4df3-b6d3-9a2eb9cea766   /boot   ext4  defaults  1 2
# UUID=08301dc8-4e84-4cd9-a402-f4e71a461098 /squid  ext4  defaults  1 2
/dev/mapper/vg_008-lv_swap                  swap    swap  defaults  0 0
/dev/sdb                                    /squid  ext4  defaults  1 2

-Original Message-
From: Marcus Kool [mailto:marcus.k...@urlfilterdb.com]
Sent: 25 July 2014 00:37
To: RYAN Justin
Subject: Re: [squid-users] FW: Problem with server IO resource, need to reduce 
logging level by excluding specific sites from being logged

Juz,

The system seems to have a very small config.
32 MB for cache_mem is very small indeed. Do you have room/RAM to extend the 
in-memory cache of Squid?

 From the data that you posted it is not clear if /squid shares its disk with /.

What version of Squid do you have (output of squid -v) ?

What file system type and mount options are used for /squid ?

You did not reply to the squid list.
I suggest including the squid list in the CC: and replacing the cachemgr_passwd 
with XXX in the post.

Marcus



On 07/24/2014 10:39 AM, RYAN Justin wrote:
> Sorry Marcus, was a little light on background. Storage on 2
> partitions
>
> [root@ ]# df -k
> Filesystem                  1K-blocks      Used Available Use% Mounted on
> devtmpfs                      2057264         0   2057264   0% /dev
> tmpfs                         2066040         0   2066040   0% /dev/shm
> tmpfs                         2066040       504   2065536   1% /run
> /dev/mapper/vg_008-lv_root   16062384   3864120  11382344  26% /
> tmpfs                         2066040         0   2066040   0% /sys/fs/cgroup
> tmpfs                         2066040         0   2066040   0% /media
> /dev/sdb                     41284928  14322924  24864852  37% /squid
> /dev/sda2                      495844     65891    404353  15% /boot
>
> Below is the config
>
> http_port 3128
> dns_nameservers 8.8.8.8
> icp_port 0
> acl QUERY urlpath_regex cgi-bin \?
> no_cache deny QUERY
> append_domain .phoenix.loc
>
> cache_mgr i...@pms.co.uk
> cachemgr_passwd * all
>
> buffered_logs on
> coredump_dir /squid/cache
>
> cache_access_log /squid/logs/access.log
>
> cache_log /squid/logs/cache.log
> logfile_rotate 60
>
> cache_dir aufs /squid/cache 4096 16 256
> cache_mem 32 MB
> maximum_object_size 64 MB
> maximum_object_size_in_memory 20 KB
> cache_effective_user squid
> max_filedesc 4096
>
>
> # acl all src all
> # acl manager proto cache_object
> acl localhost src 127.0.0.1
> acl to_localhost dst 127.0.0.0/8
> acl SSL_ports port 443  # https
> acl Safe_ports port 80  # http
> acl Safe_ports port 21  # ftp
> acl Safe_ports port 443 # https
> acl Safe_ports port 70  # gopher
> acl Safe_ports port 210 # wais
> acl Safe_ports port 1025-65535  # unregistered ports
> acl Safe_ports port 280 # http-mgmt
> acl Safe_ports port 488 # gss-http
> acl Safe_ports port 591 # filemaker
> acl Safe_ports port 777 # multiling http
> #acl SSL method CONNECT
> acl CONNECT method CONNECT
>
> acl webserver src 192.168.100.0/24
> http_access allow manager webserver
>
> http_access allow manager localhost
> http_access deny manager
> http_access deny CONNECT !SSL_ports
> http_access deny !Safe_ports
> http_access allow localhost
>
> # ---
> auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
> auth_param ntlm children 30 startup=30
> # auth_param ntlm use_ntlm_negotiate on
> auth_param ntlm keep_alive off
>
> auth_param basic program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-basic
> auth_param basic children 10 startup=10
> auth_param basic realm Squid proxy-caching web server
> auth_param basic credentialsttl 2 hours
>
>
> external_acl_type ADS children-max=30 children-startup=30 %LOGIN
> /usr/lib/squid/ext_wbinfo_group_acl
>
>
>
> acl block_all dstdomain "/squid/rules/block-all"
> acl malware dstdomain "/squid/rul

Re: [squid-users] Re: kerberos authentication with load balancers

2014-07-25 Thread Giorgi Tepnadze
Hi Markus

Excuse me for posting to an old thread, but I have a small question:

I have 2 squid servers (proxy1.domain.com and proxy2.domain.com) and
one DNS RR record (proxy.mia.gov.ge). Following your recommendation, how
should I create the keytab file?

msktutil -c -b "CN=COMPUTERS" -s HTTP/proxy1.domain.com -h
proxy1.domain.com -k /root/keytab/PROXY.keytab --computer-name PROXY1-K
--upn HTTP/proxy1.mia.gov.ge --server addc03.domain.com --verbose
--enctypes 28
msktutil -c -b "CN=COMPUTERS" -s HTTP/proxy2.domain.com -h
proxy2.domain.com -k /root/keytab/PROXY.keytab --computer-name PROXY2-K
--upn HTTP/proxy2.mia.gov.ge --server addc03.domain.com --verbose
--enctypes 28

and one for DNS RR record

msktutil -c -b "CN=COMPUTERS" -s HTTP/proxy.domain.com -h
proxy1.domain.com -k /root/keytab/PROXY.keytab --computer-name PROXY2-K
--upn HTTP/proxy.mia.gov.ge --server addc03.domain.com --verbose
--enctypes 28

But there is a problem with the last one: which server name should I put in
-s, -h, --upn and --computer-name?

Many Thanks

George



On 07/02/14 01:26, Markus Moeller wrote:
> Hi Joseph,
>
>   it is all possible :-)
>
>   Firstly I suggest not to use samba tools to create the squid keytab,
> but use msktutil (see
> http://wiki.squid-cache.org/ConfigExamples/Authenticate/Kerberos). 
> Then create a keytab for the loadbalancer name ( that is the one
> configured in IE or Firefox). use this keytab on both proxy servers
> and use negotiate_kerberos_auth with  -s GSS_C_NO_NAME
>
>  When you say multiple realms, do you have trust between the AD
> domains or are they separate ?   If the domains do not have trust do
> you intend to use the same loadbalancer name for the users of both
> domains ?
>
> Markus
>
>
>
> "Joseph Spadavecchia"  wrote in message
> news:2b43c569f8254a4e82c948ce4c247ed5158...@blx-ex01.alba.local...
>
> Hi there,
>
> What is the recommended way to configure Kerberos authentication
> behind two load balancers?
>
> AFAIK, based on the mailing lists, I should
>
> 1) Create a user account KrbUser on the AD server and add an SPN
> HTTP/loadbalancer.example.com for the load balancer
> 2) Join the domain with Kerberos and kinit
> 3) net ads keytab add HTTP/loadbalancer.example.com@REALM -U KrbUser
> 4) update squid.conf with an auth helper like negotiate_kerberos_auth
> -s HTTP/loadbalancer.example.com@REALM
>
> Unfortunately, when I try this it fails.
>
> The only way I could get it to work at all was by removing the SPN
> from the KrbUser and associating the SPN with the machine trust
> account (of the proxy behind the loadbalancer). However, this is not a
> viable solution since there are two machines behind the load balancer
> and AD only allows you to associate an SPN with one account.
>
> Furthermore, given that I needed step (4) above, is it possible to
> have load balanced Kerberos authentication working with multiple
> realms?  If so, then how?
>
> Many thanks.
>
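Stitching Markus's earlier advice together, the squid.conf side on each
proxy behind the load balancer would look roughly like this - a sketch;
the helper path and keytab location are assumptions, and if this build's
negotiate_kerberos_auth lacks the -k option, the keytab can instead be
pointed to via the KRB5_KTNAME environment variable:

```
# squid.conf sketch - the same keytab (keyed to the load-balancer name)
# deployed on both proxies
auth_param negotiate program /usr/lib/squid/negotiate_kerberos_auth -s GSS_C_NO_NAME -k /etc/squid/PROXY.keytab
auth_param negotiate children 20
auth_param negotiate keep_alive on

acl auth proxy_auth REQUIRED
http_access allow auth
```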



[squid-users] Trouble with Session Handler

2014-07-25 Thread Cemil Browne
Hi all, I'm trying to set up a situation as follows:  I have a web
server at [server]:80   .  I've got squid installed on [server]:3000 .
The requirement is to ensure that any request to web server protected
content (/FP/*) is redirected to a splash page (terms and conditions),
accepted, then allowed.  I've got most of the way, but the last bit
doesn't work.  This is on a private network.

Squid config:

http_port 3000 accel defaultsite=192.168.56.101
cache_peer 127.0.0.1 parent 80 0 no-query originserver


external_acl_type session ttl=3 concurrency=100 %SRC
/usr/lib/squid/ext_session_acl -a -T 60

acl session_login external session LOGIN

external_acl_type session_active_def ttl=3 concurrency=100 %SRC
/usr/lib/squid/ext_session_acl -a -T 60

acl session_is_active external session_active_def

acl accepted_url url_regex -i accepted.html.*
acl splash_url url_regex -i ^http://192.168.56.101:3000/splash.html$
acl protected url_regex FP.*

http_access allow splash_url
http_access allow accepted_url session_login

http_access deny protected !session_is_active

deny_info http://192.168.56.101:3000/splash.html session_is_active

 squid.conf is also at http://pastebin.com/PNqcVV1L
Basically, if I access protected content, I get redirected correctly
to splash_url (/splash.html) .  I then click to go to "accepted.html",
which then redirects, theoretically, to
 /FP/.  The problem is, accepted.html is never creating the session
(No LOGIN) so /FP just redirects back to the splash page.

 So I'm not getting sessions, in short.

 With debugging on, I get a match when I access accepted.html
(http://pastebin.com/PuCGL6m0) but still, no session login

 Any ideas?

 Thanks all!

-Cemil