Re: [squid-users] blocking ads/sites not working anymore?

2013-03-11 Thread Tim Bates

On 10/03/2013 10:45 PM, Andreas Westvik wrote:


So what kind of format do I have now then?
Do you have any examples?


You've got dstdom_regex on the line that includes the list file, so 
Squid is running every entry through the regex engine, and none of them need it.
Change the entries in the file to simply the following format and remove 
the _regex suffix from the acl type:

.yieldmanager.edgesuite.net

Note - Having not actually checked that, I could be very wrong. Test 
carefully ;)
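
For what it's worth, a minimal sketch of the resulting config (same file path 
as in your original post, with the list file holding plain domains like the 
entry above):

acl ads dstdomain "/etc/squid3/adservers"
http_access deny ads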


TB




-Andreas
On Mar 10, 2013, at 07:46, Amos Jeffries squ...@treenet.co.nz wrote:


On 2013-03-10 01:54, Andreas Westvik wrote:

Hi everyone

Over the time I have collected a lot of sites to block.

snip


#Block
acl ads dstdom_regex -i /etc/squid3/adservers
http_access deny ads


cat /etc/squid3/adservers | less

(^|\.)yieldmanager\.edgesuite\.net$
(^|\.)yieldmanager\.net$
(^|\.)yoc\.mobi$
(^|\.)yoggrt\.com$
(^|\.)yourtracking\.net$
(^|\.)z\.times\.lv$
(^|\.)z5x\.net$
(^|\.)zangocash\.com$
(^|\.)zanox-affiliate\.de$
(^|\.)zanox\.com$
(^|\.)zantracker\.com$
(^|\.)zde-affinity\.edgecaching\.net$
(^|\.)zedo\.com$
(^|\.)zencudo\.co\.uk$
(^|\.)zenzuu\.com$
(^|\.)zeus\.developershed\.com$
(^|\.)zeusclicks\.com$
(^|\.)zintext\.com$
(^|\.)zmedia\.com$



Besides the fix Amm already gave, you will find Squid runs a bit faster if you 
convert that listing to dstdomain format and use a dstdomain ACL to check it.


Amos







[squid-users] squid_kerb_auth problem after upgrade from 2.x to 3.1.10

2013-03-11 Thread Almot
Hello, previous version 2.x worked fine.
OS: CentOS 6.3; kinit passes fine - Authenticated to Kerberos v5


When I upgraded to 3.1.10 I got this error in cache.log:

authenticateNegotiateHandleReply: Error validating user via Negotiate. Error
returned 'BH gss_acquire_cred() failed: Unspecified GSS failure.  Minor code
may provide more information. 

I tried to check the helper:


/usr/lib/squid/squid_kerb_auth -s HTTP/srvproxy.xxx.local@XX.LOCAL -d
user pass
2013/03/11 11:34:03| squid_kerb_auth: DEBUG: Got 'user pass' from squid
(length: 17).
2013/03/11 11:34:03| squid_kerb_auth: ERROR: Invalid request [aabaev
asban81K27]
BH Invalid request


I ran a debug trace (strace):

-
1689  execve(/usr/lib/squid/squid_kerb_auth,
[/usr/lib/squid/squid_kerb_auth, -s, -d,
HTTP/srvproxy.7flowers.local@7FL...], [/* 23 vars */]) = 0
1689  brk(0)= 0x1cc7000
1689  mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1,
0) = 0xb7781000
1689  access(/etc/ld.so.preload, R_OK) = -1 ENOENT (No such file or
directory)
1689  open(/etc/ld.so.cache, O_RDONLY) = 3
1689  fstat64(3, {st_mode=S_IFREG|0644, st_size=29287, ...}) = 0
1689  mmap2(NULL, 29287, PROT_READ, MAP_PRIVATE, 3, 0) = 0xb7779000
1689  close(3)  = 0
1689  open(/lib/libgssapi_krb5.so.2, O_RDONLY) = 3
1689  read(3,
\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0\360m\0\0004\0\0\0..., 512)
= 512
1689  fstat64(3, {st_mode=S_IFREG|0755, st_size=262124, ...}) = 0
1689  mmap2(NULL, 261128, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3,
0) = 0xdb2000
1689  mmap2(0xdf, 8192, PROT_READ|PROT_WRITE,
MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x3e) = 0xdf
1689  close(3)  = 0
1689  open(/lib/libkrb5.so.3, O_RDONLY) = 3
1689  read(3,
\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0\240\t\1\0004\0\0\0...,
512) = 512
1689  fstat64(3, {st_mode=S_IFREG|0755, st_size=901552, ...}) = 0
1689  mmap2(NULL, 904716, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3,
0) = 0x4a5000
1689  mmap2(0x57b000, 28672, PROT_READ|PROT_WRITE,
MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0xd5) = 0x57b000
1689  close(3)  = 0
1689  open(/lib/libk5crypto.so.3, O_RDONLY) = 3
1689  read(3,
\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0\340*\0\0004\0\0\0..., 512)
= 512
1689  fstat64(3, {st_mode=S_IFREG|0755, st_size=169712, ...}) = 0
1689  mmap2(NULL, 172056, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3,
0) = 0xec3000
1689  mmap2(0xeeb000, 8192, PROT_READ|PROT_WRITE,
MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x28) = 0xeeb000
1689  mmap2(0xeed000, 24, PROT_READ|PROT_WRITE,
MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0xeed000
1689  close(3)  = 0
1689  open(/lib/libcom_err.so.2, O_RDONLY) = 3
1689  read(3,
\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0P\16\0\0004\0\0\0..., 512)
= 512
1689  fstat64(3, {st_mode=S_IFREG|0755, st_size=13836, ...}) = 0
1689  mmap2(NULL, 16596, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3,
0) = 0x37c000
1689  mmap2(0x37f000, 8192, PROT_READ|PROT_WRITE,
MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x2) = 0x37f000
1689  close(3)  = 0
1689  open(/lib/libm.so.6, O_RDONLY)  = 3
1689  read(3,
\177ELF\1\1\1\3\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0p4\0\0004\0\0\0..., 512) =
512
1689  fstat64(3, {st_mode=S_IFREG|0755, st_size=200024, ...}) = 0
1689  mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1,
0) = 0xb7778000
1689  mmap2(NULL, 168064, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3,
0) = 0x385000
1689  mmap2(0x3ad000, 8192, PROT_READ|PROT_WRITE,
MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x27) = 0x3ad000
1689  close(3)  = 0
1689  open(/lib/libc.so.6, O_RDONLY)  = 3
1689  read(3,
\177ELF\1\1\1\3\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0@n\1\0004\0\0\0..., 512) =
512
1689  fstat64(3, {st_mode=S_IFREG|0755, st_size=1902708, ...}) = 0
1689  mmap2(NULL, 1665416, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE,
3, 0) = 0x6bf000
1689  mprotect(0x84f000, 4096, PROT_NONE) = 0
1689  mmap2(0x85, 12288, PROT_READ|PROT_WRITE,
MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x190) = 0x85
1689  mmap2(0x853000, 10632, PROT_READ|PROT_WRITE,
MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x853000
1689  close(3)  = 0
1689  open(/lib/libkrb5support.so.0, O_RDONLY) = 3
1689  read(3,
\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0\360\36\0\0004\0\0\0...,
512) = 512
1689  fstat64(3, {st_mode=S_IFREG|0755, st_size=42716, ...}) = 0
1689  mmap2(NULL, 45592, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3,
0) = 0x588000
1689  mmap2(0x592000, 8192, PROT_READ|PROT_WRITE,
MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x9) = 0x592000
1689  close(3)  = 0
1689  open(/lib/libdl.so.2, O_RDONLY) = 

Re: [squid-users] Squid 3.3.2 and SMP

2013-03-11 Thread Ralf Hildebrandt
* Alex Rousskov rouss...@measurement-factory.com:
 On 03/08/2013 08:11 AM, Adam W. Dace wrote:
  Does anyone have a simple example configuration for running Squid
  3.3.2 with multiple workers?
 
 You can just add "workers 2" to squid.conf.default.

That's really all?

-- 
Ralf Hildebrandt   Charite Universitätsmedizin Berlin
ralf.hildebra...@charite.deCampus Benjamin Franklin
http://www.charite.de  Hindenburgdamm 30, 12203 Berlin
Geschäftsbereich IT, Abt. Netzwerk fon: +49-30-450.570.155


[squid-users] In which mode squid runs with ruckus accesspoint

2013-03-11 Thread benjamin fernandis
Hi,

We are integrating a Squid box with a Ruckus access point and a captive portal.

We have WiFi users in the network and a captive portal for them.

For WiFi, we are using a Ruckus access point and configure it to forward web
traffic to the Squid box, where we configure url_rewrite, which only allows
certain URLs and rewrites everything else to the captive portal URL.

In which mode should Squid run here - intercept, tproxy, or something else?

In the Ruckus, we simply redirect traffic to an IP:port.

Regards,
Ben


Re: [squid-users] Question about proxy_auth REQUIRED and the case of flushing the authentication-cache

2013-03-11 Thread Tom Tom
I'm still confused about Squid's behavior in 3.2.7 concerning
credentials caching and the order of the http_access directives.

Does someone have an explanation for Squid's behavior (see the
questions in this post), and does Squid even cache the
Negotiate credentials?

Many thanks.
Tom


On Thu, Feb 28, 2013 at 1:45 PM, Tom Tom tomtux...@gmail.com wrote:
 On Tue, Feb 26, 2013 at 2:31 AM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 25/02/2013 8:27 p.m., Tom Tom wrote:

 I've attached both cache-traces (squid 3.2.7).

 without_407.txt has the following configuration:
 ...
 ...
 external_acl_type SQUID_KERB_LDAP ttl=7200 children-max=10
 children-startup=1 children-idle=1 negative_ttl=7200 %LOGIN
 /usr/local/squid/libexec/ext_kerberos_ldap_group_acl -g
 INTERNET_USERS
 acl INTERNET_ACCESS external SQUID_KERB_LDAP
 acl AUTHENTICATED proxy_auth REQUIRED
 http_access deny !INTERNET_ACCESS
 http_access deny !AUTHENTICATED
 http_access allow INTERNET_ACCESS AUTHENTICATED
 http_access allow localhost
 http_access deny all
 ...
 ...


 Note for anyone else reading this:
The above was a copy-n-paste typo. The without-407 config has no
 AUTHENTICATED access control definition.
 I think that the without-407.txt file HAS an AUTHENTICATED access
 control definition. The name without-407.txt means (for me) that there
 is NO 407 response. So this trace should show the behavior where
 Squid accepts the request without issuing a 407.
 2013/02/25 07:26:30.936 kid1| Acl.cc(250) cacheMatchAcl:
 ACL::cacheMatchAcl: cache hit on acl 'AUTHENTICATED' (0xaafa60)
 2013/02/25 07:26:30.936 kid1| Acl.cc(312) checklistMatches:
 ACL::ChecklistMatches: result for 'AUTHENTICATED' is 1

 Is it possible that Squid uses its already-populated
 credentials cache (a mapping of e.g. username+IP to
 Kerberos credentials) in the without-407.txt trace? (See above:
 cache hit on acl 'AUTHENTICATED'.)





 In this case, the access.log shows the following:
 Mon Feb 25 08:14:23 2013 15 10.X.X.X TCP_REFRESH_UNMODIFIED/304
 283 GET http://imagesrv.adition.com/banners/750/683036/dummy.gif
 u...@example.com HIER_DIRECT/217.79.188.10 image/gif



 with_407.txt has the following configuration:
 ...
 ...
 external_acl_type SQUID_KERB_LDAP ttl=7200 children-max=10
 children-startup=1 children-idle=1 negative_ttl=7200 %LOGIN
 /usr/local/squid/libexec/ext_kerberos_ldap_group_acl -g
 INTERNET_USERS
 acl INTERNET_ACCESS external SQUID_KERB_LDAP
 acl AUTHENTICATED proxy_auth REQUIRED
 http_access deny !INTERNET_ACCESS
 http_access deny !AUTHENTICATED
 http_access allow INTERNET_ACCESS
 http_access allow localhost
 http_access deny all
 ...
 ...


 In this case, the access.log shows the following:
 Mon Feb 25 08:14:22 2013  0 10.X.X.X TCP_DENIED/407 4136 GET
 http://imagesrv.adition.com/banners/750/683036/dummy.gif - HIER_NONE/-
 text/html
 Mon Feb 25 08:14:22 2013 56 10.X.X.X TCP_REFRESH_UNMODIFIED/304
 354 GET http://imagesrv.adition.com/banners/750/683036/dummy.gif
 u...@example.com HIER_DIRECT/217.79.188.10 image/gif

 The only difference between config1 and config2 is the
 AUTHENTICATED ACL on the http_access allow INTERNET_ACCESS line.

 Many thanks.
 Kind regards,
 Tom


 Thank you. I have an explanation for you. But I'm not exactly happy with
 how it is working in practice ...


 The difference is the fine boundary between authentication and authorization
 with a couple of factors (* marked) leading into the behaviour:

 * Negotiate auth protocol credentials, once delivered, are tied to the TCP
 connection state.

 Seems simple at face value, but HTTP is stateless - i.e. there is no TCP
 connection state persistence between requests. So we have to jack up all
 sorts of pinning and connection persistence just to make our HTTP connections
 stateful for each client using Negotiate.

 Keep in mind that Negotiate statefulness is the anomaly haunting us here.


 * both your traces look like the connection was already set up and had
 previous traffic, so the request credentials should be checked against the
 existing validated credentials.
 Yes, before I took the traces I made a few requests (to force the
 behavior).


 * external_acl_type with %LOGIN is simply an authorization test. Are the
 credentials we have allowing permission for this request to continue?
 yes/no.

 IMPORTANT: authorization takes no notice of the validity of the credentials, or
 of their accuracy. Only their existence and the permissions assigned to them matter.

 It locates credentials anywhere it can and uses those. In *both* your traces
 the external ACL test locates credentials already tied to the TCP connection
 state, which it validates with the helper as _authorized_. The helper says
 OK, as you would expect since these connection credentials were okay on
 previous traffic.

 This is fine and consistent with the external ACL definition as an authorization
 API. It is not intended for use as an authenticator.


  * proxy_auth ACL is an authentication test. Is the request user who they

[squid-users] Squid as proxy with interception

2013-03-11 Thread Magali Bernard

Hello,

After many years with squid as a proxy-cache combined with proxy.pac or
WPAD client configurations, we are considering using squid as a proxy with
interception (WCCP2) on our whole university site.

The reason essentially lies in complaints from users about their browser
configurations, but also in applications that cannot talk to a proxy...

We'd like to know if interception is widely used and approved.
Some feedback, good or bad experiences, would be precious for us.

Regards,



-- 
**
Magali BERNARD - DSI pôle Système, Réseau et Sécurité
Université Jean Monnet Saint-Étienne - FRANCE
-
A: Yes.
 Q: Are you sure ?
 A: Because it reverses the logical flow of conversation.
 Q: Why is top posting annoying in email ?




Re: [squid-users] Re: kerberos auth failing behind a load balancer

2013-03-11 Thread Sean Boran
(sorry for the slow answer, an over-eager spam filter swallowed this msg).

In Wireshark, the server name sent in the ticket is correct
(proxy.example.com), encryption is rc4-hmac and kvno=5.
This is the same kvno as seen in klist -ekt /etc/krb5.keytab (with
des-cbc-crc, des-cbc-md5, arcfour-hmac).

Now, there are two squids behind the balancer; one of them behaves
correctly and accepts Kerberos authentication to the balanced proxy
name. (I had not realised before that the second one worked.) Comparing the
squid and kerberos configs does not explain the difference.

However, on a Windows client, querying the SPN for the balanced name only
lists the squid proxy that works (proxy2), with no mention of proxy3.

C:\temp> cscript spn_query.vbs http/proxy.example.com example.net
CN=proxy2,OU=Ubuntu,OU=Server,..
O,DC=example,DC=net
Class: computer
Computer DNS: proxy2.example.com
-- http/proxy.example.com
-- HTTP/proxy.example.com/proxy2
-- HTTP/proxy.example.com/proxy2.example.com
-- HTTP/proxy2
-- HTTP/proxy2.example.com
-- HOST/proxy2.example.com
-- HOST/PROXY2

Next, I tried to use the Windows tool setspn to add an SPN for proxy3:
setspn -S http/proxy.example.com proxy3
but it says "Duplicate SPN found, aborting operation!",
which makes me think I'm misunderstanding something. Is it not possible to
assign the same SPN to the real names of both squids behind the
balancer?

Thanks,

Sean


On 1 March 2013 21:06, Markus Moeller hua...@moeller.plus.com wrote:
 That should work. What do you see in Wireshark when you look at the traffic
 to the proxy?  If you expand the Negotiate header you should see the
 principal name and kvno. Both must match what is in your keytab (check with
 klist -ekt /etc/keytab).

 Markus


 Sean Boran s...@boran.com wrote in message
 news:caonghjuye0oyoomkquwl5frmnyozfrvuekslbnxyao0kel_...@mail.gmail.com...

 Hi,

 I’ve received (kemp) load balancers to put in front of squids to
 provide failover.
 The failover / balancing  works fine until I enable Kerberos auth on the
 squid.

 Test setup:
 Browser ==> Kemp balancer (proxy.example.com) ==> Squid (proxy3.example.com) ==> Internet

 The client is Windows 7 in an Active Directory domain.
 If the browser proxy is set to proxy3.example.com (bypassing the LB),
 Kerberos auth works just fine, but via the Kemp (proxy.example.com)
 the browser prompts for a username/password, which is not accepted
 anyway.

 Googling on Squid+LBs, the key is apparently to add a principal for the LB,
 e.g.
 net ads keytab add HTTP/proxy.example.com

 In the logs (below), one can see the client sending back a Krb ticket
 to squid, but it rejects it:
 negotiate_wrapper: Return 'BH gss_accept_sec_context() failed:
 Unspecified GSS failure.  
 When I searched on that, one user suggested changing the encryption in
 /etc/krb5.conf. In /etc/krb5.conf I tried the recommended
 squid settings (see below), and also none at all. The results
 were the same. Anyway, if encryption were the issue, it would not work
 either via the LB or directly.


 Analysis:
 -
 When the client sends a request, squid replies with:

 HTTP/1.1 407 Proxy Authentication Required
 Server: squid
 X-Squid-Error: ERR_CACHE_ACCESS_DENIED 0
 X-Cache: MISS from gsiproxy3.vptt.ch
 Via: 1.1 gsiproxy3.vptt.ch (squid)

 OK so far. The client answers with a Kerberos ticket:

 Proxy-Authorization: Negotiate YIIWpgYGKwYBXXX

 UserRequest.cc(338) authenticate: header Negotiate
 YIIWpgYGKwYBXXX
 UserRequest.cc(360) authenticate: No connection authentication type
 Config.cc(52) CreateAuthUser: header = 'Negotiate YIIWpgYGKwYBBQUC
 auth_negotiate.cc(303) decode: decode Negotiate authentication
 UserRequest.cc(93) valid: Validated. Auth::UserRequest '0x20d68d0'.
 UserRequest.cc(51) authenticated: user not fully authenticated.
 UserRequest.cc(198) authenticate: auth state negotiate none. Received
 blob: 'Negotiate
 YIIWpgYGKwYBBQUCoIIWmjCCFpagMDAuBgkqhkiC9xIBAXX
 ..
 UserRequest.cc(101) module_start: credentials state is '2'
 helper.cc(1407) helperStatefulDispatch: helperStatefulDispatch:
 Request sent to negotiateauthenticator #1, 7740 bytes
 negotiate_wrapper: Got 'YR YIIWpgYGKwYBBQXXX
 negotiate_wrapper: received Kerberos token
 negotiate_wrapper: Return 'BH gss_accept_sec_context() failed:
 Unspecified GSS failure.  Minor code may provide more information.


 Logs for a (successful) auth without LB:
 .. as above 
 negotiate_wrapper: received Kerberos token
 negotiate_wrapper: Return 'AF oYGXXA==
 u...@example.net


 - configuration ---
 Ubuntu 12.04 + standard kerberos. Squid 3.2 bzr head from late Jan.
 - squid.conf:
 - debug_options ALL,2 29,9 (to catch auth)
 auth_param negotiate program
 /usr/local/squid/libexec/negotiate_wrapper_auth -d --kerberos
 /usr/local/squid/libexec/negotiate_kerberos_auth -s GSS_C_NO_NAME
 --ntlm /usr/bin/ntlm_auth 

[squid-users] Squid 3.3.2 SMP Problem

2013-03-11 Thread Adam W. Dace
I've updated Bug #3805 a lot, does anybody mind if I move this to the
New Feature Request component?

-- 

Adam W. Dace colonelforbi...@gmail.com

Phone: (815) 355-5848
Instant Messenger: AIM  Yahoo! IM - colonelforbin74 | ICQ - #39374451
Microsoft Messenger - colonelforbi...@live.com

Google Profile: http://www.google.com/profiles/ColonelForbin74


Re: [squid-users] Squid 3.3.2 and SMP

2013-03-11 Thread Alex Rousskov
On 03/09/2013 12:24 AM, Amm wrote:

 In short, for best results and to make sure that each worker uses a separate
 core and they don't end up using the same core, one must use cpu_affinity_map
 as well?
 
 Am I correct?

Based on our experience, yes. Similarly, keeping other significant
activities (e.g., NIC interrupt handling) away from Squid CPU cores may
help with performance. I suspect this "we know better than the kernel!"
approach works well for Squid-dedicated servers. It may not be a good
idea for general-purpose servers where there are many different
processes competing for scarce CPU cycles. Kernel algorithms ought to
work better than rigid affinity schemes in those cases.

BTW, there are command-line tools that can set or change CPU affinity of
a process (e.g., taskset), but cpu_affinity_map is usually easier to use
when it comes to Squid kids.
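
For example, a minimal squid.conf sketch (the core numbers are hypothetical
and depend on your hardware layout):

workers 2
# pin worker kid 1 to core 2 and kid 2 to core 3, leaving core 1 free
cpu_affinity_map process_numbers=1,2 cores=2,3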


Cheers,

Alex.



Re: [squid-users] squid3 SMP aufs storage/process

2013-03-11 Thread Alex Rousskov
On 03/09/2013 12:48 AM, jiluspo wrote:

 Therefore squid SMP is not stable.

Support for ufs caching is not related to stability IMO, but perhaps
your definition of stable is different from mine.


 if we need to store more than 32KB the
 best way is to use multi-instance and peering...

Or use the unofficial Large Rock branch. It all depends on individual
circumstances and needs. There is no single Squid version that works
well for everybody.


 When will the rock support for large content probably be finished?

The Large Rock branch on Launchpad is ready for testing. It will
probably be submitted for Squid Project review in a few months.


HTH,

Alex.



 -Original Message-
 From: Alex Rousskov [mailto:rouss...@measurement-factory.com]
 Sent: Saturday, March 09, 2013 3:03 PM
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] squid3 SMP aufs storage/process

 On 03/08/2013 11:21 PM, jiluspo wrote:

 If squid3 is configured with a cache_dir aufs per process, would they be
 shared with other processes?

 No. Ufs-based store modules, including aufs, are currently not
 SMP-aware. If you use them in SMP Squid (without protecting them with
 SMP conditionals), your cache will get corrupted.

 SMP conditionals in squid.conf can be used to prevent corruption, but
 they also prevent sharing of cache_dirs among workers.

 Rock store and memory cache are SMP-aware, share cache among workers,
 and do not need SMP macros, but they have their own limitations (we are
 actively working on addressing most of them).


 Pick your poison,

 Alex.




Re: Res: [squid-users] squid 3.2.0.5 smp scaling issues

2013-03-11 Thread paulm
Excuse me David,

What are the ab parameters that you used to test against squid?

Thanks, Paul



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/squid-3-2-0-5-smp-scaling-issues-tp3395333p4658947.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] Squid 3.3.2 and SMP

2013-03-11 Thread Alex Rousskov
On 03/11/2013 03:05 AM, Ralf Hildebrandt wrote:
 * Alex Rousskov rouss...@measurement-factory.com:
 On 03/08/2013 08:11 AM, Adam W. Dace wrote:
 Does anyone have a simple example configuration for running Squid
 3.3.2 with multiple workers?

 You can just add "workers 2" to squid.conf.default.
 
 That's really all?

Yes, that was a complete answer to the above question.

There is a lot more to running SMP Squid in production, of course. The
SmpScale wiki page documents some of the known caveats.


Cheers,

Alex.





Re: [squid-users] stale files on disk

2013-03-11 Thread Alex Rousskov
On 03/10/2013 03:13 PM, Hussam Al-Tayeb wrote:

 I am using squid 3.1.23 and am not planning on migrating to a higher version
 for another few months.
 squid says "8 Duplicate URLs purged".
 It will not clear the actual stale files from disk,
 so cache.log says there are 8 fewer files than the find command says.

That sounds like a Squid bug to me. I do not recall seeing code that
would delay purging ufs files from disk (but I am not a ufs expert).


 I fall into this situation once every few weeks and I end up purging the
 whole cache or restoring a backup of the disk cache.

Why? Are you afraid of Squid serving those stale responses to users?
That is unlikely even if Squid has a bug deleting those files from disk.


 Is there any way to clear the stale files from disk instead?

If this is a bug, and it still exists in recent versions, then it should
be reported and fixed.

Meanwhile, if those leftovers really bother you, you can add custom (or
enable higher-level) debugging to figure out which stale files were not
deleted and delete them manually (or even from a script). You may be
able to do the same by analyzing store.log as well. Sorry, I do not have
specific step-by-step instructions for doing this.


Alex.



Re: [squid-users] squid3 SMP aufs storage/process

2013-03-11 Thread Alex Rousskov
On 03/09/2013 09:19 AM, Adam W. Dace wrote:

 being able to use configuration like this sure makes it easier:
 
 # Uncomment and adjust the following to add a disk cache directory.
 cache_dir aufs /usr/local/squid/var/cache${process_number}/squid 1024 16 256


One should also add a squid.conf conditional to exclude the above line
from being seen by the Coordinator process. Here is a sketch:

if ${process_number} = coordinator process number here
# do nothing -- this is Coordinator
else
# configure all SMP-unaware, non-shared cache_dirs here
cache_dir aufs ...${process_number}...
endif

Otherwise, Coordinator will create (during squid -z) and then open the
corresponding cache_dir, creating confusion and possibly running into bugs.
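
For illustration, here is a hypothetical concrete instance of that sketch,
assuming two workers and per-worker cache directories (the paths and numbers
are examples only):

workers 2
if ${process_number} = 1
cache_dir aufs /usr/local/squid/var/cache1/squid 1024 16 256
endif
if ${process_number} = 2
cache_dir aufs /usr/local/squid/var/cache2/squid 1024 16 256
endif

Written this way, neither the Coordinator nor any kid other than the two
workers ever sees a cache_dir line, which avoids the squid -z problem
described above.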


HTH,

Alex.


 On Sat, Mar 9, 2013 at 1:48 AM, jiluspo jilu...@smartbro.net wrote:
 Therefore squid SMP is not stable. if we need to store more than 32KB the
 best way is to use multi-instance and peering...I wish I could use multicast
 in localhost.

 When will the rock support for large content probably be finished?

 I've tried squid3 HEAD in production with SMP rock storage only, and it crashed
 with BUG 3279: HTTP reply without Date:

 At 1k req/sec: squid3 (storied worker2) vs squid2 (storeurl) with COSS - squid2
 gets a higher hit rate.
 And to be honest, squid2 HEAD runs more stably than squid3 stable.

 -Original Message-
 From: Alex Rousskov [mailto:rouss...@measurement-factory.com]
 Sent: Saturday, March 09, 2013 3:03 PM
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] squid3 SMP aufs storage/process

 On 03/08/2013 11:21 PM, jiluspo wrote:

 If squid3 is configured with a cache_dir aufs per process, would they be
 shared with other processes?

 No. Ufs-based store modules, including aufs, are currently not
 SMP-aware. If you use them in SMP Squid (without protecting them with
 SMP conditionals), your cache will get corrupted.

 SMP conditionals in squid.conf can be used to prevent corruption, but
 they also prevent sharing of cache_dirs among workers.

 Rock store and memory cache are SMP-aware, share cache among workers,
 and do not need SMP macros, but they have their own limitations (we are
 actively working on addressing most of them).


 Pick your poison,

 Alex.


 
 
 



Re: Res: [squid-users] squid 3.2.0.5 smp scaling issues

2013-03-11 Thread Amos Jeffries

On 12/03/2013 8:11 a.m., paulm wrote:

Excuse me David,

What are the ab parameters that you used to test against squid?


-n for request count
-c for concurrency level

SMP in Squid shares a listening port so -c 1 will still test both 
workers. But the results are more interesting as you vary client count 
versus request count.


For a worst-case traffic scenario, test with a guaranteed MISS response;
for the best case, test with a small HIT response.


Other than that whatever you like. Using a FQDN you host yourself is polite.
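
For example (the URL, request count, concurrency and proxy port below are just
placeholders):

# 10000 requests, 20 concurrent clients, all sent through the proxy
ab -n 10000 -c 20 -X proxy.example.com:3128 http://www.example.com/small-object.html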

Amos


Re: [squid-users] Squid as proxy with interception

2013-03-11 Thread Amos Jeffries

On 12/03/2013 4:33 a.m., Magali Bernard wrote:

Hello,

After many years with squid as a proxy-cache combined with proxy.pac or
WPAD client configurations, we are considering using squid as a proxy with
interception (WCCP2) on our whole university site.

The reason essentially lies in complaints from users about their browser
configurations, but also in applications that cannot talk to a proxy...

We'd like to know if interception is widely used and approved.
Some feedback, good or bad experiences, would be precious for us.


It is widely used, and equally widely hated.

Your best choice of configuration is to use multiple layers of client 
configuration:

 
http://wiki.squid-cache.org/SquidFaq/ConfiguringBrowsers#Recommended_network_configuration

Since you have the WPAD/PAC layer(s) currently working *keep them*.
Just add the interception as a backup method for traffic which bypasses 
the WPAD/PAC.


Using the layered approach you get full proxy functionality with any 
software which correctly supports WPAD/PAC, while still getting the 
proxy access control and some caching with other software despite the 
interception limitations.


Amos


Re: [squid-users] Squid 3.3.2 is available

2013-03-11 Thread Amos Jeffries

On 8/03/2013 11:33 p.m., Helmut Hullen wrote:

Hello, Amos,

You wrote on 08.03.13:


The Squid HTTP Proxy team is very pleased to announce the
availability of the Squid-3.3.2 release!

Compiling it on one of my machines stopped with

depbase=`echo peer_proxy_negotiate_auth.o | sed 's|[^/]*$|.deps/
|;s|\.o$||'`;\
g++ -DHAVE_CONFIG_H -DDEFAULT_CONFIG_FILE=\"/etc/squid/squid.conf\"
-DDEFAULT_SQUID_DATA_DIR=\"/usr/share/squid\"
-DDEFAULT_SQUID_CONFIG_DIR=\"/etc/squid\"  -I.. -I../include
-I../lib -I../src -I../include  -I/usr/heimdal/include

[...]


deprecated (declared at /usr/heimdal/include/krb5-protos.h:2284)
[-Werror=deprecated-declarations]
cc1plus: all warnings being treated as errors
make[3]: *** [peer_proxy_negotiate_auth.o] Error 1
make[3]: Leaving directory `/tmp/SBo/squid-3.3.2/src'
This may be related in some strange way to the kerberos installation;
on another machine compiling worked, but when running it stopped
with a kerberos-related error message.

(My kerberos installations may be corrupt, but that should be
another problem.)

It would be best to sort that krb5 stuff out first IMO. The above
warnings are about internal errors in the krb5 installation.

Next try: compiling on a machine without any kerberos installation
(neither MIT nor Heimdal).

stopped with

libtool: link: cannot find the library `/usr/lib/libcom_err.la' or
unhandled argument `/usr/lib/libcom_err.la'

That file is at least part of the heimdal package.


Once you have krb5 sorted out, if it is still halting the Squid build you
can use --disable-strict-error-checking to get Squid to build. The
usual lack of guarantees about both operation and future build
success applies if you use it regularly, though.

Isn't there any switch --without kerberos or so?


Unfortunately no. Squid should be using the krb5-config tool to locate 
the krb5 dependencies and bits though.


Amos


RE: [squid-users] Squid as proxy with interception

2013-03-11 Thread James Harper
 
 Hello,
 
 After many years with squid as a proxy-cache combined with proxy.pac or
 WPAD client configurations, we are considering using squid as a proxy with
 interception (WCCP2) on our whole university site.
 
 The reason essentially lies in complaints from users about their browser
 configurations, but also in applications that cannot talk to a proxy...
 
 We'd like to know if interception is widely used and approved.
 Some feedback, good or bad experiences, would be precious for us.
 

If the first thing a student tries to do on your network is to check their 
facebook or google something, then they will get an error, as you can't (or 
shouldn't, for a university network) do interception proxying with HTTPS. A lot 
of other things are HTTPS by default these days too.

Maybe put an interception proxy in place as a backup, but stick with the regular 
proxy as well.

James



Re: [squid-users] In which mode squid runs with ruckus accesspoint

2013-03-11 Thread Amos Jeffries

On 11/03/2013 11:07 p.m., benjamin fernandis wrote:

Hi,

We are integrating a Squid box with a Ruckus access point and a captive portal.

We have WiFi users in the network and a captive portal for them.

For WiFi, we are using a Ruckus access point and configure it
to forward web traffic


How?


  to the Squid box, where we configure
url_rewrite, which only allows certain URLs and rewrites
everything else to the captive portal URL.


Do not re-write; that can corrupt the client cache state, which is 
particularly bad for intercepted traffic.


Use 30x HTTP redirect responses instead. That can be set up with an ACL 
and deny_info, or by sending a 30x status code from the url_rewrite helper.
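
A minimal sketch of the deny_info approach (the ACL name, list file and portal
URL here are hypothetical):

acl portal_allowed dstdomain "/etc/squid/portal_allowed_domains"
deny_info 302:http://portal.example.com/login portal_allowed
http_access deny !portal_allowed

The redirect is returned because portal_allowed is the last ACL named on the
denying http_access line, which is what deny_info keys on.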



In which mode should Squid run here - intercept, tproxy, or something else?


Either. NAT and TPROXY meet different requirements - such as the traffic 
type (IPv4 / IPv6), how transparent you want it (NAT = half transparent, 
TPROXY = fully transparent), and what your skill level at configuring 
packet routing is (beginners: NAT, experts: TPROXY).



In the Ruckus, we simply redirect traffic to an IP:port.



Redirect *how*?

Amos


Re: [squid-users] Squid 3.3.2 is available

2013-03-11 Thread Amos Jeffries

On 12/03/2013 11:50 a.m., Amos Jeffries wrote:

On 8/03/2013 11:33 p.m., Helmut Hullen wrote:

[...]

Isn't there any switch --without kerberos or so?


Unfortunately no. Squid should be using the krb5-config tool to locate 
the krb5 dependencies and bits though.


Amos



Actually, please try --without-krb5-config, that should do the trick.
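
i.e. something along the lines of (other configure options elided):

./configure --without-krb5-config ...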

Amos