[squid-users] Re: Using squid as an SSL/TLS endpoint/unwrapper for other protocols

2012-05-08 Thread Henrik Nordström
tis 2012-05-08 klockan 10:48 +0500 skrev Ahmed Talha Khan:

 I am interested in knowing how I can use squid as an SSL endpoint for
 protocols other than HTTPS.

Short answer, no. Squid is an HTTP proxy.

 The scenario is that i want to use its SSL
 handling capability and use it for some other protocol which is going
 inside SSL. This requires hooks into the squid code-base. I assume
 that the design being modular, will offer ssl handling layer with
 interfaces connecting it to the main Data Processing engine for HTTP.

Not really modular at that level.

 I want to tap into that interface and use the ssl layer output, which
 should be plain-traffic. Since SSL output is not protocol specific, i
 would be able to use it for any protocol that i want.

I think you are looking for stunnel which is a generic SSL wrapper for
any TCP protocol.

Both Squid and stunnel use OpenSSL for the SSL part.
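
For illustration, a minimal stunnel.conf along these lines should unwrap
SSL on one port and hand the plain protocol to a local backend (the ports,
paths and service name here are placeholders, not from this thread):

  cert = /etc/stunnel/server.pem
  key  = /etc/stunnel/server.key

  [ssl-unwrap]
  ; accept TLS/SSL connections and forward the decrypted stream locally
  accept  = 0.0.0.0:8443
  connect = 127.0.0.1:8080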

Regards
Henrik



Re: [squid-users] Extract session 5-tuples for HTTP requests in squid

2012-04-18 Thread Henrik Nordström
ons 2012-04-18 klockan 11:08 +0500 skrev Ahmed Talha Khan:

 I want to extract session 5-tuples inside squid and send them to an
 ICAP servers as an argument to the service being invoked. By session
 5-tuple i mean the following
 
 Source IP
 Destination IP,
 Source Port,
 Destination Port,
 Protocol
 
 for a specific HTTP request. These are the 5-tuples that uniquely
 identify a traffic flow. Is there a way to do it? Any place i can put
 such hooks? Or does squid have some other way of identifying
 individual requests from different IPs/Ports ?

The above identifies a flow at TCP level, not a request. Within a flow
there may be multiple requests (connection keep-alive), possibly even from
different clients when there is a proxy involved.

It's further complicated by Squid being a proxy, so you have two
independent TCP flows, client-squid, and squid-nexthopserver, and
depending on which ICAP hook you use and the details of the
request/response you may have any combination of the two available
within Squid.

To simplify matters to a manageable level, most choose to identify
requests by the following tuple instead:

  - Time, high resolution and NTP controlled.
  - Requesting IP (and optionally port but usually port is ignored).
  - Requested URL

This is generally sufficient to identify a single request even in high
traffic environments, even if there is a chance of collisions.

Information about the requesting client IP is sent as part of the ICAP
transaction by default in the X-Client-IP ICAP header. Maybe you also
have client information in the X-Forwarded-For HTTP request header.
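
As a hedged squid.conf sketch (check the exact directive names against your
version's documentation), sending the client IP and username to the ICAP
service is controlled by:

  icap_send_client_ip on
  icap_send_client_username on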

If you want to add more information then
Adaptation::Icap::ModXact::makeRequestHeaders is the method where the
ICAP request headers are filled in.

Regards
Henrik



Re: [squid-users] Extract session 5-tuples for HTTP requests in squid

2012-04-18 Thread Henrik Nordström
ons 2012-04-18 klockan 14:03 +0500 skrev Ahmed Talha Khan:

 Thanks for the info. I am aware but that these are TCP level
 identifiers. I digged into it and saw that class HttpRequest has
 members client_ip, host_ip, port and my_addr. Client_ip is very
 obvious and i can see that the X-Client-IP field is populated with it.
 What about the following fields
 
 host_ip: is this the ip of the origin server to which the request is
 going? And will it remain same in the response?

Not sure. Can't find any host_ip in my sources. Which version are you
looking at?

But the destination server's IP is not known until the request
is forwarded, and then only if the request is forwarded directly and not
via another proxy. Until then the destination is the requested host
name.

 port: is this the port from which the request originated? Source port
 of the request? What will be the value in response from the server?

port is the port number from parsing the requested URL.

 my_addr: This seems like the ip on which squid is listening. Correct
 me if i am wrong

Yes.

 How to get destination port. It is either http (80) or https(443). But
 how can i differentiate? How do i know what was the destination port
 of the request?

Proxy requests are sent to the proxy, not the destination server. HTTP
requests are addressed by the requested URL, not by IP:port.

The URL tells which host name and port the request is targeted at.

Regards
Henrik



Re: [squid-users] Extract session 5-tuples for HTTP requests in squid

2012-04-18 Thread Henrik Nordström
ons 2012-04-18 klockan 17:41 +0500 skrev Ahmed Talha Khan:

 What do you mean by until- then here? Does this have to do with the
 vectoring point, ICAP coming in PRE_CACHE before the request going
 out?

Yes.

Regards
Henrik



Re: [squid-users] squid + sslbump compile errors

2012-04-02 Thread Henrik Nordström
mån 2012-04-02 klockan 16:47 +0930 skrev Michael Hendrie:
 On 06/02/2012, at 10:08 AM, Henrik Nordström wrote:
 
  sön 2012-02-05 klockan 14:09 -0600 skrev James R. Leu:
  
  certificate_db.cc: In member function ‘void Ssl::CertificateDb::load()’:
  certificate_db.cc:455:1: error: ‘index_serial_hash_LHASH_HASH’ was not 
  declared in this scope
  
  Hm.. fails for me as well. Please try the attached patch.
 
 Getting the same error as the original poster with 3.2.0.16.  Patch fixes 
 part of the errors but not all.  Remaining is :
 
 certificate_db.cc: In member function ‘bool 
 Ssl::CertificateDb::deleteInvalidCertificate()’:
 certificate_db.cc:522: error: invalid conversion from ‘void*’ to ‘const 
 _STACK*’
 certificate_db.cc:522: error:   initializing argument 1 of ‘void* 
 sk_value(const _STACK*, int)’
 certificate_db.cc: In member function ‘bool 
 Ssl::CertificateDb::deleteOldestCertificate()’:
 certificate_db.cc:553: error: invalid conversion from ‘void*’ to ‘const 
 _STACK*’
 certificate_db.cc:553: error:   initializing argument 1 of ‘void* 
 sk_value(const _STACK*, int)’
 certificate_db.cc: In member function ‘bool 
 Ssl::CertificateDb::deleteByHostname(const std::string)’:
 certificate_db.cc:570: error: invalid conversion from ‘void*’ to ‘const 
 _STACK*’
 certificate_db.cc:570: error:   initializing argument 1 of ‘void* 
 sk_value(const _STACK*, int)’
 
 This is with Scientific Linux 6.1 (x86_64):
 OpenSSL 1.0.0-fips 29 Mar 2010
 gcc version 4.4.5 20110214 (Red Hat 4.4.5-6) (GCC) 

The problem is due to a RedHat patch to OpenSSL 1.0 where OpenSSL lies
about its version. Not yet sure what the best way to solve this is, but
I guess we need to make configure probe for these OpenSSL features
instead of relying on the advertised version if we want to support
--enable-ssl-crtd on these OS versions.

It should be fixed in Fedora rawhide, but apparently can't be fixed for
released versions of Fedora or RHEL having the hacked openssl version.

Regards
Henrik



Re: [squid-users] Roadmap Squid 3.2

2012-03-07 Thread Henrik Nordström
ons 2012-03-07 klockan 10:35 -0700 skrev Alex Rousskov:

 I think it is neither reasonable nor practical to make Squid v3.2
 stable designation dependent on 2.x bugs, especially those filed years
 ago with insufficient information. Squid v3.2 can be stable regardless
 of what bugs the old 2.x version had.

Yes.

The 3.2 release should not be held back by Squid-2 bugs; only by confirmed
Squid-3.2 bugs affecting new functionality or indicating a regression from
3.1, plus any known significant security issues which may impact 3.2.

We can't aim for each new release to fix all known bugs in all earlier
releases, but it's reasonable that we do not knowingly introduce new bugs
in old functionality or release new functionality known not to work well
enough.

It is acceptable to have some known bugs in new functionality, as long
as they do not pose any security issues or make the functionality
useless.

Regards
Henrik



Re: [squid-users] SSLBump SSL error (FAO Henrik)

2012-02-19 Thread Henrik Nordström
tis 2012-02-14 klockan 12:20 + skrev Alex Crow:

 Strangely s_client without any additional parameters seems to work:


 OpenSSL s_client -connect applyonline.abbeynational.co.uk:443
 CONNECTED(0003)

Does not work for me when testing this site.

$ openssl s_client -connect applyonline.abbeynational.co.uk:443
CONNECTED(0003)
140471392831296:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake
failure:s23_lib.c:177:
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 0 bytes and written 113 bytes
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
---

Which version of OpenSSL are you testing with?

$ openssl version
OpenSSL 1.0.0g-fips 18 Jan 2012


 New, TLSv1/SSLv3, Cipher is RC4-MD5

And I get that result if I enable SSLv2 ciphers, making OpenSSL send an
SSLv2-formatted hello handshake.

$ openssl s_client -connect applyonline.abbeynational.co.uk:443 -cipher
'ALL:!COMPLEMENTOFDEFAULT'
[...]
New, TLSv1/SSLv3, Cipher is RC4-MD5


 Unless that verify return code is a problem?

For me it's not.

 I really don't know where to go from here...

Fire up wireshark and stare at any difference in the SSL handshake
presented by OpenSSL when called by Squid compared to when using the
openssl s_client command.

Just tried, and it's sending an SSLv3/TLSv1 handshake even with the
sslproxy_cipher setting set to the same value that works with openssl s_client.

But seriously, the right action is to complain to the site owners to
have the site fixed. An SSLv3/TLSv1 server requiring the initial client
hello handshake to be SSLv2 with SSLv3/TLSv1 ciphers, and failing if it
sees an SSLv3/TLSv1 handshake, is just broken.

Regards
Henrik



Re: [squid-users] squid sessions behind NAT

2012-02-19 Thread Henrik Nordström
tor 2012-02-16 klockan 23:32 +0400 skrev Vyacheslav Maliev:
 Thanks for your answer, but both variants are not suitable in my
 situation. My proxy is working in transparent mode and it is not
 possible to authenticate in this mode as far as I know. I can't expose
 networks behind routers because there might be duplicated networks and
 routes.

You can expose networks behind routers by using Squid proxies or another
proxy that adds X-Forwarded-For to the request.

Then base the session on %SRC %{X-Forwarded-For}
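
A rough squid.conf sketch of that idea (the helper path and timeouts are
assumptions, not from this thread):

  external_acl_type session ttl=60 negative_ttl=0 %SRC %{X-Forwarded-For} /usr/lib/squid/squid_session -t 3600
  acl existing_session external session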

Regards
Henrik



Re: [squid-users] error processing the URL

2012-02-12 Thread Henrik Nordström
ons 2012-02-08 klockan 16:50 -0300 skrev Martin Nigoul:
 Thanks!
 
 You may be right about the session cookies but we are behind a
 firewall so we have no way other than our proxy parents to get to any
 internet site.
 prefer_direct off was in place as default.
 never_direct allow all

Have you tried what I suggested before? Using cache_peer_access to force
this site via one of the parents only?

Regards
Henrik



Re: [squid-users] OWA Reverse Proxy Problems

2012-02-12 Thread Henrik Nordström
tor 2012-02-09 klockan 17:05 +0100 skrev sauro...@gmx.de:
 Hi all,
 i have huge problem with getting Squid working as a reverse proxy for OWA. 
 I have created a certificate request on my Windows Server 2008, then I
 have created a certificate and converted it to .pfx. This one I could
 get into IIS and enable it to my DefaultWebsite in IIS and OWA. So far
 so good

What site name have you configured in OWA?

The recommended setup is to use a hostname, first verify that the OWA
server responds properly to this hostname, and then introduce the reverse
proxy in between, changing the hostname to point to the reverse proxy
instead of OWA.

Accessing directly by IP is NOT RECOMMENDED.

I also recommend using https both client-squid and squid-owa for
simplicity.


 visible_hostname my.dyndns.org
 https_port 192.168.1.199:443 cert=/usr/local/src/sslowa/my.dyndns.org.pem 
 key=/usr/local/src/sslowa/my.dyndns.org.key defaultsite=192.168.1.249

defaultsite SHOULD NOT be the internal IP of OWA. It should be the same
as the hostname you use in the https:// URL. If unsure then use vhost
instead and forget about defaultsite.

Based on your acls below I would guess your OWA server name is
my.dyndns.org?

 #cache_peer 192.168.1.249 parent 80 0 no-query originserver login=PASS 
 front-end-https=on name=owaServer
 cache_peer 192.168.1.249 parent 443 0 no-query originserver login=PASS 
 front-end-https=on name=owaServer

front-end-https is only for when you use https client-squid but http
squid-owa.

Port 443 is https so you need the ssl flag there.

 #cache_peer 192.168.1.249 parent 443 0 no-query originserver login=PASS ssl 
 sslcert=/usr/local/src/sslowa/my.dyndns.org.key name=owaServer

No need to specify an SSL client certificate for use in the connection
to OWA.

cache_peer 192.168.1.249 parent 443 0 no-query originserver login=PASS ssl 
name=owaServer

 acl OWA dstdomain my.dyndns.org
 cache_peer_access owaServer allow OWA
 never_direct allow OWA

This is fine, assuming your OWA name is my.dyndns.org, and you correct
the https_port and cache_peer parts above.
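
Putting the corrected pieces together, the relevant part of squid.conf could
look roughly like this (a sketch using the names from this thread; details
may need adjusting):

  https_port 192.168.1.199:443 accel defaultsite=my.dyndns.org cert=/usr/local/src/sslowa/my.dyndns.org.pem key=/usr/local/src/sslowa/my.dyndns.org.key
  cache_peer 192.168.1.249 parent 443 0 no-query originserver login=PASS ssl name=owaServer
  acl OWA dstdomain my.dyndns.org
  cache_peer_access owaServer allow OWA
  never_direct allow OWA
  http_access allow OWA
  http_access deny all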

 # lock down access to only query the OWA server!
 http_access allow OWA
 http_access deny all

 miss_access allow OWA
 miss_access deny all

You don't need miss_access.

Regards
Henrik



Re: [squid-users] Fwd: Cipher Suites

2012-02-12 Thread Henrik Nordström
fre 2012-02-10 klockan 04:33 -0500 skrev PS:

 It seems like every site that I connect to while using Squid, the
 server always chooses Cipher Suite: TLS_RSA_WITH_CAMELLIA_256_CBC_SHA
 (0x0084). I'm not sure why. Exactly what does the cipher option do?

The cipher string sets the list of SSL ciphers Squid accepts.

SSL then negotiates the best cipher supported by both sides of the
connection.

Normally it's the client who has the last say on which of the mutually
supported ciphers should be used, but servers MAY override if they
insist (within the mutually supported set of ciphers).

Squid is both server and client depending on which connection you look
at. In the client-squid connection it's a server, and in the
squid-webserver connection it's a client.

Note: the above description only applies to ssl-bump or reverse proxying. In
normal tunneling of SSL, Squid is neither server nor client, only relaying
the encrypted traffic as-is between the client and the requested server.
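
For reference, a hedged sketch of where the cipher lists are set in
squid.conf (verify the option names against your version's documentation):

  # ciphers Squid offers when acting as a server (reverse proxy / ssl-bump port)
  https_port 443 accel cert=/path/to/cert.pem cipher=HIGH:MEDIUM:!aNULL
  # ciphers Squid offers when acting as a client towards origin servers
  sslproxy_cipher HIGH:MEDIUM:!aNULL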

Regards
Henrik



Re: [squid-users] Squid 3.2.0.14: failed to select source for ...

2012-02-12 Thread Henrik Nordström
fre 2012-02-10 klockan 11:31 +0100 skrev Helmut Hullen:
 Hallo, squid-users,
 
 my self made squid 3.2.0.14 sometimes produces messages like
 
 
 Jan 30 08:56:58 Arktur squid[4263]: Failed to select source for 'http:// 
 ivwbox.de/'
 Jan 30 08:56:58 Arktur squid[4263]:   always_direct = 0
 Jan 30 08:56:58 Arktur squid[4263]:never_direct = 0
 Jan 30 08:56:58 Arktur squid[4263]:timedout = 0

Reverse proxy with no matching cache_peer?

Regards
Henrik



Re: [squid-users] Squid 3.2.0.14: failed to select source for ...

2012-02-12 Thread Henrik Nordström
lör 2012-02-11 klockan 02:07 +1300 skrev Amos Jeffries:

 Direct access is permitted, but DNS produced no usable results.

Should not result in failed to select source...

Regards
Henrik



Re: [squid-users] samba pdc join itself

2012-02-12 Thread Henrik Nordström
sön 2012-02-12 klockan 13:12 +0100 skrev zumike:

 How can I to join for the PDC itself?

You don't. It's already joined when it created the domain.

Regards
Henrik



Re: [squid-users] maximum_object_size wrong in cachemgr.cgi ?

2012-02-12 Thread Henrik Nordström
tor 2012-02-09 klockan 23:13 -0800 skrev babajaga:

 Accepted object sizes: 262144 - (unlimited) bytes

Where in cachemgr do you see this message?

Regards
Henrik



RE: [squid-users] Squid/NTLM and site timeouts

2012-02-12 Thread Henrik Nordström
sön 2012-02-12 klockan 14:07 + skrev Jason Gauthier:

 In regards to this log entry:
 
 1329010018.324  1 192.168.71.117 TCP_DENIED/407 4067 GET 
 http://www.pendulus.org/loadshortpause.php - NONE/- text/html
 1329010018.473  0 192.168.71.117 TCP_DENIED/407 4332 GET 
 http://www.pendulus.org/loadshortpause.php - NONE/- text/html
 1329010048.720  30194 192.168.71.117 TCP_MISS/200 330 GET 
 http://www.pendulus.org/loadshortpause.php jgauthier DIRECT/69.135.186.43 
 text/html
 
 Except the server is contacted at 1329010018, not 1329010048.

Correct.

The access.log timestamp is when the response is completed. The request
arrived at Squid at timestamp - duration.

In the above numbers the request was parsed by Squid at

  1329010048.720 s -  30194 ms = 1329010018.526 s

and forwarded slightly after.

Regards
Henrik



Re: [squid-users] Squid 3.2.0.14: failed to select source for ...

2012-02-12 Thread Henrik Nordström
sön 2012-02-12 klockan 15:19 +0100 skrev Helmut Hullen:

  Jan 30 08:56:58 Arktur squid[4263]: Failed to select source for
  'http:// ivwbox.de/'
  Jan 30 08:56:58 Arktur squid[4263]:   always_direct = 0
  Jan 30 08:56:58 Arktur squid[4263]:never_direct = 0
  Jan 30 08:56:58 Arktur squid[4263]:timedout = 0
 
  Reverse proxy with no matching cache_peer?
 
 No. classic proxy.
 
 Which part of the squid.conf might you need?

None. I saw Amos' explanation. But it's a bug.

Regards
Henrik



RE: [squid-users] Squid/NTLM and site timeouts

2012-02-12 Thread Henrik Nordström
sön 2012-02-12 klockan 19:01 + skrev Jason Gauthier:

 I attempted to add persistent_request_timeout 6 minutes, but that
 did not achieve the desired effect.

That makes Squid wait at most 6 minutes for a new request after the
first completed, closing the connection if no new request is seen.

It does not terminate anything.

client_lifetime is the setting closest to what you describe, but be
warned that it's very blunt, and is not related to requests at all.

Regards
Henrik



Re: [squid-users] SSLBump SSL error

2012-02-05 Thread Henrik Nordström
sön 2012-02-05 klockan 17:52 + skrev Alex Crow:

 One example I know can reproduce this every time is:
 
 https://applyonline.abbeynational.co.uk/olaWeb/OLALogonServlet?action=prepare&application=OnlineBankingRegistrationServlet&js=on

That's a broken server: it requires the initial client hello handshake to
be SSLv2 compatible, then requires an immediate protocol upgrade to SSLv3
or TLSv1, but fails if the initial handshake is SSLv3 or TLSv1. OpenSSL in
somewhat current versions by default disables all use of SSLv2 due to
numerous weaknesses in the SSLv2 protocol and as a result normally sends
an SSLv3 client hello handshake.

It's likely to hit problems with some newer browsers as well, as SSL/TLS
security is being tightened up.

A workaround is to set ciphers to 'ALL:!COMPLEMENTOFDEFAULT' which
somehow magically enables SSLv2 again. But it's not a very good idea as
it may also enable some SSLv2 related attacks.

Regards
Henrik



Re: [squid-users] squid + sslbump compile errors

2012-02-05 Thread Henrik Nordström
sön 2012-02-05 klockan 14:09 -0600 skrev James R. Leu:

 I get the following errors:
 
 ufs/store_dir_ufs.cc: In member function 'virtual void 
 UFSSwapDir::statfs(StoreEntry) const':
 ufs/store_dir_ufs.cc:321:55: error: unable to find string literal operator 
 'operator PRIu64'

What compiler and operating system are you compiling Squid on?


 I was able to 'resolve' the above by using %jd instead
 of the PRIu64

%jd? Should be %lld

and compat/types.h should automatically define it as suitable if not
defined by the compiler headers.

 certificate_db.cc: In member function ‘void Ssl::CertificateDb::load()’:
 certificate_db.cc:455:1: error: ‘index_serial_hash_LHASH_HASH’ was not 
 declared in this scope

Hm.. fails for me as well. Please try the attached patch.

Regards
Henrik

=== modified file 'src/ssl/certificate_db.cc'
--- src/ssl/certificate_db.cc	2012-01-20 18:55:04 +
+++ src/ssl/certificate_db.cc	2012-02-05 23:35:46 +
@@ -445,7 +445,7 @@
 corrupt = true;
 
 // Create indexes in db.
-#if OPENSSL_VERSION_NUMBER >= 0x104fL
+#if OPENSSL_VERSION_NUMBER >= 0x1000L
 if (!corrupt && !TXT_DB_create_index(temp_db.get(), cnlSerial, NULL, LHASH_HASH_FN(index_serial), LHASH_COMP_FN(index_serial)))
 corrupt = true;
 
@@ -484,7 +484,7 @@
 void Ssl::CertificateDb::deleteRow(const char **row, int rowIndex)
 {
 const std::string filename(cert_full + "/" + row[cnlSerial] + ".pem");
-#if OPENSSL_VERSION_NUMBER >= 0x104fL
+#if OPENSSL_VERSION_NUMBER >= 0x1000L
 sk_OPENSSL_PSTRING_delete(db.get()->data, rowIndex);
 #else
 sk_delete(db.get()->data, rowIndex);
@@ -492,7 +492,7 @@
 
 const Columns db_indexes[]={cnlSerial, cnlName};
 for (unsigned int i = 0; i < countof(db_indexes); i++) {
-#if OPENSSL_VERSION_NUMBER >= 0x104fL
+#if OPENSSL_VERSION_NUMBER >= 0x1000L
 if (LHASH_OF(OPENSSL_STRING) *fieldIndex = db.get()->index[db_indexes[i]])
 lh_OPENSSL_STRING_delete(fieldIndex, (char **)row);
 #else
@@ -513,7 +513,7 @@
 return false;
 
 bool removed_one = false;
-#if OPENSSL_VERSION_NUMBER >= 0x104fL
+#if OPENSSL_VERSION_NUMBER >= 0x1000L
 for (int i = 0; i < sk_OPENSSL_PSTRING_num(db.get()->data); i++) {
 const char ** current_row = ((const char **)sk_OPENSSL_PSTRING_value(db.get()->data, i));
 #else
@@ -538,14 +538,14 @@
 if (!db)
 return false;
 
-#if OPENSSL_VERSION_NUMBER >= 0x104fL
+#if OPENSSL_VERSION_NUMBER >= 0x1000L
 if (sk_OPENSSL_PSTRING_num(db.get()->data) == 0)
 #else
 if (sk_num(db.get()->data) == 0)
 #endif
 return false;
 
-#if OPENSSL_VERSION_NUMBER >= 0x104fL
+#if OPENSSL_VERSION_NUMBER >= 0x1000L
 const char **row = (const char **)sk_OPENSSL_PSTRING_value(db.get()->data, 0);
 #else
 const char **row = (const char **)sk_value(db.get()->data, 0);
@@ -561,7 +561,7 @@
 if (!db)
 return false;
 
-#if OPENSSL_VERSION_NUMBER >= 0x104fL
+#if OPENSSL_VERSION_NUMBER >= 0x1000L
 for (int i = 0; i < sk_OPENSSL_PSTRING_num(db.get()->data); i++) {
 const char ** current_row = ((const char **)sk_OPENSSL_PSTRING_value(db.get()->data, i));
 #else



Re: [squid-users] Capturing HTTPS traffic

2012-02-05 Thread Henrik Nordström
sön 2012-02-05 klockan 17:33 -0600 skrev James R. Leu:
 If squid is configure to use ICAP and the ICAP server supports
 RESMOD would the ICAP server be given the full response unencrypted?

In sslbump mode yes.

Regards
Henrik



Re: [squid-users] Capturing HTTPS traffic

2012-02-05 Thread Henrik Nordström
sön 2012-02-05 klockan 22:44 -0500 skrev PS:


 Is there a specific place where that temp certificate is located, or
 is it the same certificate that I generated using OpenSSL  and is
 provided to squid in the http_port option of the squid.conf?

See the sslcrtd_program option.

Regards
Henrik




Re: [squid-users] error processing the URL

2012-02-04 Thread Henrik Nordström
mån 2012-01-30 klockan 19:10 -0300 skrev Martin Nigoul:
 As we try to retrieve those files through our proxy we receive
 An error occurred on the server when processing the URL. Please
 contact the system administrator.
 If we configure one of our cache_peer parents as the proxy for the
 browsers the file is downloaded
 without problem.

A guess is that the site is using session cookies with embedded source
IP information.

Try setting prefer_direct off in squid.conf. This will make Squid more
insistent on using parents even when doing so makes no sense from a
caching perspective.

You can also use never_direct to force Squid to go via peers, and
cache_peer_access to limit which peers. Both of these allow you to tune
this for this site only if you like.

Alternatively try using always_direct for the site to not go via peers
at all, which also accomplishes the same goal of having a single source
IP for the session.
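
A small squid.conf sketch of the per-site variant (the peer name and domain
are placeholders):

  acl stickysite dstdomain .example.com
  cache_peer_access parent1 allow stickysite
  never_direct allow stickysite
  # or, to bypass the parents for this site instead:
  # always_direct allow stickysite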

Regards
Henrik



Re: [squid-users] Q: squid as proxy for OWA: authentication not passing through

2012-02-04 Thread Henrik Nordström
lör 2012-02-04 klockan 15:22 +1300 skrev Amos Jeffries:

 Sigh. Exchange is VERY sensitive to the nature of requests it receives. 
 I suspect very much that this URL re-writing is part of the problem.

Yes. You can not rewrite URLs in any manner when reverse proxying

  - Exchange
  - Most WebDAV servers
  - Or many other non-browser things

Not even port numbers in most cases.

And if doing https offload then you MUST enable proper support for
negotiating this to the web server, i.e. the front-end-https cache_peer
option when talking to Microsoft IIS/OWA, or special configuration at
the web server telling it that the requested URLs are really https://
even if received unencrypted by the web server.

 Start with re-considering *why* your Exchange server and Outlook clients 
 are not communicating the correct URLs between each other and what can 
 be done to their configuration to fix that.

In most cases it's a matter of

1. Configure the reverse proxy with vhost option. Works for https_port
as well. Or if you use defaultsite then this SHOULD be the actual
requested hostname normally requested by the clients, not the backend
server name.

2. Add the actual requested hostname as site name in the web server
configuration.

cache_peer takes care of routing the request to the right server, so it's
only a matter of making the web server recognize the requested host name
as valid for its content.
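
As a hedged sketch of the https-offload case described above (hostnames and
addresses are placeholders):

  https_port 443 accel vhost cert=/path/to/mail.example.com.pem
  cache_peer ip.of.owa.server parent 80 0 no-query originserver front-end-https=on login=PASS name=owa
  acl owa_sites dstdomain mail.example.com
  cache_peer_access owa allow owa_sites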

  A connection attempt through squid to the exchange server on a browser is 
  logged in access.log as follows
  https://squid's IP/testuser@exchange server/

Don't test using IP. Set up a proper hostname in DNS for the access. Or
in your local hosts file for testing only before updating DNS.

Regards
Henrik



Re: [squid-users] NTLM with a fall back to anonymous

2012-02-04 Thread Henrik Nordström
lör 2012-02-04 klockan 13:23 + skrev Jason Fitzpatrick:

 I was hoping that if a client failed to authenticate then it would be
 forwarded to the upstream and fall under what ever the default (un
 authorized) ruleset is, known risky sites etc would be getting
 filtered there,

Unfortunately HTTP does not work that way.

Clients not supporting authentication send requests without any
credentials at all. Proxies (and servers) wanting to see authentication
then reject the request with an "authentication required" error,
challenging the client to present valid credentials.

Clients supporting authentication also start out by sending the request
without any credentials at all, like above. The difference is only in how
the client reacts to the received error. If the client supports
authentication then it collects the needed user credentials and retries
the same request, but with user credentials this time.

If the credentials are invalid then the authentication fails, which in
most cases results in the exact same error as above, challenging the
user to enter the correct credentials.
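
Roughly, the exchange looks like this (a simplified sketch, most headers
omitted):

  GET http://example.com/ HTTP/1.1            (no credentials)

  HTTP/1.1 407 Proxy Authentication Required
  Proxy-Authenticate: Negotiate
  Proxy-Authenticate: Basic realm="proxy"

  GET http://example.com/ HTTP/1.1            (retried by a capable client)
  Proxy-Authorization: Basic <base64 of user:password>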

Regards
Henrik



Re: [squid-users] Any idea to configure squid as a reverse-proxy to work with IIS/SharePoint plus NTLM

2012-01-30 Thread Henrik Nordström
mån 2012-01-30 klockan 11:48 +0800 skrev kimi ge(巍俊葛):

 Could anyone give any suggestion to configure squid as a reverse-proxy
 to work with IIS/SharePoint plus NTLM?

The normal recommended setup should just work.

http_port 80 accel vhost
cache_peer ip.of.iss.server 80 0 no-query originserver

If it fails then please provide a little more data

* Version of Squid used
* What does access.log say?

Regards
Henrik



Re: [squid-users] Intercept requests and send to a different URL

2012-01-30 Thread Henrik Nordström
mån 2012-01-30 klockan 13:00 -0500 skrev Carter, David:
 I looked in the FAQ, but I'm sure even what to call what I'm looking
 for.  I saw entries about redirects, but I don't see how to apply them
 to what I need.  I want to use Squid to intercept requests from
 internal test machines and be able to point them to a particular batch
 of servers for testing.

You don't want redirects for this. Just request routing.

See cache_peer + cache_peer_access + never_direct.

Regards
Henrik



Re: [squid-users] Any idea to configure squid as a reverse-proxy to work with IIS/SharePoint plus NTLM

2012-01-30 Thread Henrik Nordström
tis 2012-01-31 klockan 11:38 +0800 skrev kimi ge(巍俊葛):

 1. squid 2.6.23

Please use Squid-2.7.STABLE9 if using Squid-2. Not sure if connection
pinning to peers (required for NTLM) works well in 2.6.23.

 http_port 192.85.142.88:80 accel defaultsite=usplsvulx104.elabs.eds.com
 cache_peer wtestsm1.asiapacific.hpqcorp.net parent 80 0 no-query originserver 
 name=main

 1327979985.763390 16.178.121.18 TCP_MISS/404 600 GET 
 http://usplsvulx104.elabs.eds.com/ - FIRST_UP_PARENT/main text/html

Does the web server have a site named usplsvulx104.elabs.eds.com and an
index page? The web server says that the page does not exist (404).

 2. squid 3.1.18

 http_port 192.85.142.88:80 accel defaultsite=usplsvulx104.elabs.eds.com 
 connection-auth=on
 cache_peer wtestsm1.asiapacific.hpqcorp.net parent 80 0 no-query originserver 
 name=main

 1327980594.156 72 16.212.0.105 TCP_MISS/503 4098 GET 
 http://usplsvulx104.elabs.eds.com/ - FIRST_UP_PARENT/main text/html

Hmm.. now the web server says 503 Service Unavailable. Very odd. The
request sent by Squid should be close to identical to the 2.6.23 case above.

Regards
Henrik



Re: [squid-users] problem with squid_ldap_group

2012-01-26 Thread Henrik Nordström
tor 2012-01-26 klockan 10:20 +0400 skrev CyberSoul:

 dn: CN=internetusers,OU=KNG-Services,DC=kng,DC=local
 member: CN=ldapreader,OU=KNG-Services,DC=kng,DC=local

member contains full LDAP DNs.
 Well, command for authorized by users I used is:
 /usr/lib/squid/squid_ldap_auth -R -D ldapreader@kng.local -w 12345678 \
 -b dc=kng,dc=local -f sAMAccountName=%s -h 192.168.4.100
 and it's work:
 ldapreader 12345678
 OK

Good. So you know how to look up users. Now reuse that in
squid_ldap_group as documented in its man page. The two are closely
related.

squid_ldap_group -R -D ldapreader@kng.local -w 12345678 \
-b dc=kng,dc=local -F sAMAccountName=%s -h 192.168.4.100 \
-f "(&(objectClass=group)(member=%s))"

Note the -F, which needs to be the same as the -f given to squid_ldap_auth.
This allows squid_ldap_group to locate the user object (DN), enabling it to
then look up DN-based group membership.

Regards
Henrik




Re: [squid-users] Re: Unable to forward this request at this time.

2012-01-25 Thread Henrik Nordström
ons 2012-01-25 klockan 08:50 -0800 skrev Luc Igert:
 Hi Amos, and thanks a lot for your answer.I Forgot to say I’m running as a
 Reverse Proxy with multiple backends, Squid 3.1
 
 What’s  confusing for me is the fact that www.xxx.ch is working, while
 backup.xxx.ch or wbbltest.xxx.ch aren’t.

So what cache_peer and cache_peer_access/cache_peer_domain rules do you
have?

You get this error if Squid can not find any acceptable cache_peer to
forward the request to, i.e. no alive cache_peer where
cache_peer_access/cache_peer_domain says it can forward the request.

Regards
Henrik



Re: [squid-users] how about releasing the major supported linux distros results? and what about dynamic content sites?

2012-01-23 Thread Henrik Nordström
ons 2012-01-04 klockan 12:48 +0200 skrev Eliezer Croitoru:

 the funny thing  is that fedora 16 with kernel 3.1.6 and squid 3.2.0.13 
 from the repo just works fine.

And it has nothing special for making Squid run at all, except not
mucking around with it and staying as close to upstream as possible.

Regards
Henrik



Re: [squid-users] Use parent proxy for some domains only

2011-12-30 Thread Henrik Nordström
sön 2011-12-25 klockan 17:07 +0200 skrev Eliezer Croitoru:

 acl proxy1 dstdomain secondproxy.com specialdomain1.com specialdomain2.com
 always_direct deny proxy1
 always_direct allow all
 never_direct allow proxy1

Or, clearer and easier to extend:

cache_peer_access secondproxy.com allow proxy1
never_direct allow proxy1

where cache_peer_access replaces cache_peer_domain.

You don't need to fiddle with always_direct here; never_direct allow
has higher priority than always_direct.

Note: If you have other cache_peer lines then remember to deny proxy1
from those.

Regards
Henrik



Re: [squid-users] Ldap secure user-authentication

2011-12-30 Thread Henrik Nordström
ons 2011-12-28 klockan 14:33 +1300 skrev Amos Jeffries:

 In order to move to the more secure auth methods usually requires a 
 config setting in the LDAP to enable support for secure authentication 
 tokens instead of a password. If you are lucky the LDAP server already 
 has that turned on and you only need to add other authentication LDAP 
 helpers to Squid.

To use Digest the LDAP tree needs to contain either

  a) plain-text passwords and allow the digest helper access to these
(very bad from a security perspective)

or

  b) Digest auth hashes specifically hashed for your proxy server realm,
and allow the Squid digest helper access to these. The needed password
hash is the Digest A1 hash, which is MD5(login:realm:password) where
the realm is the realm configured on the proxy.
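
For example, the A1 hash for a hypothetical user alice with password secret
in realm proxy can be computed with:

  printf 'alice:proxy:secret' | md5sum

and that hex value is what would be stored in the LDAP attribute.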

There are not many LDAP servers that fall into category 'a' above, for
obvious security reasons (but some do), and for 'b' you need to explicitly
configure how the LDAP server stores passwords to enable digest hashing,
and have each user change their password afterwards to allow the needed
hash to be stored in LDAP.

Note: The Digest A1 MD5 hash is security sensitive. If you add this to
your LDAP tree then also make sure the attribute is properly protected
only giving read access to Squid. As far as HTTP digest is concerned it
is equivalent to the password.

Regards
Henrik



Re: [squid-users] unable to connect to ssl site: google+

2011-12-30 Thread Henrik Nordström
tis 2011-12-27 klockan 22:56 +0100 skrev ftiaronsem:
 without problems. However I am unable to connect to google+
 https://plus.google.com, getting: The connection has timed out.

Maybe Google has finally enabled some 10+ year old, badly needed TCP
extensions to improve performance and your firewall is now falling over
in total confusion, dropping its packets on the floor?

That is namely the usual cause of unexpected "The connection has timed
out" issues where the same connection works when not going via the
proxy server.

Look for ECN, window scaling and to some extent PAWS.

The Linux TCP/IP stack by default enables all these features very
aggressively. Windows does not, and many Linux based web servers also
have ECN disabled and window scaling aggressively tuned down to avoid
broken firewalls.

Regards
Henrik



Re: [squid-users] enabling https 443 on vanilla squid -debian squeeze-

2011-12-30 Thread Henrik Nordström
ons 2011-12-28 klockan 15:10 +1300 skrev Amos Jeffries:

 On Debian yes, it must be re-compiled with --enable-ssl. The Debian 
 policy has problems with the way Squid (GPLv2+) and OpenSSL 
 (proprietary) licenses combine.

The OpenSSL license is not a proprietary license; it's a very liberal
free software license without copyleft, just not 100% GPL-compatible due
to an advertising clause.

Squid can lawfully be linked with OpenSSL when OpenSSL is provided as a
system library part of the operating system where Squid runs. But it's
not entirely trivial to define that boundary.

Regards
Henrik



Re: [squid-users] TCP_MEM_HIT long elapsed time

2011-12-24 Thread Henrik Nordström
tor 2011-12-22 klockan 13:10 +1300 skrev Amos Jeffries:

 Could also be a slow client (ie dialup modem).  68KB of data at dial 
 speeds of 1-2 KB/sec would take that long. Modern browsers open many 
 concurrent requests, which can drop modem speed down into that range 
 very easily.

Well, I would expect 68KB to fit in the transmit window in nearly all
cases, even for dialup clients.

Regards
Henrik



RE: [squid-users] After reloading squid3, takes about 2 minutes to serve pages?

2011-12-21 Thread Henrik Nordström
tis 2011-12-20 klockan 10:48 -0500 skrev Terry Dobbs:

 I am using Berkley DB for the first time, perhaps that's why it takes
 longer? Although, I don't really see what Berkley DB is doing for me as
 I am still using flat files for my domains/urls? Guess I should take
 this to the squidGuard list!

Please generate the DB files offline after updating the blacklist,
then issue a "squid -k rotate" to have Squid restart the helpers.

squidGuard starts very quickly if the databases have been properly
populated already, but will take a very long time to start up if not.
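
Something along these lines, assuming squidGuard's standard tools (adjust to
your installation):

  squidGuard -C all     # rebuild the DB files from the flat domain/url lists
  squid -k rotate       # have Squid restart the squidGuard helpers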

Regards
Henrik



Re: [squid-users] TCP_MEM_HIT long elapsed time

2011-12-21 Thread Henrik Nordström
ons 2011-12-21 klockan 18:47 +0100 skrev feralert:

 Maybe a dumb question: why does it take so long for some TCP_MEM_HITS
 to 'show up', for example i got this:
 
 Dec 21 17:37:15  42721 192.X.X.X TCP_MEM_HIT/200 68873 GET http://example.com

Possibly ACL processing needing to wait for something (dns lookup, auth
processing, external acl, ...?)

The time is the time between Squid reading the request and sending the
last piece of the response.

Regards
Henrik



RE: [squid-users] After reloading squid3, takes about 2 minutes to serve pages?

2011-12-21 Thread Henrik Nordström
ons 2011-12-21 klockan 18:44 + skrev Jenny Lee:
 
 It takes me a minute and half to reach full load when squid doing 100 req/sec 
 is sent a reconfigure. Squid barely serves anything during this time (but it 
 is functional). All my timeouts are low. It was not like this on 3.2.0.1.

How big is your on-disk cache?

Is there any swap activity on the server?

Regards
Henrik



Re: [squid-users] Re : [squid-users] Re : [squid-users] Anonymous FTP and login pass url based

2011-12-20 Thread Henrik Nordström
mån 2011-12-19 klockan 23:53 +1300 skrev Amos Jeffries:

 Do you have a trace from this server when requesting something from the 
 login-required area of the site?

If the requested URL contains login credentials then anonymous FTP login
SHOULD NOT be attempted.

Regards
Henrik



Re: [squid-users] integrating with wlc

2011-12-20 Thread Henrik Nordström
mån 2011-12-19 klockan 18:35 +0200 skrev E.S. Rosenberg:
 Hi all,
 We have a Cisco WLC controlling our local wireless network, I would
 like it for squid to know which user is associated with the IP of the
 wireless client, so that I can implement user based
 restrictions/freedoms for our wireless network as well.
 So far my searches haven't turned up anything useful so I was
 wondering if anyone here had made that link in the past.

Is it possible to somehow query the WLC or perhaps your radius
accounting server which user is logged on to which IP?

Regards
Henrik



Re: [squid-users] squid occupying 100% cpu at free time also

2011-12-20 Thread Henrik Nordström
tis 2011-12-20 klockan 14:02 +0530 skrev Benjamin:

 When i remove traffic from router to squid means that time, there is no 
 traffic on squid box and that time also i can see same 100% cpu 
 utilization in top command.

Sounds like a bug.

First step, upgrade to a current release. 3.1.10 is pretty dated by now
(a year to be exact). Current release is 3.1.17.

Then if you still see this, please run

   /path/to/sbin/squid -k debug ; sleep 5; /path/to/sbin/squid -k debug

then file a bug report at bugs.squid-cache.org describing the problem
and attach your cache.log.

Regards
Henrik



Re: [squid-users] integrating with wlc

2011-12-20 Thread Henrik Nordström
tis 2011-12-20 klockan 14:09 +0200 skrev E.S. Rosenberg:

 About the wlc I don't know for sure yet, I can probably create a
 script/program that when presented with an IP can convert it to a
 username on the Radius server...
 But I don't know how that would then interact with squid...
 Thanks,

You can then plug that into Squid via the external acl interface. See
external_acl_type.

  http://www.squid-cache.org/Doc/config/external_acl_type/
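
A hedged sketch of how such a helper could be hooked in (the helper name is
hypothetical; it would read one IP per line and answer OK user=NAME or ERR):

  external_acl_type wlc_user ttl=300 %SRC /usr/local/bin/wlc-ip2user
  acl wireless_known external wlc_user
  http_access allow wireless_known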

Regards
Henrik



Re: [squid-users] integrating with wlc

2011-12-20 Thread Henrik Nordström
tis 2011-12-20 klockan 15:37 +0100 skrev Sean Boran:
 It might be possible to send the WLC logs to a syslog server, where
 one could pipe them into a parser to extract the pairs needed and from
 there create an ACL for squid?

As soon as you can somehow query, from the Squid server, who the user at
IP X is, then you can plug this into Squid via external_acl_type,
providing the username to Squid for use in logs and access controls.

Squid does not care how you do this. All Squid cares about in this context
is being able to ask "I have IP X, who is the user?"

Regards
Henrik



Re: [squid-users] Squid logs not showing original client IP

2011-12-18 Thread Henrik Nordström
lör 2011-12-17 klockan 19:15 +0530 skrev Sekar Duraisamy:

 I have configured the log format with %{X-Forwarded-For}h . But in
 this field shows - . Not showing original client IP.

Is the load balancer adding a X-Forwarded-For header?

 How to configure the squid to find the original client IP in squid logs ?

How does the load balancer indicate the original client IP in the request
sent to Squid?

Regards
Henrik



Re: [squid-users] STABLE squid repo location?

2011-12-16 Thread Henrik Nordström
tor 2011-12-15 klockan 11:48 -0500 skrev Michael Altfield:

 I think I might have found it here (https://code.launchpad.net/~squid/squid/3.1),
 but I'm not sure if this is the STABLE repository. If it is, can someone
 please explicitly say so in the README of the repo or on the wiki
 (http://wiki.squid-cache.org/BzrInstructions). If not, please let me know
 where to find it.

The official source repository for Squid-3 is the bazaar repository at
bzr.squid-cache.org/squid3/ where you find 3.1 in branches/SQUID_3_1

But launchpad is an automatic mirror of the same, and contains exactly
the same information with just a slight delay, and much better
connectivity.

And as others have mentioned you can also view the changesets from our
web page, divided per release. This view is slightly filtered to hide
automatically derived changes with no impact on the code as such.

http://www.squid-cache.org/Versions/v3/3.1/changesets/SQUID_3_1_17.html
http://www.squid-cache.org/Versions/v3/3.1/changesets/SQUID_3_1_18.html

Any specific change you are looking for?

Regards
Henrik



Re: [squid-users] Session not transferred when redirected by a website

2011-12-16 Thread Henrik Nordström
fre 2011-12-16 klockan 12:50 +0700 skrev Widhiyanto, Projo:

 I have a problem with certain website that doesn't seem to maintain
 session when it is redirected after a login process. Login was
 successful, but once you got redirected, the session is lost - and you
 got logged out. However the problem is only seen if I am using a parent
 cache (which is a Squid proxy of my ISP).

One possible cause of this is if the site encodes the requesting IP in
the session, and you allow your first Squid to go direct, bypassing the
parent.

Setting prefer_direct off, or never_direct allow all, may help in such a
case. But if this is the cause then it's really a bug in the web site, as
the source IP may vary pretty randomly when requests are forwarded via a
mesh of proxies or when the client is roaming between different networks.
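
In squid.conf terms that is one (or both) of:

  prefer_direct off
  never_direct allow all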

Regards
Henrik



[squid-users] Re : [squid-users] Anonymous FTP and login pass url based

2011-12-16 Thread Henrik Nordström
Please try testing this with squidclient or another dumb http client.

The major browsers are all pretty braindead in different manners when it
comes to non-anonymous FTP URLs and can confuse matters greatly.

Regards
Henrik



Re: [squid-users] STABLE squid repo location?

2011-12-16 Thread Henrik Nordström
lör 2011-12-17 klockan 03:44 +0100 skrev Henrik Nordström:
 tor 2011-12-15 klockan 11:48 -0500 skrev Michael Altfield:
 
  I think I might have found it here (https://code.launchpad.net/~squid/squid/3.1),
  but I'm not sure if this is the STABLE repository. If it is, can someone
  please explicitly say so in the README of the repo or on the wiki
  (http://wiki.squid-cache.org/BzrInstructions). If not, please let me know
  where to find it.
 
 The official source repository for Squid-3 is the bazaar repository at
 bzr.squid-cache.org/squid3/ where you find 3.1 in branches/SQUID_3_1

bzr.squid-cache.org/bzr/squid3/ even,..

 But launchpad is an automatic mirror of the same, and contains exactly
 the same information with just a slight delay, and much better
 connectivity.
 
 And as others have mentioned you can also view the changesets from our
 web page, divided per release. This view is slightly filtered to hide
 automatic derived changes with no impact on the code as such.
 
 
 http://www.squid-cache.org/Versions/v3/3.1/changesets/SQUID_3_1_17.html
 
 http://www.squid-cache.org/Versions/v3/3.1/changesets/SQUID_3_1_18.html
 
 Any specific change you are looking for?
 
 Regards
 Henrik




Re: [squid-users] Squid 3.2.0.14 beta is available

2011-12-13 Thread Henrik Nordström
tis 2011-12-13 klockan 22:59 +1300 skrev Amos Jeffries:

 Squid has resolved the domain name (www.facebook.com) the client 
 (10.0.2.45) was supposedly contacting and determined that the IP 
 (66.220.147.33) the packet was going to does not belong to that domain name.
 
 Details about the alert and some things which can be done about it when 
 it appears are at 
 http://wiki.squid-cache.org/KnowledgeBase/HostHeaderForgery

Which can easily happen if the client and Squid are using different DNS
servers, as facebook and a number of other sites respond to DNS
differently based on the source of the DNS query, or even change their
answers randomly to aid load balancing.

facebook.com is very noticeable here, they have very many server
addresses, but each DNS response contains only one single address.

Regards
Henrik



Re: [squid-users] Squid 3.2.0.14 beta is available

2011-12-13 Thread Henrik Nordström
tis 2011-12-13 klockan 12:59 +0200 skrev Saleh Madi:

 Does policy-based routing cause the Host header forgery detected problem?

All forms of interception run into this.

The best cure is to get the browser configured to use the proxy. This
avoids the issue entirely.

See WPAD for one way to ease this.

Regards
Henrik



Re: [squid-users] SSL SESSION PARAMS poluting the cache log

2011-10-24 Thread Henrik Nordström
As said earlier this is printed only if you have set debug section 83 to
level 4 or higher.

grep debug_options /path/to/squid.conf


sön 2011-10-23 klockan 21:25 -0700 skrev Yucong Sun (叶雨飞):
 Hi, After a few version this still hasn't gone, my debug_options are
 default, which should be all,1 per manual. I'm compiling from the
 source on a ubuntu 10.04LTS
 
 
 Anyone else seeing this problem? 
 
 2011/8/29 Henrik Nordström hen...@henriknordstrom.net
 sön 2011-08-28 klockan 04:07 -0700 skrev Yucong Sun (叶雨飞):
  Hi,  after turning on https_port , I start to have these
 logs in
  cache.log , which is meaningless to have on a production
 server,
  anyway to turn it off?
 
  -BEGIN SSL SESSION PARAMETERS-
 
 
 What are your debug_options set to? This is only printed if
 you have
 enabled debug section 83 at level 4 or above.
 
 Regards
 Henrik
 
 
 




Re: [squid-users] SSL SESSION PARAMS poluting the cache log

2011-10-24 Thread Henrik Nordström
Which Squid versions have you tried, and are these standard Squid
versions or do they have any kind of patches applied?

sön 2011-10-23 klockan 23:28 -0700 skrev Yucong Sun (叶雨飞):
 As I said, there's no such setting in my config, I don't even have a
 debug_options in the config.
 
 2011/10/23 Henrik Nordström hen...@henriknordstrom.net:
  As said earlier this is printed only if you have set debug section 83 to
  level 4 or higher.
 
  grep debug_options /path/to/squid.conf
 
 
  sön 2011-10-23 klockan 21:25 -0700 skrev Yucong Sun (叶雨飞):
  Hi, After a few version this still hasn't gone, my debug_options are
  default, which should be all,1 per manual. I'm compiling from the
  source on a ubuntu 10.04LTS
 
 
  Anyone else seeing this problem?
 
  2011/8/29 Henrik Nordström hen...@henriknordstrom.net
  sön 2011-08-28 klockan 04:07 -0700 skrev Yucong Sun (叶雨飞):
   Hi,  after turning on https_port , I start to have these
  logs in
   cache.log , which is meaningless to have on a production
  server,
   anyway to turn it off?
  
   -BEGIN SSL SESSION PARAMETERS-
 
 
  What are your debug_options set to? This is only printed if
  you have
  enabled debug section 83 at level 4 or above.
 
  Regards
  Henrik
 
 
 
 
 
 




[squid-users] Re: SNMP Graphs

2011-09-26 Thread Henrik Nordström
sön 2011-09-25 klockan 15:25 + skrev Jenny Lee:

 Can someone who knows squid SNMP output devise some meaningful
 templates for us to be used in rrdtool or Cacti? I think it is such a
 waste to have all this info available yet nothing to use it from.

I have some rrdtool templates. They are not perfect and need a little more
work, but should collect most of the relevant data I think.

http://www.henriknordstrom.net/code/squid_statistics.tar.gz
with example output at
http://www.henriknordstrom.net/code/squid_statistics/

Let me see if I can find a more up-to-date copy.

Regards
Henrik



Re: [squid-users] Two authentication helpers in one instance

2011-08-30 Thread Henrik Nordström
tis 2011-08-30 klockan 14:19 +0200 skrev Rafal Zawierta:

 Is it possible to use dual authentication helpers in one squid3 instance.

Kind of, but only one of each authentication type.

 If user is in WinNT domain, he is authenticated against AD in negotiate mode.
 If user is not in in AD, then he is prompted for password.

Unfortunately that is not how browsers work.

The selection is done by the browser based on the capabilities of the
browser, not on whether the user is logged on to a domain. If the browser is
capable of performing Kerberos authentication then it will either use
the already logged in AD credentials or prompt the user for AD
credentials, verified by the negotiate auth helper. If the browser is
not capable of Kerberos authentication it will prompt for plain username
+ password authentication, validated by the basic auth helper.

 But next, I'd like to match all users that are authenticated with
 basic mode in separate acl. I'm able to use some regex with that
 usernames - for example guest_ prefix in username.
 
 Is it possible?

Yes. See proxy_auth and proxy_auth_regex acl types.
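
A rough sketch of such a setup (helper names and paths are examples and vary
between distributions):

  auth_param negotiate program /usr/lib/squid/squid_kerb_auth
  auth_param negotiate children 10
  auth_param basic program /usr/lib/squid/squid_ldap_auth -R -b dc=example,dc=com -f sAMAccountName=%s -h ldap.example.com
  auth_param basic children 5
  auth_param basic realm Proxy guest access

  acl guests proxy_auth_regex ^guest_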

Regards
Henrik



Re: [squid-users] Accelerating proxy not matching cgi files

2011-08-30 Thread Henrik Nordström
tis 2011-08-30 klockan 14:25 +0200 skrev Mateusz Buc:

 every server. Is squid capable of caching content which requires
 'basic' authentication?

Only if explicitly told to, and then without validating the
authentication.

Responses to requests with authentication are cached if either:

a) The server sends Cache-Control: public telling caches that the
content is public and do not really require authentication.

b) The ignore-auth option is used in squid.conf http(s)_port or refresh_pattern
directives to achieve the same effect as described above.
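
For example (a hedged sketch; ignore-auth is an HTTP violation, so only use
it where you know the content really is public):

  refresh_pattern -i /protected/ 0 20% 4320 ignore-auth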

 At the moment, client Cache-Control says: Cache-Control: max-age=0 
 and it doesn't send any 'If-Modified-Since headers.

That's not normal unless you are playing with the Reload button.

Note: In reverse proxies you can use the ignore-cc flag telling Squid to
ignore Cache-Control sent by clients. The reverse proxy is an extension
of your web server.

Regards
Henrik



Re: [squid-users] using both havp and dansguardian as cache_peer

2011-08-30 Thread Henrik Nordström
tis 2011-08-30 klockan 14:40 +0200 skrev webmas...@ch-lons.fr:
 I'd like to use squid with both havp and dansguardian as cache_peer.
 It seems I have only one cache_peer working at time.
 How can I use 2 cache_peer ?

You need to chain them together. You can place them in pretty much any
order you prefer, except that DG can't be last, as it needs a full-blown
proxy to forward requests to (a small config sketch follows the list below).

Havp -> DG -> Squid
Squid -> DG -> Havp
DG -> Squid -> Havp
DG -> Havp -> Squid
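
As a hedged sketch of the Squid side of such a chain (assuming the next hop,
havp or DG, listens on 127.0.0.1:8080; adjust to your ports):

  cache_peer 127.0.0.1 parent 8080 0 no-query default
  never_direct allow all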

Regards
Henrik



Re: [squid-users] using both havp and dansguardian as cache_peer

2011-08-30 Thread Henrik Nordström
tis 2011-08-30 klockan 16:45 +0200 skrev webmas...@ch-lons.fr:

 I just made this: Squid1 -> Havp -> DG -> Squid2.
 cache_peer is calling havp.
 I defined DG as Parent proxy of HAVP.
 And DG finally connect to Squid2.
 Is it correct ?

Looks fine to me.

 This is the only way I found for getting ntlm_auth working with squid.
 Eric.

probably.

ntlm/negotiate is odd, not following basic HTTP messaging rules.

Regards
Henrik




Re: [squid-users] what does Squid do if two files have the same content and different file name?

2011-08-29 Thread Henrik Nordström
mån 2011-08-22 klockan 09:54 +0800 skrev Raymond Wang:
 Hi, all:
 
   In our company, the business logic is common: different URLs may
 refer to the same content files. So in order to optimize the usage of
 memory, it would be better if Squid kept only one object cached when the
 content of several files is equal.
 
does squid support this feature? if so, how can I configure it?

You can squash all those identical files to the same URL within Squid by
using a URL rewriter. This tells Squid that requests for X1, X2, X3,
X4, ... are all equal to requests for Y.

This requires the rules on identical content to be extracted from the
server and implemented as a search/replace pattern.

Regards
Henrik



Re: [squid-users] Multiple Squid Instances

2011-08-29 Thread Henrik Nordström
ons 2011-08-24 klockan 15:16 +0530 skrev viswanathan sekar:

 Is squid IO bound or CPU bound ?

Depends on how it's being used, your system's I/O capabilities,
configuration and many other parameters.

The main relevant parameters are:
* Cache or no cache
* Forward or reverse proxy
* Type of cache_dir being used
* Amount of memory
* Size of cache
* Number of disks

Regards
Henrik



Re: [squid-users] Cache_peer with originserver

2011-08-29 Thread Henrik Nordström
mån 2011-08-29 klockan 15:04 +0530 skrev senthil kumar:

 When selecting a cache_peer among many peers, does a peer which has
 originserver have any preference or any special feature?

No. It simply tells Squid that this peer is a web server and expects
requests to be sent in web server format and not proxy format.

Note: many web servers accept both formats (actually required by
HTTP/1.1).

As Amos already mentioned, there are some additional minor twists, as
originserver also tells Squid that the peer is NOT an HTTP proxy, so it
can't send proxy-type requests to it.

All in all, the originserver flag basically only makes sense in reverse
proxy modes, and then only when forwarding requests to actual web
servers as peers. Normally those are parent peers to the reverse proxy.

Regards
Henrik



Re: [squid-users] SSL SESSION PARAMS poluting the cache log

2011-08-29 Thread Henrik Nordström
sön 2011-08-28 klockan 04:07 -0700 skrev Yucong Sun (叶雨飞):
 Hi,  after turning on https_port , I start to have these logs in
 cache.log , which is meaningless to have on a production server,
 anyway to turn it off?
 
 -BEGIN SSL SESSION PARAMETERS-

What are your debug_options set to? This is only printed if you have
enabled debug section 83 at level 4 or above.

Regards
Henrik



Re: [squid-users] RE: large config file issues?

2011-08-29 Thread Henrik Nordström
Basically the following per site:

https_port unique-ip:443 name=site_a cert=/path/to/cert.pem accel 
defaultsite=sitename.a
acl sites_a dstdomain sitename.a
cache_peer ip.of.web.server parent 443 0 name=server_a ssl no-query originserver
cache_peer_access server_a allow sites_a


But simplifications are possible if

* If there are wildcard certificates involved, enabling more than one
site per public ip:port defined by https_port (add vhost in that case)

* If using HTTP to the web servers terminating SSL in Squid. You can
then use host based vhosting on the web server to run many more sites
off the same ip:port which limits the number of cache_peer you need in
Squid.

* Alternatively if using wildcard certificates on the backend web
server, or ignoring certificate validation completely, enabling host
based vhosting on the backend web server while still using ssl. (using
the same protocol all the way makes some web server applications
happier)

mån 2011-08-29 klockan 11:26 -0400 skrev Daniel Alfonso:
 Any help would be largely appreciated.
 
 Need advice on what my config file should look like for 250+ Different SSL 
 Secured Sites
 
 Thank you :)
 
 From: Daniel Alfonso
 Sent: Tuesday, August 23, 2011 1:51 PM
 To: squid-users@squid-cache.org
 Subject: large config file issues?
 
 Hello, Squid noob here...
 
 I have about 250 or so different sites that I want to setup in SSL reverse 
 proxy mode
 I have a unique ip bound per site and the 250+ ips are responding on the 
 interface
 I am using the following template to build my config and running into parsing 
 issues (lines may wrap in email)
 
 
 http_port SQUIDSERVERIP:80 accel defaultsite=www.DOMAIN
 https_port SQUIDSERVERIP:443 accel cert=/certs/DOMAIN.crt 
 key=/certs/DOMAIN.key cafile=/certs/gd_bundle.crt defaultsite=www.DOMAIN
 cache_peer ORIGINSERVERIP parent 80 0 no-query originserver name=SITENAMEaccel
 acl SITENAMEacl dstdomain www.DOMAIN
 acl SITENAMEacl dstdomain DOMAIN
 cache_peer_access SITENAMEaccel allow SITENAMEacl
 http_access allow SITENAMEacl
 
 
 1 or 2 sites work ok, but at 1700+ lines full config does not work. I get 
 random parse errors which leads me to believe I'm not building this config as 
 efficiently as I could
 
 Any help would be greatly appreciated.
 
 Daniel Alfonso
 System Administrator




Re: [squid-users] about the cache and CARP

2011-08-16 Thread Henrik Nordström
tis 2011-08-16 klockan 16:54 -0400 skrev Carlos Manuel Trepeu Pupo:
 I want to make Common Address Redundancy Protocol or CARP with two
 squid 3.0 STABLE10 that I have, but here I found this question:
 
 If the main Squid with 40 GB of cache shutdown for any reason, then
 the 2nd squid will start up but without any cache.

Why will the second Squid start up without any cache?

If you are using CARP then the cache is sort of distributed over the
available caches, and the amount of cache you lose is proportional to
the amount of cache space that goes offline.

However, CARP routing in Squid-3.0 only applies when you have multiple
levels of caches. Still doable with just two servers but you then need
two Squid instances per server.

* Frontend Squids, doing in-memory cache and CARP routing to Cache
Squids
* Cache Squids, doing disk caching

When request routing is done 100% by CARP, you lose 50% of the cache
should one of the two cache servers go down.

There are also possible hybrid models where the cache gets duplicated
more among the cache servers, but I am not sure 3.0 can handle those.
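
As a sketch, the frontend instances would route to the cache instances
with something like this (host names and ports are hypothetical):

  cache_peer cache1.example.local parent 3129 0 carp no-query no-digest
  cache_peer cache2.example.local parent 3129 0 carp no-query no-digest
  never_direct allow all

The carp option is what makes Squid hash requests consistently over the
listed parents.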

Regards
Henrik



Re: [squid-users] NONE/501 in an https:// POST request

2011-01-24 Thread Henrik Nordström
mån 2011-01-24 klockan 18:44 +0100 skrev Ralf Hildebrandt:

  In the second case Squid and the server did not agree on the SSL
  protocol.
 
 I wonder what went wrong in that case.

Could be many things, unfortunately. But to be honest it's not worth
investigating in your case. You ended up in the http-https gatewaying
case because a broken application forgot to enable SSL when sending an
https request via the proxy. It's not the right action to have the
proxy mask this problem by wrapping the request in SSL at the proxy.

There are some valid uses of the http-https gatewaying capability, but
this is not one of them.

However, people using sslbump will run into the same problem quite
likely, and for that reason it may be worth investigating.

 I did that, disabled v2 but it wouldn't work anyway. But in the
 meantime they fixed their broken app :)

Good.

Regards
Henrik




Re: [squid-users] ecap adapter munging cached body

2011-01-24 Thread Henrik Nordström
mån 2011-01-24 klockan 17:46 +1300 skrev Amos Jeffries:

 AFAIK, that proper variant handling was not yet ported to squid-3. Only 
 in squid-2 right now.

Correct, but even the basic variant handling is 1-N. The difference is
that the basic mode does not merge equal responses, and each possible
request variation will cause a new copy in the cache.

 This identical behaviour is causing some problems with recent Chrome 
 using sdch encoding. Thus clashing with the gzip|deflate cached variant 
 from other browsers.

?

Regards
Henrik



RE: [squid-users] Squid 3.x very slow loading on ireport.cnn.com

2011-01-24 Thread Henrik Nordström
sön 2011-01-23 klockan 23:35 -0500 skrev Max Feil:

 If you look through the traces you'll notice that at some point Squid
 sends a TCP [FIN, ACK] right in the middle of a connection for
 seemingly no reason. 
 
 From the browser side it seems to be given no notification that the
 connection was closed (and indeed I can see no reason why it should be
 closed) so it seems to sit around doing nothing as it may have reached
 the max connections limit.

Odd.

Can you reproduce the problem? If so then it would be very helpful if
you could run Squid with full debug output enabled (squid -k debug)
and also capture the data with wireshark. Then send the collected data
to ftp://ftp.squid-cache.se/incoming/ and notify me.

Regards
Henrik



Re: [squid-users] Re: Squid + SSL + Safari

2011-01-24 Thread Henrik Nordström
mån 2011-01-24 klockan 12:09 -0600 skrev jam...@mail.milton.k12.wi.us:

 the CONNECT function and tries to block it but it still passes through.

What does access.log report?

Regards
Henrik



Re: [squid-users] Missing content-length header for POST and PUT

2011-01-24 Thread Henrik Nordström
tis 2011-01-25 klockan 02:01 +1300 skrev Amos Jeffries:

  But to be honest we do not really need to check that POST/PUT have a
  request entity. This is mostly a relic from way back when request
  entities were handled very special.
 
 
 Can I expect a patch soon then?

Sure. Revision 11172. Drops those method checks.

Also fixes a copy-paste adaptation bug related to this if
request_entities on is set in squid.conf.

Regards
Henrik



Re: [squid-users] Why is Cache-Control: max-age added to forwarded HTTP requests?

2011-01-24 Thread Henrik Nordström
mån 2011-01-24 klockan 10:44 -0500 skrev John Craws:
 Hi Amos,
 
 Thank you for your reply.
 
 I am wondering if squid should still be doing this if, as in my
 particular case, caching is disabled on the proxy instance.
 
 Based on my observations, it does.

It's been discussed from time to time if we should stop doing this, with
no final conclusion. But I think I agree that adding max-age adds more
confusion than it fixes.

The code in question is in http.cc,
HttpStateData::httpBuildRequestHeader(). Look for the "Add max-age only"
comment (line 1789 in trunk today).

Regards
Henrik



RE: [squid-users] Squid as Proxy for Exchange 2010‏

2011-01-24 Thread Henrik Nordström
mån 2011-01-24 klockan 19:52 + skrev smudly Quickhands:
 You are saying that I can use the same certificate on two servers by 
 following the instructions below?  Is that legal?

Sure. Perfectly fine, and commonly done in many situations.

- reverse proxy setups, like yours
- clustered servers
- standby servers
- etc..

Regards
Henrik



Re: [squid-users] Some pages loading very slow in 3.1.10 Stable

2011-01-24 Thread Henrik Nordström
mån 2011-01-24 klockan 18:39 -0200 skrev Marcus Kool:

 I did not find options to configure bind/named to ignore AAAA lookups either
 so I would love to see Squid have the new option.

It does.

a) If Squid is built without IPv6 support

b) If the host where Squid runs does not have IPv6 support at all.
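
For case (a), the relevant build-time switch (assuming a 3.1-era source
tree) is roughly:

  ./configure --disable-ipv6

After rebuilding, Squid should not issue AAAA lookups at all.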

Regards
Henrik




Re: [squid-users] SSL Stops responding

2011-01-23 Thread Henrik Nordström
lör 2011-01-22 klockan 12:16 -0500 skrev James P. Ashton:
 Does anyone have any thoughts on this?   I am not fond of the idea that both 
 squid instances stopped responding to SSL requests at the same time.

Is your OpenSSL up to date?

Regards
Henrik



RE: [squid-users] Squid 3.x very slow loading on ireport.cnn.com

2011-01-23 Thread Henrik Nordström
tor 2011-01-20 klockan 02:50 -0500 skrev Max Feil:
 Thanks. I am looking at the squid access.log and the delay is caused by
 a GET which for some reason does not result in a response from the
 server. Either there is no response or Squid is missing the response.
 After a 120 second time-out the page continues loading, but the end
 result may be malformed due to the object which did not load. 

I would take a peek at the traffic using wireshark to get some insight
in what is going on there.

Regards
Henrik



Re: [squid-users] Missing content-length header for POST and PUT

2011-01-23 Thread Henrik Nordström
fre 2011-01-21 klockan 05:45 +1300 skrev Amos Jeffries:

 empty? No. If they have no content length indicated they have to be 
 assumed as being infinite length transfers. HTTP specs require this 411 
 reply message.

Not quite. Requests without an entity are always headers-only. The
infinite-length interpretation only applies to responses.

 The client software is *supposed* to add a length and retry.

Yes.

But to be honest we do not really need to check that POST/PUT have a
request entity. This is mostly a relic from way back when request
entities were handled very special.

Regards
Henrik



Re: [squid-users] NONE/501 in an https:// POST request

2011-01-23 Thread Henrik Nordström
fre 2011-01-21 klockan 11:31 +0100 skrev Ralf Hildebrandt:
  1294685115.286  0 10.43.120.109 NONE/501 4145 POST 
  https://enis.eurotransplant.nl/donor-webservice/dpa?WDSL - HIER_NONE/- 
  text/html
 
 So, I enabled SSL using --enable-ssl and now I'm getting:
 
 1295605546.943313 141.42.231.227 TCP_MISS/503 4251 GET 
 https://enis.eurotransplant.nl/donor-webservice/dpa?WDSL - 
 HIER_DIRECT/194.151.178.174 text/html
 and the error output consists of the ERR_SECURE_CONNECT_FAIL error message

In both cases Squid received an https:// request unencrypted over plain
HTTP.

In the first case, as your Squid did not have SSL support, it could not
forward the request at all: it cannot wrap the unencrypted request in
SSL/TLS for forwarding to the requested server.

In the second case Squid and the server did not agree on the SSL
protocol.

If using this http-https gatewaying capability then you should
configure Squid to not use SSLv2. SSLv2 is considered broken beyond
repair these days. See sslproxy_options for how to tune this in Squid.
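
For example, a single squid.conf line along these lines disables SSLv2
on those outgoing connections:

  sslproxy_options NO_SSLv2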

Regards
Henrik



Re: [squid-users] ecap adapter munging cached body

2011-01-23 Thread Henrik Nordström
lör 2011-01-22 klockan 23:04 +1300 skrev Amos Jeffries:

 Squid caches only one of N variants so the expected behviour is that 
 each new variant is a MISS but becomes a HIT on repeated duplicate 
 requests until a new variant pushes it out of cache.

No, it caches all N variants seen if the origin response has Vary:

But not sure what happens with the gzip eCAP module in this regard.

Regards
Henrik



Re: [squid-users] ecap adapter munging cached body

2011-01-23 Thread Henrik Nordström
sön 2011-01-23 klockan 14:14 -0800 skrev Jonathan Wolfe:

 I'm using the values of asdf for a bogus Accept-Encoding value that
 shouldn't trigger gzipping, and gzip for when I actually want to
 invoke the module.  To be clear, the webserver isn't zipping at all.

Is the web server responding with Vary: Accept-Encoding?

 I can change the behavior of the webserver to not include Vary:
 Accept-Encoding for content meant to be cached by squid, but that
 results in responses of the cached (unzipped) version even for clients
 who accept zipped versions, once the cache is populated by a client
 not requesting a zipped version, and that defeats the point of the
 gzip module for me because I want to gzip cached content for clients
 that support it.

Sounds like the gzip eCAP module handles things in a bad manner. It
should add Vary, and its responses should be cacheable if the original
response is. It seems it does neither.
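
For reference, a cache-friendly gzipped response from such a module
would carry headers roughly like this (values are illustrative):

  HTTP/1.1 200 OK
  Content-Type: text/html
  Content-Encoding: gzip
  Vary: Accept-Encoding
  Cache-Control: public, max-age=3600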

Regards
Henrik



Re: [squid-users] What http headers required for squid to work?

2011-01-19 Thread Henrik Nordström
tis 2011-01-18 klockan 08:41 -0800 skrev diginger:

 Please tell me what http headers required in response for squid caching to
 work. 

At least one of
Last-Modified: datetime
Cache-Control: max-age=seconds
Expires: datetime

and no other headers which forbid caching, e.g. Cache-Control:
no-store / no-cache etc.
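
As an illustration, a response carrying headers roughly like the
following is cacheable (the values are made up):

  HTTP/1.1 200 OK
  Date: Wed, 19 Jan 2011 10:00:00 GMT
  Last-Modified: Tue, 18 Jan 2011 09:00:00 GMT
  Cache-Control: max-age=86400
  Content-Type: text/html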

Regards
Henrik



Re: [squid-users] Problem with squid_kerb_auth

2011-01-19 Thread Henrik Nordström
ons 2011-01-19 klockan 13:12 +0100 skrev Rafal Zawierta:

 authenticateNegotiateHandleReply: Error validating user via Negotiate.
 Error returned 'BH received type 1 NTLM token'

That means the client selected NTLM, not Kerberos. The squid_kerb_auth
helper only supports Kerberos. To support NTLM you also need to
configure NTLM authentication support in Squid; see the sketch below.
The Negotiate scheme as such supports, on the wire, any authentication
method Windows SPNEGO supports.
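
A sketch of squid.conf supporting both schemes (the helper paths and
the Kerberos service principal are assumptions and depend on your
installation):

  auth_param negotiate program /usr/lib/squid/squid_kerb_auth -s HTTP/proxy.example.com
  auth_param negotiate children 10
  auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
  auth_param ntlm children 10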

I can only guess why the client did not select Kerberos:
* It did not find the right Kerberos principal in the domain directory.
* It does not trust the requested proxy server for Kerberos authentication.
* Perhaps Kerberos auth failed somehow and it fell back on NTLM?

Regards
Henrik



Re: [squid-users] Problem with squid_kerb_auth

2011-01-19 Thread Henrik Nordström
tor 2011-01-20 klockan 01:26 +1300 skrev Amos Jeffries:

 As you can see the browser is sending an NTLM handshake instead of the 
 Kerberos token. The current Squid auth system does not support 
 Negotiate/NTLM only Negotiate/Kerberos but has no way to tell IE8 that.

Technically Squid does not care which SPNEGO (Negotiate scheme) method is
used, but squid_kerb_auth is Kerberos only.

In this case Negotiate/NTLM was used by the client (not to be confused
with bare NTLM).

Regards
Henrik



Re: [squid-users] size of squid binary

2011-01-18 Thread Henrik Nordström
fre 2011-01-14 klockan 21:06 +0200 skrev Eda FLORAT:

 if accept loosing debug symbols and get stripped binary, can we say
 that stripped binary of squid will perform better?

There is an almost non-existent difference in startup time for loading
the binary. Once started there is no difference in CPU or memory usage.

Regards
Henrik



Re: [squid-users] Too many objects in cache?

2011-01-18 Thread Henrik Nordström
mån 2011-01-17 klockan 11:39 -0800 skrev Michael Leong:
 Hi,
 My squid installation keeps crashing w/ the following error:
 
 assertion failed: filemap.c:78: fm->max_n_files <= FILEMAP_MAX_SIZE

Which is what the subject line says.

Each cache_dir can hold up to 2^24 objects.

Reduce the size of your cache_dir:s, dividing the disk over more
cache_dir entries.

You can have up to 31 cache_dir entries.
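
For example, instead of one huge cache_dir you could split the same
disk like this (paths, sizes and the aufs store type are hypothetical;
use whatever store type your build supports):

  cache_dir aufs /cache/squid1 200000 32 256
  cache_dir aufs /cache/squid2 200000 32 256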

Regards
Henrik



Re: [squid-users] Persistant Connection Timeout setting, Whats a good start?

2011-01-18 Thread Henrik Nordström
sön 2011-01-16 klockan 20:33 -0800 skrev fix:
 Persistant Connection Timeout setting, Whats a good start?
 
 I have mine set to 120, is that ok??

It's the default, and should be reasonable.

Regards
Henrik




Re: [squid-users] size of squid binary

2011-01-13 Thread Henrik Nordström
mån 2010-12-27 klockan 11:00 -0600 skrev Orestes Leal R.:
 I've built squid 3.1.10 on openbsd4.6 sucessfuly
 but my squid binary it's 40M of size, then I do a:

 it's this size by default normal?

Yes.

 squid gets a debug build by default?

Yes, just as is done for virtually any Open Source software you can
find.

The memory usage is just the stripped size and disk space is cheap
compared to the alternative.  Without the debug info you can't analyze
any crashes in a meaningful way.

I kind of like the way this is handled in Fedora and perhaps other
distributions as well, where binaries are packaged with the debug info
kept separately from the binary and installed only when needed. That
gives the best of both.
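
You can get the same effect by hand with binutils; a rough sketch:

  objcopy --only-keep-debug squid squid.debug
  strip --strip-debug --strip-unneeded squid
  objcopy --add-gnu-debuglink=squid.debug squid

gdb can then pick up squid.debug when analyzing a core from the
stripped binary.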

Regards
Henrik



Re: [squid-users] assertion failed in COSS

2011-01-13 Thread Henrik Nordström
tor 2011-01-13 klockan 15:38 +0300 skrev Hasanen AL-Bana:
 Hi,
 
 I am getting these every few minutes causing squid process to restart
 
 2011/01/08 17:30:20| assertion failed: coss/store_dir_coss.c:276:
 curstripe == storeCossFilenoToStripe(cs, e->swap_filen)

This is a bug.

A guess is that it's triggered by you trying to store largish objects
in COSS. COSS is designed for storing small objects only. I would not
recommend using a max-size larger than at most 64K for COSS, and would
recommend 32K.
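
A sketch of a COSS cache_dir capped accordingly (the path and size are
hypothetical, and exact option names can differ slightly between
Squid-2 releases):

  cache_dir coss /cache/coss/stripe 1024 max-size=32768 block-size=512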

Regards
Henrik



Re: [squid-users] Squid 3.2 - Dynamic SSL certs that aren't self-signed

2010-12-23 Thread Henrik Nordström
tor 2010-12-23 klockan 11:52 -0800 skrev Alex Ray:
 I've written an ad-hoc bash script, ssl_srtd_ca, that acts like the
 following, but doesn't work when dropped-in.  Is there some sort of
 spec on how ssl_crtd communicates?

src/ssl/ssl_crtd.cc is the closest to a spec I think.

Why did you need to write another helper? You can specify a signing CA
by using the cert= and key= options to http_port in combination with
generate-host-certificates.
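
A sketch of such an http_port line for 3.2 (the CA certificate and key
paths are hypothetical):

  http_port 3128 ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=4MB cert=/etc/squid/signing-ca.pem key=/etc/squid/signing-ca.key

The dynamically generated certificates are then signed by that CA
instead of being self-signed.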

Regards
Henrik



Re: [squid-users] Squid 3.2 - Dynamic SSL certs that aren't self-signed

2010-12-23 Thread Henrik Nordström
tor 2010-12-23 klockan 13:56 -0800 skrev Alex Ray:

 2010/12/23 13:54:55 kid1| Closing SSL FD 10 as lacking SSL context
 
 in the cache.log, and in a browser bounces between Looking Up and Waiting For.

That means it failed to dynamically generate the cert, and since there
was no default cert assigned by cert= it could not continue.

You should get a detailed trace if you enable debug section 33 at level 5.

Regards
Henrik



Re: [squid-users] SQUID + BGP

2010-12-23 Thread Henrik Nordström
tor 2010-12-23 klockan 18:02 -0300 skrev Daniel Echizen:
 HI, i need a best solution to implement a squid proxy in front of a
 bgp. I dont know the bgp system right now, but a was thinking in a
 tproxy or wccp.. any idea the best way to do this.. and also the best
 hardware for 100M of link.

How does BGP come into the picture? BGP is a routing protocol; Squid is
an HTTP proxy.

Regards
Henrik



Re: [squid-users] Modifying the log format

2010-12-22 Thread Henrik Nordström
ons 2010-12-22 klockan 12:37 -0800 skrev Volker-Yoblick, Adam:

 I'd like to further customize the time format of the local time (%tl) to be 
 %Y/%m/%d:%H:%M:%S %z , but the docs don't make it very clear on how to supply 
 the strftime format argument. Can someone explain what the correct syntax is? 
 I've tried a few ways, but none of them worked.

Condensed squid.conf.documented quote:

% format codes all follow the same basic structure where all but
the formatcode is optional.

% ["|[|'|#] [-] [[0]width] [{argument}] formatcode

Time related format codes:

tl  Local time. Optional strftime format argument
default %d/%b/%Y:%H:%M:%S %z


Which gives

%{%Y/%m/%d:%H:%M:%S %z}tl
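
Put into a complete logformat/access_log pair (the format name and log
path are arbitrary), that would be something like:

  logformat customtime %{%Y/%m/%d:%H:%M:%S %z}tl %6tr %>a %Ss/%03>Hs %<st %rm %ru %un %Sh/%<A %mt
  access_log /var/log/squid/access.log customtime

which is roughly the stock squid format with the timestamp swapped for
the local-time variant.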

 2. I'd like to change the timestamps in my cache_store_log as well. Is this 
 possible?

No.

But most people disable cache_store_log anyway. It adds very little
information unless you are debugging or analyzing specific store
interactions.

Regards
Henrik



Re: [squid-users] Delay pool question

2010-12-21 Thread Henrik Nordström
lör 2010-12-18 klockan 02:25 +1300 skrev Amos Jeffries:
 On 17/12/10 23:23, Nick Cairncross wrote:
  Hi List,
 
  A quick Delay Pool question..and a favour..
 
  Currently using basic Delay Pool configuration for users:
 
  delay_class 1 4
  delay_parameters 1 -1/-1 -1/-1 -1/-1 200/200
 
 Careful with those big numbers. They are in *bytes* and only the recent 
 versions of Squid can cope with 32-bit values.

Eum.. 200 is not a big number. Just 2^21, far from 32-bit limit.

Regards
Henrik



RE: [squid-users] Beta testers wanted for 3.2.0.1 - Changing 'workers' (from 1 to 2) is not supported and ignored

2010-11-27 Thread Henrik Nordström
fre 2010-11-26 klockan 21:08 + skrev Ming Fu:
 Ktrace shown that the bind failed because it try to open unix socket in 
 /usr/local/squid/var/run and it does not have the permission. So it is easy 
 to fix.
 
 After the permission is corrected, I run into other problem, here is the log 
 snip:
 
 2010/11/26 20:55:35 kid2| Starting Squid Cache version 3.2.0.3 for 
 amd64-unknown-freebsd8.1...
 2010/11/26 20:55:35 kid3| Starting Squid Cache version 3.2.0.3 for 
 amd64-unknown-freebsd8.1...
 2010/11/26 20:55:35 kid1| Starting Squid Cache version 3.2.0.3 for 
 amd64-unknown-freebsd8.1...
 2010/11/26 20:55:35 kid3| Set Current Directory to /usr/local/squid/var/cache
 2010/11/26 20:55:35 kid2| Set Current Directory to /usr/local/squid/var/cache
 2010/11/26 20:55:35 kid1| Set Current Directory to /usr/local/squid/var/cache

Each worker needs its own cache location.

http://www.squid-cache.org/Versions/v3/3.2/RELEASENOTES.html#ss2.1
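
In 3.2 the ${process_number} macro can be used for that; a sketch (path
and size hypothetical):

  cache_dir ufs /usr/local/squid/var/cache/${process_number} 10000 16 256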

Regards
Henrik



RE: [squid-users] 304 response preventing site from loading

2010-11-09 Thread Henrik Nordström
tor 2010-09-30 klockan 13:24 +1000 skrev Paul Freeman:

 However on further investigation, I don't think this is the case in this
 instance.  For some reason, the squid GET request to www.mhhe.com (IP
 12.26.55.139) takes a long time to be completed - approx. 2 minutes.  Some
 data is returned quickly but then there is a period where on my squid server
 I see a TCP Previous Segment lost then squid server sending Dup ACKs to
 www.mhhe.com and www.mhhe.com sending TCP Retransmissions for the same
 segment.  The Retransmission RTTs to ACK the one segment are at 0.2,4,8,16,32
 and 60 seconds.  After that segment has finally been received, the rest of
 the data is received OK. 

This smells like TCP window scaling issues in a firewall somewhere.

Try as a test:

  echo 0 > /proc/sys/net/ipv4/tcp_window_scaling

note that this is somewhat intrusive and reduces performance of TCP in
general, but is an easy way of testing for the problem.

Regards
Henrik



Re: [squid-users] Today's BZR checkout crashes repeatedly

2010-10-13 Thread Henrik Nordström
Build without --enable-cache-digests, or alternatively with 
--disable-cache-digests


- Ursprungsmeddelande -
 * Henrik Nordström hen...@henriknordstrom.net:
  tis 2010-10-12 klockan 21:48 +0200 skrev Ralf Hildebrandt:
  
   Program received signal SIGSEGV, Segmentation fault.
   0x08183e82 in refreshCheck (entry=<value optimized out>,
   request=<value optimized out>, delta=<value optimized out>) at
   refresh.cc:292
   292                  request->flags.fail_on_validation_err = 1;
   #0  0x08183e82 in refreshCheck (entry=<value optimized out>,
   request=<value optimized out>, delta=<value optimized out>) at
   refresh.cc:292
   #1  0x08184a1d in refreshCheckDigest (entry=0x0, delta=3600) at refresh.cc:506
   #2  0x0819ea29 in storeDigestAddable (datanotused=0x0) at store_digest.cc:256
  
  Looks like a problem related to store digests. 
  
  File a bug report, 
 
 Did that simultaneously.
 
  and meanwhile try if the error goes away if you disable store digest
  support.
 
 How?
 
 -- 
 Ralf Hildebrandt
     Geschäftsbereich IT | Abteilung Netzwerk
     Charité - Universitätsmedizin Berlin
     Campus Benjamin Franklin
     Hindenburgdamm 30 | D-12203 Berlin
     Tel. +49 30 450 570 155 | Fax: +49 30 450 570 962
     ralf.hildebra...@charite.de | http://www.charite.de
            



Re: [squid-users] Basic questions - Forward proxy, reverse proxy, squid performance

2010-10-13 Thread Henrik Nordström
ons 2010-10-13 klockan 08:36 -0700 skrev cachenewbie:
 Hi - In a transparent mode, is there any protocol and functional
 difference between squid running in forward mode and reverse mode ? i.e.
 other than talking to a dedicated backend, is reverse proxy doing the same
 thing as forward proxy w.r.t to HTTP caching proxy functionality

There is very little functional difference between the proxy modes.

transparent - NAT integration. Accept origin server requests.

reverse - Accept origin server requests. Require the use of a peer by
default.

normal - Accept proxy requests.

 Also are there any good performance numbers with latest version of Squid
 3.1.6?

None that I know of. 

  I've read posts comparing nginx and Squid citing that former is
 better than latter - but I am curious if the differences still persist with
 the latest event driven model in 3.1.6 ? For forward transparent proxy mode,
 has someone evaluated the pros and cons of Squid/Nginx ? 

Nginx is probably faster than Squid, but lacks in other ways.

Regards
Henrik




Re: [squid-users] Squid for android

2010-10-12 Thread Henrik Nordström
mån 2010-10-11 klockan 17:07 -0500 skrev Luis Daniel Lucio Quiroz:
 Helo
 
 just wondering if someone has packe squid in android phones ARM5+

Quite unlikely, but it should be possible. Squid currently isn't the
easiest to cross-compile, I am afraid.

But I do have Squid running in Nokia N900. And I know it also runs on a
couple other ARM devices such as sheeva plug computers etc.

Regards
Henrik



Re: [squid-users] Squid for android

2010-10-12 Thread Henrik Nordström
ons 2010-10-13 klockan 11:15 +0800 skrev Jeff Peng:
 2010/10/13 Henrik Nordström hen...@henriknordstrom.net:
 
 
  But I do have Squid running in Nokia N900.
 
 How did you make that work Henrik?

I compiled it using gcc as usual. I only had to disable optimizations
with -O0, as the GCC version used for Maemo seems to fail when
optimizations are enabled.
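
Roughly (a sketch; any other configure options you need go on the same
line):

  CFLAGS=-O0 CXXFLAGS=-O0 ./configure
  make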

Maemo is almost a full Debian, so the environment is very familiar to a
Linux user/admin.

Regards
Henrik



Re: [squid-users] How does mgr:mem headers match columns?

2010-09-29 Thread Henrik Nordström
sön 2010-09-26 klockan 23:11 +0800 skrev Kaiwang Chen:
 Hi all,
 
 Looks like mgr:mem in squid 3.1.6 mainly contains 19 columns of data.
 What are the corresponding 19 headers? The following is a copy of
 mgr:mem output with HTTP reponse headers removed.
 
 Current memory usage:

Header

 Pool Obj Size   Chunks
  Allocated   In Use
   IdleAllocations
 Saved   Rate
  (bytes)KB/chobj/ch (#)  usedfreepart
   %Frag   (#) (KB)high (KB)   high (hrs)  %Tot
 (#)  (KB)high (KB)   high (hrs)  %alloc (#)  (KB)
   high (KB)  (#)  %cnt%vol   (#)/sec

Data

 mem_node 4136
   1816806 7338193 734352544.61
 96.520  1816591 7337325 7343525 44.61   99.988
  215 869 12687   231439642   0.713   14.164  162.627
 Short Strings  40
   2067947 80780   92855   323.31  1.062
 2067236 80752   92855   323.31  99.966  711 28  1115
  -2147483648 60.643  11.646  15521.504

[...]


If you use cachemgr.cgi then the table will be nicely formatted so that
header and data are aligned in a readable table.

The format is tab separated columns.
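
If you want the raw report outside cachemgr.cgi, something like this
works (assuming the squidclient tool from the same installation):

  squidclient mgr:mem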

Regards
Henrik



Re: [squid-users] Re: Again with winbindd_privileged, sometimes Ensure permissions on /var/db/samba/winbindd_privileged are set correctly

2010-09-29 Thread Henrik Nordström
ons 2010-09-29 klockan 15:19 +0400 skrev c0re:

 And that's true. I need to change group to squid to
 winbindd_privileged  AND winbindd_privileged/pipe.
 Trying to figure out on to how to ask winbind to make it's pipe with
 another group like winbind_priv... winbind makes it root:wheel by
 default.

You set the permissions on the folder where the pipe is.

Should be

   750  root:winbind

and your cache_effective_user should be a member of the winbind group.
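
For example, on a Linux-style system (the group name and Squid's
effective user are assumptions; BSD equivalents differ slightly):

  groupadd winbind
  chgrp winbind /var/db/samba/winbindd_privileged
  chmod 750 /var/db/samba/winbindd_privileged
  usermod -a -G winbind squid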

Regards
Henrik



Re: [squid-users] Re: Again with winbindd_privileged, sometimes Ensure permissions on /var/db/samba/winbindd_privileged are set correctly

2010-09-29 Thread Henrik Nordström
ons 2010-09-29 klockan 16:13 +0400 skrev c0re:
 eh...
 
 There is no winbind/samba and etc group. No samba/winbind user.
 
 I guess I need to configure samba to use some different group like
 winbind, add this group to system.

No need to configure samba. Just add the group and assign it group
ownership of the winbind_privileged folder.

 But I can't find configuration setting that forces winbind to use
 winbind as group, not wheel.

wheel is fine for winbind. Permissions are controlled at the
winbind_privileged folder level, not on the pipe within it.

Regards
Henrik


