Re: [squid-users] This one site (virk.dk) doesn't work through Squid

2012-02-05 Thread Brian Andersen
Damn, that is f'ed up. And it is a huge Danish site used by almost
every single company in Denmark.

Thank you for your breakdown of the problem

Cheers,
Brian

2012/2/4 Amos Jeffries squ...@treenet.co.nz:
 On 2/02/2012 10:27 p.m., Per Jessen wrote:

 Brian Andersen wrote:

 Hi, I have squid running on an Ubuntu server with shorewall. I am using
 the default squid config files and I have only blocked one site (which
 isn't virk.dk). All sites work perfectly, except http://virk.dk. If I
 do not redirect my traffic through Squid it works perfectly.

 Can anyone here please check that site (it is a public company site in
 Denmark), and maybe enlighten me on what settings I have to change to
 get it to work.

 It doesn't work here either - to start with, I've blacklisted it:

 acl virkdk dstdomain .virk.dk
 cache deny virkdk

 I'm not sure if that works; I'm pretty certain I see this message in the
 log on every first attempt to access http://virk.dk:

 Invalid chunk header '#037#213#010'


 Aha. That would be one of the problems.
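(An aside on what those escaped bytes likely mean, read as octal: they match the gzip magic number, i.e. a gzip stream appears to be arriving where Squid expects a chunk-size line. This is my reading of the log, not something stated in the thread.)

```cpp
// The three escaped bytes from Per's log line, read as C octal literals,
// compared against the gzip file magic (0x1f 0x8b) plus the deflate
// compression-method byte (0x08).
const unsigned char logBytes[]  = {037, 0213, 010};
const unsigned char gzipMagic[] = {0x1f, 0x8b, 0x08};
```

If that reading is right, the server is handing Squid gzip-compressed data without the framing its headers advertise.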

 I've just run a few tests.

 The server seems to be very broken.

 When HTTP/1.1 clients send it an invalid request (missing Host) it works
 fine. WTF?

 When HTTP/1.1 clients send it a valid request it responds with
 Transfer-Encoding headers stating that the response is chunked-encoded twice
 (two layers to decode).
  BUT... the response is only chunked once.

 When HTTP/1.0 clients send it any request it still responds with
 Transfer-Encoding headers.
  * Only one encoding is indicated, BUT HTTP/1.0 clients do not support
 chunked encoding and MUST NOT be sent such headers.
  * On top of that mess, the body is not actually encoded.
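To make the two-layer point concrete, here's a toy sketch (illustrative C++, not Squid internals) of what one layer of chunked framing adds. Two Transfer-Encoding: chunked headers promise chunk(chunk(body)) on the wire; this server sends only chunk(body):

```cpp
#include <sstream>
#include <string>

// Wrap a payload in one layer of chunked transfer-coding:
// a single hex-sized chunk followed by the zero-length terminator chunk.
std::string chunk(const std::string &body) {
    std::ostringstream out;
    out << std::hex << body.size() << "\r\n" << body << "\r\n"
        << "0\r\n\r\n";
    return out.str();
}
```

A client that honours both headers strips one layer and then tries to parse the raw HTML as another chunk-size line, which is exactly the kind of failure Squid reports.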


 'GET /cms/render/live/da/sites/virk/home.html HTTP/1.0
 Host: virk.dk
 User-Agent: squidclient/3.3
 Accept: */*
 Connection: close

 '
 Resolving... virk.dk
 Connecting... virk.dk(213.174.73.30)
 Connected to: virk.dk (213.174.73.30)
 HTTP/1.1 200 OK
 Set-Cookie: JSESSIONID=E2059352BD9CAA154835BE95F9597AF2; Path=/; HttpOnly
 Server: Apache-Coyote/1.1
 Expires: Wed, 09 May 1979 05:30:00 GMT
 Cache-Control: no-cache, no-store, must-revalidate, proxy-revalidate,
 max-age=0
 Pragma: no-cache
 Transfer-Encoding: chunked ---  Problem #1:  HTTP/1.0 client getting
 chunked header.
 Vary: Accept-Encoding
 Date: Sat, 04 Feb 2012 00:46:04 GMT
 P3P: CP=IDC DSP COR ADM DEVi TAIi PSA PSD IVAi IVDi CONi HIS OUR IND CNT
 Content-Type: text/html;charset=UTF-8
 Connection: close

 --- Problem #2:   no chunked encoding.
 <!DOCTYPE html PUBLIC ...>
  ...



 'GET /cms/render/live/da/sites/virk/home.html HTTP/1.1
 Host: virk.dk
 User-Agent: squidclient/3.3
 Accept: */*
 Connection: close

 '
 Resolving... virk.dk
 Connecting... virk.dk(213.174.73.30)
 Connected to: virk.dk (213.174.73.30)
 HTTP/1.1 200 OK
 Set-Cookie: JSESSIONID=53C47E3818BC600A142F935214BB8CCA; Path=/; HttpOnly
 Server: Apache-Coyote/1.1
 Expires: Wed, 09 May 1979 05:30:00 GMT
 Cache-Control: no-cache, no-store, must-revalidate, proxy-revalidate,
 max-age=0
 Pragma: no-cache
 Transfer-Encoding: chunked ---  NOTE: first encoding: the body is encoded
 using chunked
 Vary: Accept-Encoding
 Date: Sat, 04 Feb 2012 00:59:54 GMT
 P3P: CP=IDC DSP COR ADM DEVi TAIi PSA PSD IVAi IVDi CONi HIS OUR IND CNT
 Content-Type: text/html;charset=UTF-8
 Transfer-Encoding: chunked ---  NOTE: second encoding: output of the first
 encoding is encoded using chunked.
 -- Problem #3: RFC 2616 requires that chunked MUST NOT have another
 encoding applied on top of it (it must be the last encoding). First encoding
 was chunked.
 Connection: close

 2000 ---  NOTE: this is what chunked encoding looks like in HTTP/1.1
 ---  Problem #4: the inner layer of chunking does not exist
 <!DOCTYPE html PUBLIC ...>
 ...

 Amos


Re: [squid-users] NTLM with a fall back to anonymous

2012-02-05 Thread Jason Fitzpatrick
Hi Henrik..

it is never easy is it ;0)

Looks like I will be maintaining whitelists for the foreseeable future!

Thanks for the reply

Jay

2012/2/4 Henrik Nordström hen...@henriknordstrom.net:
 Sat 2012-02-04 at 13:23 +0000, Jason Fitzpatrick wrote:

 I was hoping that if a client failed to authenticate then it would be
 forwarded to the upstream and fall under what ever the default (un
 authorized) ruleset is, known risky sites etc would be getting
 filtered there,

 Unfortunately HTTP does not work that way.

 Clients that do not support authentication send requests without any
 credentials at all. Proxies (and servers) that want authentication
 then reject the request with an "authentication required" error,
 challenging the client to present valid credentials.

 Clients that do support authentication also start out by sending the
 request without any credentials, as above. The difference is only in how
 the client reacts to the received error. If the client supports
 authentication, it collects the needed user credentials and retries
 the same request, this time with the credentials attached.

 If the credentials are invalid then authentication fails, which in
 most cases results in the exact same error as above, challenging the
 user to enter the correct credentials.
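The exchange described above, where both kinds of client start identically and only the reaction to the challenge differs, can be sketched as a toy state machine (hypothetical names, not a real HTTP library):

```cpp
#include <string>

// Toy proxy: challenges any request lacking valid credentials with 407.
int proxyRespond(const std::string &credentials) {
    if (credentials == "valid-ticket")
        return 200;     // authenticated: request serviced
    return 407;         // challenge: Proxy Authentication Required
}

// Toy client: always sends the first request bare; only an auth-capable
// client reacts to the 407 by retrying with credentials attached.
int clientFetch(bool supportsAuth, const std::string &stored) {
    int status = proxyRespond("");          // first request: no credentials
    if (status == 407 && supportsAuth)
        status = proxyRespond(stored);      // retry with collected credentials
    return status;                          // a non-auth client just sees the 407
}
```

There is no point in the flow where a failed or absent login "falls through" to an anonymous ruleset; the challenge simply repeats.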

 Regards
 Henrik




--

The only difference between saints and sinners is that every saint
has a past while every sinner has a future. 
— Oscar Wilde


[squid-users] SSLBump SSL error

2012-02-05 Thread Alex Crow

Hi Amos/All,

I am running a 3.2 snapshot in production (with a 2.7 as a fallback) 
with ssl-bump and dynamic cert generation. For some SSL sites, we are 
getting the following in cache.log:


2012/02/05 10:23:03 kid1| fwdNegotiateSSL: Error negotiating SSL 
connection on FD 33: error:00000000:lib(0):func(0):reason(0) (5/0/0)


and a

The system returned: (71) Protocol error

from squid in the browser.

One example I know can reproduce this every time is:

https://applyonline.abbeynational.co.uk/olaWeb/OLALogonServlet?action=prepare&application=OnlineBankingRegistrationServlet&js=on

which is the Register link from Santander's online banking logon page 
(no one can log on to their Santander banking either, and we see the same 
in the logs).


we have also had to exclude the following domains from bumping for the 
same reason:


.threadneedle.co.uk
.santander.co.uk
.bankline.rbs.com
.socgen.com
.mandg.co.uk

Other SSL sites bump fine so I'm not sure what is happening here.

Cheers

Alex





Re: [squid-users] Capturing HTTPS traffic

2012-02-05 Thread James R. Leu
I do not have sslbump working yet, but as I understand it the packets
on the wire are always encrypted.  The only place the information exists
in decrypted form is in squid's memory.  Just think of squid as a bridge
between two SSL streams.

On Sun, Feb 05, 2012 at 02:12:44PM -0500, PS wrote:
 I tried using ssldump and tshark and I can't seem to get this working. I am 
 using squid's private key to try to decrypt the traffic.
 
 The connection goes from the client (192.168.2.2) to squid server 
 (192.168.2.1) on port 3128. If I understand correctly, the client establishes 
 a connection with squid on port 3128 and then squid establishes a connection 
 with https://www.gmail.com on port 443.
 
 Shouldn't I be able to decrypt the connection between the client and the 
 squid server in order to see the traffic that is being sent to gmail?
 
 On Feb 3, 2012, at 2:08 PM, Alfonso Alejandro Reyes Jimenez 
 aare...@scitum.com.mx wrote:
 
  Sorry. SSLDUMP is like tcpdump but for ssl; it works on layer 3 and has 
  nothing to do with squid. That's what we use.
  
  Regards.
  
  
  
  -----Original Message-----
  From: PS [mailto:packetst...@gmail.com] 
  Sent: Friday, 3 February 2012 12:56 p.m.
  To: Alfonso Alejandro Reyes Jimenez
  CC: squid-users@squid-cache.org
  Subject: Re: [squid-users] Capturing HTTPS traffic
  
  Could you please be a little more specific? Is there something else called 
  ssldump that I am supposed to use?
  
  This is what my config looks like. I am currently using ssl_bump.
  
  
  acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
  acl localnet src 172.16.0.0/12 # RFC1918 possible internal network
  acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
  acl localnet src fc00::/7 # RFC 4193 local private network range
  acl localnet src fe80::/10 # RFC 4291 link-local (directly plugged) machines
  acl SSL_ports port 443
  acl Safe_ports port 80 # http
  acl Safe_ports port 21 # ftp
  acl Safe_ports port 443 # https
  acl Safe_ports port 70 # gopher
  acl Safe_ports port 210 # wais
  acl Safe_ports port 1025-65535 # unregistered ports
  acl Safe_ports port 280 # http-mgmt
  acl Safe_ports port 488 # gss-http
  acl Safe_ports port 591 # filemaker
  acl Safe_ports port 777 # multiling http
  acl CONNECT method CONNECT
  http_access allow localhost manager
  http_access deny manager
  http_access deny !Safe_ports
  http_access deny CONNECT !SSL_ports
  http_access allow localnet
  http_access allow localhost
  http_access deny all
  http_port 3128 ssl-bump generate-host-certificates=on 
  dynamic_cert_mem_cache_size=4MB cert=/usr/local/squid/ssl_cert/squid.pem
  always_direct allow all
  ssl_bump allow all
  sslproxy_cert_error allow all
  sslproxy_flags DONT_VERIFY_PEER
  coredump_dir /usr/local/squid/var/cache/squid
  refresh_pattern ^ftp: 1440 20% 10080
  refresh_pattern ^gopher: 1440 0% 1440
  refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
  refresh_pattern . 0 20% 4320
  logformat squid %ts.%03tu %6tr %>a %Ss/%03>Hs %<st %rm %ru %un %Sh/%<A %mt
  access_log /usr/local/squid/var/logs/access.log squid
  
  Thanks for the quick response!
  
  On Feb 3, 2012, at 1:20 PM, Alfonso Alejandro Reyes Jimenez wrote:
  
  Hi.
  
  If you have the certificate information you may use ssldump to decode the 
  information. I hope this helps.
  
  
  Regards.
  
  -----Original Message-----
  From: PS [mailto:packetst...@gmail.com]
  Sent: Friday, 3 February 2012 12:11 p.m.
  To: squid-users@squid-cache.org
  Subject: [squid-users] Capturing HTTPS traffic
  
  Hello,
  
  I am currently running the following version of Squid:
  
  Squid Cache: Version 3.2.0.14-20120202-r11500 configure options:  
  '--enable-ssl' '--enable-ssl-crtd'
  
  I configured it so that certs are generated on the fly and I am able to 
  get to HTTPS websites without getting a certificate warning.
  
  I want to do a packet capture of all HTTPS traffic while in cleartext. I 
  would think that it can be done on the Squid box. Is that possible?
  
  If I use tcpdump on the Squid box, I only see the encrypted traffic. Do I 
  have to recompile Squid with another configuration option to be able to do 
  what I want to do?
  
  Thanks
  

-- 
James R. Leu
j...@mindspring.com




[squid-users] squid + sslbump compile errors

2012-02-05 Thread James R. Leu
I'm trying to compile squid with sslbump support.
As I understand it this means adding:

--enable-ssl
--enable-ssl-crtd

to the configure command line.

I'm using:
squid from bzr (11997)
openssl-1.0.0g-1
gcc-4.7.0-0.10

I get the following errors:

ufs/store_dir_ufs.cc: In member function 'virtual void 
UFSSwapDir::statfs(StoreEntry&) const':
ufs/store_dir_ufs.cc:321:55: error: unable to find string literal operator 
'operator"" PRIu64'
ufs/store_dir_ufs.cc: In member function 'virtual void 
UFSSwapDir::dump(StoreEntry&) const':
ufs/store_dir_ufs.cc:1348:41: error: unable to find string literal operator 
'operator"" PRIu64'

I was able to 'resolve' the above by using %jd instead
of the PRIu64

certificate_db.cc: In member function ‘void Ssl::CertificateDb::load()’:
certificate_db.cc:455:1: error: ‘index_serial_hash_LHASH_HASH’ was not declared 
in this scope
certificate_db.cc:455:1: error: ‘index_serial_cmp_LHASH_COMP’ was not declared 
in this scope
certificate_db.cc:458:1: error: ‘index_name_hash_LHASH_HASH’ was not declared 
in this scope
certificate_db.cc:458:1: error: ‘index_name_cmp_LHASH_COMP’ was not declared in 
this scope
certificate_db.cc: In member function ‘void Ssl::CertificateDb::deleteRow(const 
char**, int)’:
certificate_db.cc:490:39: error: cannot convert ‘stack_st_OPENSSL_PSTRING*’ to 
‘_STACK* {aka stack_st*}’ for argument ‘1’ to ‘void* sk_delete(_STACK*, int)’
certificate_db.cc:499:13: error: ‘LHASH’ was not declared in this scope
certificate_db.cc:499:20: error: ‘fieldIndex’ was not declared in this scope
certificate_db.cc: In member function ‘bool 
Ssl::CertificateDb::deleteInvalidCertificate()’:
certificate_db.cc:520:46: error: cannot convert ‘stack_st_OPENSSL_PSTRING*’ to 
‘const _STACK* {aka const stack_st*}’ for argument ‘1’ to ‘int sk_num(const 
_STACK*)’
certificate_db.cc:521:79: error: cannot convert ‘stack_st_OPENSSL_PSTRING*’ to 
‘const _STACK* {aka const stack_st*}’ for argument ‘1’ to ‘void* sk_value(const 
_STACK*, int)’
certificate_db.cc: In member function ‘bool 
Ssl::CertificateDb::deleteOldestCertificate()’:
certificate_db.cc:544:30: error: cannot convert ‘stack_st_OPENSSL_PSTRING*’ to 
‘const _STACK* {aka const stack_st*}’ for argument ‘1’ to ‘int sk_num(const 
_STACK*)’
certificate_db.cc:551:65: error: cannot convert ‘stack_st_OPENSSL_PSTRING*’ to 
‘const _STACK* {aka const stack_st*}’ for argument ‘1’ to ‘void* sk_value(const 
_STACK*, int)’
certificate_db.cc: In member function ‘bool 
Ssl::CertificateDb::deleteByHostname(const string)’:
certificate_db.cc:568:46: error: cannot convert ‘stack_st_OPENSSL_PSTRING*’ to 
‘const _STACK* {aka const stack_st*}’ for argument ‘1’ to ‘int sk_num(const 
_STACK*)’
certificate_db.cc:569:79: error: cannot convert ‘stack_st_OPENSSL_PSTRING*’ to 
‘const _STACK* {aka const stack_st*}’ for argument ‘1’ to ‘void* sk_value(const 
_STACK*, int)’

-- 
James R. Leu
j...@mindspring.com




Re: [squid-users] Capturing HTTPS traffic

2012-02-05 Thread PS
If that's the case, would there be any possible way for me to get the decrypted 
packets?


On Feb 5, 2012, at 2:37 PM, James R. Leu wrote:

 I do not have sslbump working yet, but as I understand it the packets
 on the wire are always encrypted.  The only place the information exists
 in decrypted form is in squid's memory.  Just think of squid as a bridge
 between two SSL streams.
 
 On Sun, Feb 05, 2012 at 02:12:44PM -0500, PS wrote:
 I tried using ssldump and tshark and I can't seem to get this working. I am 
 using squid's private key to try to decrypt the traffic.
 
 The connection goes from the client (192.168.2.2) to squid server 
 (192.168.2.1) on port 3128. If I understand correctly, the client 
 establishes a connection with squid on port 3128 and then squid establishes 
 a connection with https://www.gmail.com on port 443.
 
 Shouldn't I be able to decrypt the connection between the client and the 
 squid server in order to see the traffic that is being sent to gmail?
 
 On Feb 3, 2012, at 2:08 PM, Alfonso Alejandro Reyes Jimenez 
 aare...@scitum.com.mx wrote:
 
  Sorry. SSLDUMP is like tcpdump but for ssl; it works on layer 3 and has 
  nothing to do with squid. That's what we use.
 
 Regards.
 
 
 
  -----Original Message-----
  From: PS [mailto:packetst...@gmail.com] 
  Sent: Friday, 3 February 2012 12:56 p.m.
  To: Alfonso Alejandro Reyes Jimenez
  CC: squid-users@squid-cache.org
  Subject: Re: [squid-users] Capturing HTTPS traffic
 
 Could you please be a little more specific? Is there something else called 
 ssldump that I am supposed to use?
 
 This is what my config looks like. I am currently using ssl_bump.
 
 
  acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
  acl localnet src 172.16.0.0/12 # RFC1918 possible internal network
  acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
  acl localnet src fc00::/7 # RFC 4193 local private network range
  acl localnet src fe80::/10 # RFC 4291 link-local (directly plugged) machines
  acl SSL_ports port 443
  acl Safe_ports port 80 # http
  acl Safe_ports port 21 # ftp
  acl Safe_ports port 443 # https
  acl Safe_ports port 70 # gopher
  acl Safe_ports port 210 # wais
  acl Safe_ports port 1025-65535 # unregistered ports
  acl Safe_ports port 280 # http-mgmt
  acl Safe_ports port 488 # gss-http
  acl Safe_ports port 591 # filemaker
  acl Safe_ports port 777 # multiling http
  acl CONNECT method CONNECT
  http_access allow localhost manager
  http_access deny manager
  http_access deny !Safe_ports
  http_access deny CONNECT !SSL_ports
  http_access allow localnet
  http_access allow localhost
  http_access deny all
  http_port 3128 ssl-bump generate-host-certificates=on 
  dynamic_cert_mem_cache_size=4MB cert=/usr/local/squid/ssl_cert/squid.pem
  always_direct allow all
  ssl_bump allow all
  sslproxy_cert_error allow all
  sslproxy_flags DONT_VERIFY_PEER
  coredump_dir /usr/local/squid/var/cache/squid
  refresh_pattern ^ftp: 1440 20% 10080
  refresh_pattern ^gopher: 1440 0% 1440
  refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
  refresh_pattern . 0 20% 4320
  logformat squid %ts.%03tu %6tr %>a %Ss/%03>Hs %<st %rm %ru %un %Sh/%<A %mt
  access_log /usr/local/squid/var/logs/access.log squid
 
 Thanks for the quick response!
 
 On Feb 3, 2012, at 1:20 PM, Alfonso Alejandro Reyes Jimenez wrote:
 
 Hi.
 
  If you have the certificate information you may use ssldump to decode the 
  information. I hope this helps.
 
 
 Regards.
 
  -----Original Message-----
  From: PS [mailto:packetst...@gmail.com]
  Sent: Friday, 3 February 2012 12:11 p.m.
  To: squid-users@squid-cache.org
  Subject: [squid-users] Capturing HTTPS traffic
 
 Hello,
 
 I am currently running the following version of Squid:
 
 Squid Cache: Version 3.2.0.14-20120202-r11500 configure options:  
 '--enable-ssl' '--enable-ssl-crtd'
 
 I configured it so that certs are generated on the fly and I am able to 
 get to HTTPS websites without getting a certificate warning.
 
 I want to do a packet capture of all HTTPS traffic while in cleartext. I 
 would think that it can be done on the Squid box. Is that possible?
 
 If I use tcpdump on the Squid box, I only see the encrypted traffic. Do I 
 have to recompile Squid with another configuration option to be able to do 
 what I want to do?
 
 Thanks
 
 
 -- 
 James R. Leu
 j...@mindspring.com



Re: [squid-users] SSLBump SSL error

2012-02-05 Thread Henrik Nordström
Sun 2012-02-05 at 17:52 +0000, Alex Crow wrote:

 One example I know can reproduce this every time is:
 
 https://applyonline.abbeynational.co.uk/olaWeb/OLALogonServlet?action=prepare&application=OnlineBankingRegistrationServlet&js=on

That's a broken server. It requires the initial client hello handshake to
be SSL2-compatible, but then requires an immediate protocol upgrade to
SSL3 or TLSv1, and it fails if the initial handshake is already SSL3 or
TLSv1. Somewhat current versions of OpenSSL by default disable all use of
SSLv2 due to numerous weaknesses in the SSLv2 protocol, and as a result
normally send an SSL3 client hello handshake.

It's likely to hit problems with some newer browsers as well, as SSL/TLS
security is being tightened up.

A workaround is to set ciphers to 'ALL:!COMPLEMENTOFDEFAULT' which
somehow magically enables SSLv2 again. But it's not a very good idea as
it may also enable some SSLv2 related attacks.
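If anyone needs those sites bumped anyway, the workaround would presumably go into squid.conf along these lines (an untested sketch; `sslproxy_cipher` sets the cipher string used for Squid's outgoing bumped connections, and as noted this weakens security):

```
# Re-enable the SSLv2-compatible hello for server connections (risky):
sslproxy_cipher ALL:!COMPLEMENTOFDEFAULT
```

Excluding the affected domains from bumping, as Alex already does, is the safer option.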

Regards
Henrik



Re: [squid-users] squid + sslbump compile errors

2012-02-05 Thread Henrik Nordström
Sun 2012-02-05 at 14:09 -0600, James R. Leu wrote:

 I get the following errors:
 
 ufs/store_dir_ufs.cc: In member function 'virtual void 
 UFSSwapDir::statfs(StoreEntry&) const':
 ufs/store_dir_ufs.cc:321:55: error: unable to find string literal operator 
 'operator"" PRIu64'

What compiler and operating system are you compiling Squid on?


 I was able to 'resolve' the above by using %jd instead
 of the PRIu64

%jd? Should be %lld

and compat/types.h should automatically define it as suitable if not
defined by the compiler headers.
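For what it's worth, that "string literal operator" message from gcc 4.7 usually points at C++11 literal-suffix parsing: `"%"PRIu64` with no space is now read as a user-defined literal named PRIu64. Adding the space restores ordinary string concatenation. A minimal illustration (my reading of the error, not necessarily Squid's exact code):

```cpp
#include <cinttypes>   // PRIu64 (also pulls in <cstdint> for uint64_t)
#include <cstdio>
#include <string>

std::string formatU64(uint64_t n) {
    char buf[32];
    // "%"PRIu64 (no space) is a user-defined literal to a C++11 compiler
    // such as gcc 4.7; "%" PRIu64 is plain literal concatenation and
    // still expands to the correct conversion specifier for uint64_t.
    std::snprintf(buf, sizeof buf, "%" PRIu64, n);
    return buf;
}
```

That would also explain why switching to a hand-written %jd "resolved" it: the macro, and thus the adjacent-literal parse, disappears.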

 certificate_db.cc: In member function ‘void Ssl::CertificateDb::load()’:
 certificate_db.cc:455:1: error: ‘index_serial_hash_LHASH_HASH’ was not 
 declared in this scope

Hm.. fails for me as well. Please try the attached patch.

Regards
Henrik

=== modified file 'src/ssl/certificate_db.cc'
--- src/ssl/certificate_db.cc	2012-01-20 18:55:04 +0000
+++ src/ssl/certificate_db.cc	2012-02-05 23:35:46 +0000
@@ -445,7 +445,7 @@
         corrupt = true;
 
     // Create indexes in db.
-#if OPENSSL_VERSION_NUMBER >= 0x104fL
+#if OPENSSL_VERSION_NUMBER >= 0x1000L
     if (!corrupt && !TXT_DB_create_index(temp_db.get(), cnlSerial, NULL, LHASH_HASH_FN(index_serial), LHASH_COMP_FN(index_serial)))
         corrupt = true;
 
@@ -484,7 +484,7 @@
 void Ssl::CertificateDb::deleteRow(const char **row, int rowIndex)
 {
     const std::string filename(cert_full + "/" + row[cnlSerial] + ".pem");
-#if OPENSSL_VERSION_NUMBER >= 0x104fL
+#if OPENSSL_VERSION_NUMBER >= 0x1000L
     sk_OPENSSL_PSTRING_delete(db.get()->data, rowIndex);
 #else
     sk_delete(db.get()->data, rowIndex);
@@ -492,7 +492,7 @@
 
     const Columns db_indexes[]={cnlSerial, cnlName};
     for (unsigned int i = 0; i < countof(db_indexes); i++) {
-#if OPENSSL_VERSION_NUMBER >= 0x104fL
+#if OPENSSL_VERSION_NUMBER >= 0x1000L
         if (LHASH_OF(OPENSSL_STRING) *fieldIndex = db.get()->index[db_indexes[i]])
             lh_OPENSSL_STRING_delete(fieldIndex, (char **)row);
 #else
@@ -513,7 +513,7 @@
         return false;
 
     bool removed_one = false;
-#if OPENSSL_VERSION_NUMBER >= 0x104fL
+#if OPENSSL_VERSION_NUMBER >= 0x1000L
     for (int i = 0; i < sk_OPENSSL_PSTRING_num(db.get()->data); i++) {
         const char ** current_row = ((const char **)sk_OPENSSL_PSTRING_value(db.get()->data, i));
 #else
@@ -538,14 +538,14 @@
     if (!db)
         return false;
 
-#if OPENSSL_VERSION_NUMBER >= 0x104fL
+#if OPENSSL_VERSION_NUMBER >= 0x1000L
     if (sk_OPENSSL_PSTRING_num(db.get()->data) == 0)
 #else
     if (sk_num(db.get()->data) == 0)
 #endif
         return false;
 
-#if OPENSSL_VERSION_NUMBER >= 0x104fL
+#if OPENSSL_VERSION_NUMBER >= 0x1000L
     const char **row = (const char **)sk_OPENSSL_PSTRING_value(db.get()->data, 0);
 #else
     const char **row = (const char **)sk_value(db.get()->data, 0);
@@ -561,7 +561,7 @@
     if (!db)
         return false;
 
-#if OPENSSL_VERSION_NUMBER >= 0x104fL
+#if OPENSSL_VERSION_NUMBER >= 0x1000L
     for (int i = 0; i < sk_OPENSSL_PSTRING_num(db.get()->data); i++) {
         const char ** current_row = ((const char **)sk_OPENSSL_PSTRING_value(db.get()->data, i));
 #else


Re: [squid-users] Capturing HTTPS traffic

2012-02-05 Thread Henrik Nordström
Sun 2012-02-05 at 17:33 -0600, James R. Leu wrote:
 If squid is configured to use ICAP and the ICAP server supports
 RESPMOD, would the ICAP server be given the full response unencrypted?

In sslbump mode yes.

Regards
Henrik
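So the capture point would be an ICAP service rather than the wire. A hypothetical squid.conf sketch (the service name and URL are placeholders; an ICAP server listening there in RESPMOD would receive the bumped responses in clear):

```
icap_enable on
icap_service clearResp respmod_precache bypass=0 icap://127.0.0.1:1344/respmod
adaptation_access clearResp allow all
```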



Re: [squid-users] Capturing HTTPS traffic

2012-02-05 Thread PS
I'm not very familiar with ICAP, but I would think that this could be done via 
ICAP since it can be used to send the unencrypted data to an AV server.

Victor Pineiro


On Feb 5, 2012, at 6:39 PM, Henrik Nordström hen...@henriknordstrom.net wrote:

 Sun 2012-02-05 at 17:33 -0600, James R. Leu wrote:
 If squid is configured to use ICAP and the ICAP server supports
 RESPMOD, would the ICAP server be given the full response unencrypted?
 
 In sslbump mode yes.
 
 Regards
 Henrik
 


[squid-users] ncsa_auth credentials issue

2012-02-05 Thread zongo saiba
Hi, 

I have been using squid 3.1.15 with FreeBSD 8 with ncsa_auth and no issues. 
I updated squid to 3.1.18 and FreeBSD to version 9. Now squid is complaining 
that my credentials are not valid anymore. This affects credentials from 
clients using either Firefox or Safari, running either OS X or Linux. 

I have no trace in the logs telling me that I have an issue anywhere. I have 
been scouring the internet for an answer but to no avail. Any help is much 
appreciated. 
I have tried different forms of passwd entries myself; thought I would give 
that a try, but to no avail.

Thanks for your help

PS: I can post config file if required

zongo

Re: [squid-users] ncsa_auth credentials issue

2012-02-05 Thread Amos Jeffries

On 6/02/2012 2:01 p.m., zongo saiba wrote:

Hi,

I have been using squid 3.1.15 with freebsd 8 with ncsa_authen and no issues. 
Updated squid to 3.1.18 and freebsd to version 9. Now squid is complaining that 
my credentials are not valid anymore


How? The particular message and the method of delivering it are important 
when dealing with authentication.


Web page error? Popup dialog? and/or cache.log message?

Amos


RE: [squid-users] POST method when using squid_kerb_auth and sending Yahoo mail attachment

2012-02-05 Thread Hank Disuko

Thanks Amos, 

What's happening is quite similar to the details described in the 
aforementioned Firefox bug filing.


When the attach file function is started in the Yahoo Mail compose message 
window and a file is selected from the user's desktop filesystem, a new HTTP 
POST operation is initiated to squid. This is a new tcp session entirely.


The POST operation is a form served by host sp1.attach.mail.yahoo.com using a 
Shockwave Flash user-agent - so I'm assuming the browser itself sits this one 
out. Here's the first little bit of the request, it's followed by form-data 
such as filename and content-type etc.


POST 
http://sp1.attach.mail.yahoo.com/ca.f431.mail.yahoo.com/ya/upload_with_cred?-- 
HTTP/1.1
Accept: text/*
Content-Type: multipart/form-data; 
boundary=--cH2ae0gL6KM7ei4ei4ei4Ij5Ij5KM7
User-Agent: Shockwave Flash
Host: sp1.attach.mail.yahoo.com
Content-Length: 719794
Proxy-Connection: Keep-Alive
Pragma: no-cache
Cookie: 
B=dgrausd7a344rb=4d=vku6LippYFR6PRpZokl3s5qyCUJklnhtfiFfs=pti=A6MbHqjIfHzX9QIh5CDC;
 


 

Squid responds to this initial POST operation with the predictable 
TCP_DENIED/407 Cache Access Denied message:

from access.log:
 
Sun Feb 5 22:29:16 2012 3 172.16.130.22 TCP_DENIED/407 5626 POST 
http://sp1.attach.mail.yahoo.com/ca.f431.mail.yahoo.com/ya/upload_with_cred? - 
NONE/- text/html


HTTP/1.0 407 Proxy Authentication Required

Server: squid/3.1.11

Mime-Version: 1.0

Date: Mon, 06 Feb 2012 03:29:16 GMT

Content-Type: text/html

Content-Length: 5206

X-Squid-Error: ERR_CACHE_ACCESS_DENIED 0

Vary: Accept-Language

Content-Language: en

Proxy-Authenticate: Negotiate

X-Cache: MISS from localhost

X-Cache-Lookup: NONE from localhost:3128

Via: 1.0 localhost (squid/3.1.11)

Connection: keep-alive



 
Squid actually serves up the full 407 Denied webpage, but it's not presented 
to the user.  Instead, the Yahoo Flash user-agent seems to trample on and 
attempts to send the attachment anyway, without first re-sending the request 
with the credentials required to access squid.  I can see the pdf being 
uploaded to the squid server, but squid just ignores it.  This manifests as a 
hanging upload window for the user.

 

I guess I need to know where to look in order to find out why the request is 
not re-sent using the proper credentials.  Is it the Yahoo user-agent?  Is it 
the browser?

 

Thanks,

Hank


 


 


 Date: Sat, 4 Feb 2012 18:39:23 +1300
 From: squ...@treenet.co.nz
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] POST method when using squid_kerb_auth and sending 
 Yahoo mail attachment
 
 On 4/02/2012 12:46 p.m., Hank Disuko wrote:
  Hello folks,
 
  I'm using squid 3.1.11-1 on Ubuntu Server 11. I am
  using /usr/lib/squid3/squid_kerb_auth to auth against a Windows 2008
  directory.
 
  I am unable to upload attachments to emails when using the *new* Yahoo! 
  Mail interface. The old interface seems to work fine.
 
  I've seen this problem reported around the internet. These older posts 
  reveals some insight:
 
  http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-NTML-and-auth-problems-with-POST-td2255704.html
 
 This is a well known problem with the NTLM design. Kerberos was re-designed 
 to avoid this. Since you are apparently using the Negotiate protocol with 
 Negotiate/kerberos helpers, it is not relevant.
 
 
  http://www.squid-cache.org/mail-archive/squid-users/200506/0158.html
 
 ditto here.
 
  I made a POST_whitelist.txt for .yahoo.com and uploads work fine. But 
  this is an ugly workaround.
 
  More recently, someone also experiencing this issue filed a Firefox bug. 
  But I'm quite sure it's not a Firefox issue:
 
  https://bugzilla.mozilla.org/show_bug.cgi?id=679519
 
  Any better fix for this out there?
 
 The bug reported to firefox seems to be about Basic authentication, 
 which is also irrelevant.
 
 To provide any more help than that, we will need to know exactly what is 
 going on in your system: what is being requested from Squid, what Squid 
 is responding with, anything Squid logs about the transaction, and how 
 it is configured.
 
 Amos

Re: [squid-users] Capturing HTTPS traffic

2012-02-05 Thread Henrik Nordström
Sun 2012-02-05 at 22:44 -0500, PS wrote:


 Is there a specific place where that temp certificate is located, or
 is it the same certificate that I generated using OpenSSL  and is
 provided to squid in the http_port option of the squid.conf?

See the sslcrtd_program option.

Regards
Henrik




Re: [squid-users] POST method when using squid_kerb_auth and sending Yahoo mail attachment

2012-02-05 Thread Amos Jeffries

On 6/02/2012 5:31 p.m., Hank Disuko wrote:

Thanks Amos,

What's happening is quite similar to the details described in the 
aforementioned Firefox bug filing.


Of course. Authentication in HTTP has a flow of 4+ steps:
1) --> client request
2) <-- server challenge (401 or 407 response)
3) --> client request w/ credentials
4) <-- server success/fail response
...

This is the same for all authentication protocols, possibly with a loop 
repeating 3 and 4 until a suitable set of credentials is agreed on or the 
client gives up.


* The firefox bug was about firefox not sending Basic auth protocol 
credentials properly when challenged.
* So far you have been talking around the edges of something that sounds 
like a client not sending Kerberos auth protocol credentials correctly 
when challenged,
  or possibly you misconfiguring a Kerberos helper to validate 
non-Kerberos credentials.


The user watching gets to see only that the auth worked, a popup 
appeared, or the forbidden error page appeared. They are not forced to 
see what protocols are in use or how many retries were made.




When the attach file function is started in the Yahoo Mail compose message 
window and a file is selected from the user's desktop filesystem, a new HTTP POST 
operation is initiated to squid. This is a new tcp session entirely.


This would be step (1) above.




The POST operation is a form served by host sp1.attach.mail.yahoo.com using a Shockwave Flash 
user-agent - so I'm assuming the browser itself sits this one out. Here's the first little bit of 
the request, it's followed by form-data such as filename and content-type 
etc.


Aha. Now we are getting places. The first item is to check whether Shockwave 
Flash supports the Kerberos protocol you are requiring of it. Chances 
are Shockwave does but the applet it is running does not. It is very 
common to find web apps which cannot do auth even when SDKs like Flash 
and Java have long provided APIs to do all the difficult parts.




POST 
http://sp1.attach.mail.yahoo.com/ca.f431.mail.yahoo.com/ya/upload_with_cred?-- 
HTTP/1.1
Accept: text/*
Content-Type: multipart/form-data; 
boundary=--cH2ae0gL6KM7ei4ei4ei4Ij5Ij5KM7
User-Agent: Shockwave Flash
Host: sp1.attach.mail.yahoo.com
Content-Length: 719794
Proxy-Connection: Keep-Alive
Pragma: no-cache
Cookie: 
B=dgrausd7a344rb=4d=vku6LippYFR6PRpZokl3s5qyCUJklnhtfiFfs=pti=A6MbHqjIfHzX9QIh5CDC;


Yes, definitely a step (1) client request with no credentials.

However, the thing to note here is that this POST request is using a 
simple HTTP format without the HTTP/1.1 chunked encoding or Expect 
features. That is fine, but it means that Content-Length bytes of data 
MUST be transmitted for the body, regardless of what the server responds with.
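Had the agent used the Expect feature mentioned above, it could have held the body back until authentication settled. A sketch of what such a request would look like (illustrative only, not what Yahoo's uploader actually sends):

```
POST http://sp1.attach.mail.yahoo.com/ca.f431.mail.yahoo.com/ya/upload_with_cred HTTP/1.1
Host: sp1.attach.mail.yahoo.com
Expect: 100-continue
Content-Length: 719794

(body follows only after a "100 Continue"; a 407 at this point lets the
agent retry with credentials without first wasting the ~700 KB upload)
```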





Squid responds to this initial POST operation with the predictable TCP_DENIED/407 
Cache Access Denied message:

from access.log:

Sun Feb 5 22:29:16 2012 3 172.16.130.22 TCP_DENIED/407 5626 POST 
http://sp1.attach.mail.yahoo.com/ca.f431.mail.yahoo.com/ya/upload_with_cred? - 
NONE/- text/html

HTTP/1.0 407 Proxy Authentication Required

Server: squid/3.1.11

Mime-Version: 1.0

Date: Mon, 06 Feb 2012 03:29:16 GMT

Content-Type: text/html

Content-Length: 5206

X-Squid-Error: ERR_CACHE_ACCESS_DENIED 0

Vary: Accept-Language

Content-Language: en

Proxy-Authenticate: Negotiate

X-Cache: MISS from localhost

X-Cache-Lookup: NONE from localhost:3128

Via: 1.0 localhost (squid/3.1.11)

Connection: keep-alive



So far everything is perfectly correct.




Squid actually serves up the full 407 Denied webpage, but it's not presented 
to the user.
That is correct. Displaying the error page is optional. With Kerberos 
authentication the client SHOULD be able to locate the credentials 
silently in the background without bothering the user at all.



   Instead, the Yahoo Flash user-agent seems to trample on and attempts 
to send the attachment anyway, without first re-sending the request with the 
credentials required to access squid.


This is correct. It MUST do so. It has instructed Squid that 
Content-Length: 719794 bytes are following the headers. Squid will 
read and discard it all, then the connection will become free for the 
keep-alive features to re-use.



   I can see the pdf being uploaded to the squid server, but squid just ignores 
it.


Good. That means Squid is working.


   This manifests as a hanging upload window to the user.


Maybe, maybe not.  Flash is expected to locate credentials after the 407 
and repeat the POST request with them attached.
If it cannot locate credentials it is expected to produce some form of 
error about that failure.


A non-changing upload window could just mean the app is waiting while 
the first POST is transmitted and discarded and the Kerberos credentials 
are being located. It may start to display things if it ever gets to the 
point of doing a POST with credentials.






I guess I need to know where to look in order to find out why the request is