RE: [squid-users] Allow normal user to run squid reconfigure command

2012-03-07 Thread Maqsood Ahmad


Hello Sebastian,

Thanks for the help. I have allowed the user rwx on cachemgr but did not 
have any success.

squid: ERROR: Could not send signal 1 to process 874: (1) Operation not 
permitted


Also, I do not want to give the user su or sudo rights. 
Actually, I want to allow one of our night-shift people to add/remove IPs from 
a file and reconfigure the Squid process in case of emergency. 
He is not a member of the proxy team.

I think you understand my point.


Maqsood Ahmad






 Date: Tue, 6 Mar 2012 15:17:10 -0300
 From: basureroseb...@gmail.com
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] Allow normal user to run squid reconfigure command

 On 3/6/2012 11:11 AM, Maqsood Ahmad wrote:
  Hi All,
 
  Can somebody please help? I want to allow a normal user to change a 
  single file in /usr/local/squid/etc/timebaseallow and then run squid -k 
  reconfigure and squid -k parse.
  How can I do this? Currently my squid process is running as root.
 
  Merrymax
 Hello Maqsood
 You should use sudo with some custom groups; another option that comes to
 mind is to allow them to use cachemgr.cgi to reconfigure squid.
 Regards
 Sebastian.
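
A minimal sudoers sketch of the sudo route (the username and the squid binary 
path are assumptions; edit with visudo). It limits the account to exactly the 
two commands rather than granting general root access:

# username and install path are assumed; adjust to the local setup
nightshift ALL=(root) NOPASSWD: /usr/local/squid/sbin/squid -k parse, \
                                /usr/local/squid/sbin/squid -k reconfigure

The user would then run "sudo /usr/local/squid/sbin/squid -k reconfigure" and 
nothing else.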



  

RE: [squid-users] Allow normal user to run squid reconfigure command

2012-03-07 Thread Amos Jeffries

On 07.03.2012 21:04, Maqsood Ahmad wrote:

Hello Sebastian,

Thanks for the help. I have allowed the user rwx on cachemgr but
did not have any success.


rwx?  Manager access is not a file permission.

Manager access is set via "cachemgr_passwd" in squid.conf, giving a 
password ("blah" in this example) for the reconfigure action; that password 
is needed to perform the reconfigure (a one-line sketch follows below). When 
Squid receives an HTTP request with the right URL and password it 
reconfigures itself.

  http://wiki.squid-cache.org/Features/CacheManager
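
A minimal squid.conf sketch of that line (the directive takes the password 
first, then the actions it unlocks; "blah" is just the placeholder password 
from above):

# unlock only the reconfigure action, protected by the password "blah"
cachemgr_passwd blah reconfigure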

The cachemgr.cgi web interface is just a way to send the HTTP request 
to Squid.

 http://wiki.squid-cache.org/ManagerCgiTool

If command line access is easier for them the squidclient tool can 
perform the manager requests.

 http://wiki.squid-cache.org/SquidClientTool
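
For example, a sketch of the command they would run (assuming Squid listens 
on the default 127.0.0.1:3128 and the placeholder password above):

squidclient -h 127.0.0.1 -p 3128 mgr:reconfigure@blah

The password travels with the manager request, so no root shell is needed.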

Amos


RE: [squid-users] Allow normal user to run squid reconfigure command

2012-03-07 Thread Maqsood Ahmad

Dear


By rwx I mean I have assigned the user read/write/execute rights on the 
cachemanager.cgi file, but I am still not able to run squid -k reconfigure; it shows 
me the error which I pasted below.


Merrymax









 To: squid-users@squid-cache.org
 Date: Wed, 7 Mar 2012 21:17:12 +1300
 From: squ...@treenet.co.nz
 Subject: RE: [squid-users] Allow normal user to run squid reconfigure command

 On 07.03.2012 21:04, Maqsood Ahmad wrote:
  Hello Sebastian,
 
  Thanks for the help, i have allowed the user to rwx on cachemgr but
  did not have any successes.

 rwx? It is not a file permission.

 Manager access is set via cachemgr_passwd reconfigure blah in
 squid.conf where blah is the password needed to perform the reconfigure
 action. When Squid receives an HTTP request with the right URL and
 password it reconfigures itself.
 http://wiki.squid-cache.org/Features/CacheManager

 The cachemgr.cgi web interface is just a way to send the HTTP request
 to Squid.
 http://wiki.squid-cache.org/ManagerCgiTool

 If command line access is easier for them the squidclient tool can
 perform the manager requests.
 http://wiki.squid-cache.org/SquidClientTool

 Amos
  

RE: [squid-users] https analyze, squid rpc proxy to rpc proxy ii6 exchange2007 with ntlm

2012-03-07 Thread Clem
Thx for your reply Amos,

So the issue is squid doesn't pass through the type-1 message ...

I've checked the HTTP version on the IIS6 logs; it's 1.1 and the same with
squid.
For keepalive, I've used the only squid parameters I know (you gave me them
later), namely:
client_persistent_connections
and
server_persistent_connections

I think the link SQUID -> IIS6 RPC PROXY is represented by the cache_peer
line in my squid.conf, and I don't know if the client_persistent_connections and
server_persistent_connections parameters affect cache_peer too?

Dunno what to do now ...

Regards

Clem

-Message d'origine-
De : Amos Jeffries [mailto:squ...@treenet.co.nz] 
Envoyé : vendredi 2 mars 2012 17:46
À : squid-users@squid-cache.org
Objet : Re: [squid-users] https analyze, squid rpc proxy to rpc proxy ii6
exchange2007 with ntlm

[please remember to start your own new thread with new topics.
I only spotted this because I was answering David]

On 3/03/2012 2:33 a.m., Clem wrote:
 If I go to https://www.owasp.org/index.php/Authentication_In_IIS or
 http://www.innovation.ch/personal/ronald/ntlm.html

 NTLM Handshake

 When a client needs to authenticate itself to a proxy or server using the
 NTLM scheme then the following 4-way handshake takes place (only parts of
 the request and status line and the relevant headers are shown here; C is
 the client, S the server):

      1: C  --> S   GET ...

      2: C <--  S   401 Unauthorized
                    WWW-Authenticate: NTLM

      3: C  --> S   GET ...
                    Authorization: NTLM <base64-encoded type-1-message>

      4: C <--  S   401 Unauthorized
                    WWW-Authenticate: NTLM <base64-encoded type-2-message>

      5: C  --> S   GET ...
                    Authorization: NTLM <base64-encoded type-3-message>

      6: C <--  S   200 Ok


 I can see there are 3 auth/authorization exchanges before the 200 OK; squid seems to
 send only 1 and stop

You have described well what the proper NTLM handshake sequence is.

You need to look at the Connection: keep-alive/close values and HTTP 
version numbers coming into Squid from the client, then going from Squid 
to the server, and the response flow as well coming back from server to 
Squid then Squid to client.

 -Message d'origine-
 De : Clem

 Hello,

 What I can see :

 USER with outlook PROXY RPC enabled with NTLM auth --> PROXY RPC
 IIS6/Exchange 2007

 Outlook sends credentials, the proxy handles them and opens the exchange
 mailbox.

 USER with outlook PROXY RPC enabled with NTLM auth --> SQUID PROXY
 --> PROXY RPC IIS6/Exchange 2007

 The user sends credentials via squid, squid can't forward them exactly to
 the Exchange/IIS6 RPC Proxy and the proxy denies access.


 In the https analyzer I can see the NTLM request header is very short when
 we use squid and when we don't use it this header is very long ...

 Like this

 NTLM

TlRMTVNTUAADGAAYAJgAAABkAWQBsBoAGgBYEAAQAHIWABYAggAU

AgAABYKIogYBsR0POq4/lcuCWEXBWP01xOfE7UUAVQBSAE8AUwBJAFQATgBFAFYARQBSAFMA

YQAuAHcAYQBxAHUAZQB0AEEALQBXAEEAUQBVAEUAVAAtAEgAUAAA

AAA4lx3+SYlVeSBpzbj9B93OAQEAAABuLvLQdfjMAYEqGS4sEy38AAIAGgBFAFUA

UgBPAFMASQBUAE4ARQBWAEUAUgBTAAEAFgBFAFUAUgBPAFMASQBUAE0AQQBJAEwABAAgAGUAdQBy

AG8AcwBpAHQAbgBlAHYAZQByAHMALgBmAHIAAwA4AGUAdQByAG8AcwBpAHQAbQBhAGkAbAAuAGUA
 dQByAG8AcwBpAHQAbgBlAHYAZQByAHMALgBmAHIABQAgAGUAdQByA[.]

This is an NTLM type-3 message.

Step (5) in the sequence up top.


 For direct connection

 And whith squid :

 NTLM TlRMTVNTUAABB4IIogAGAbEdDw==

This is an NTLM type-1 message.

Step (3) in the sequence up top.



You can paste the NTLM header blob into this tool to see the packet 
structure inside it.
http://tomeko.net/online_tools/base64.php

NTLM packets start with "NTLMSSP" 0x00 <type> 0x00 0x00 0x00 ...
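
The same check can be done in a shell; a sketch using the short blob quoted 
above (GNU base64 and od assumed):

echo 'TlRMTVNTUAABB4IIogAGAbEdDw==' | base64 -d | od -A x -t x1z | head
# the dump starts with the "NTLMSSP" signature followed by the type byte (0x01 here)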

Amos



Re: [squid-users] enabling X-Authenticated-user

2012-03-07 Thread Amos Jeffries

On 7/03/2012 4:49 p.m., Brett Lymn wrote:

On Wed, Mar 07, 2012 at 03:44:23PM +1300, Amos Jeffries wrote:

cache_peer option of login=PASS, with the external_acl_type helper
returning values in both user= and password= parameters.


OK, I must be doing something dumb.  I have the following in the config:

cache_peer upstream.parent parent 8080 7 login=PASS no-query default

external_acl_type user_rewrite_type children=1 ttl=900 %LOGIN 
/opt/local/squid/bin/user_rewrite.pl
acl user_rewrite external user_rewrite_type

cache_peer_access upstream.parent   allow user_rewrite


cache_peer_access is a fast group access list. This is why I suggested 
never/always _direct lists for the testing.
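
A testing sketch of that suggestion, reusing the names from the config above:

# for testing: make the routing decision with never_direct instead of cache_peer_access
never_direct allow user_rewrite
cache_peer_access upstream.parent allow all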


Amos


Re: [squid-users] Allow normal user to run squid reconfigure command

2012-03-07 Thread Amos Jeffries

On 7/03/2012 10:22 p.m., Maqsood Ahmad wrote:

Dear


By rwx I mean I have assigned the user read/write/execute rights on the 
cachemanager.cgi file, but I am still not able to run squid -k reconfigure; it shows 
me the error which I pasted below.


cachemgr.cgi is a web page generator.

I think you need to forget the CGI, and get them to use the squidclient 
tool.


Amos


Re: [squid-users] https analyze, squid rpc proxy to rpc proxy ii6 exchange2007 with ntlm

2012-03-07 Thread Amos Jeffries

On 7/03/2012 11:27 p.m., Clem wrote:

Thx for your reply Amos,

So the issue is squid doesn't pass through the type-1 message ...

 I've checked the HTTP version on the IIS6 logs; it's 1.1 and the same with
squid.
For keepalive, I've used the only squid parameters I know (u gave me them
later) as :
client_persistent_connections
and
server_persistent_connections

 I think the link SQUID -> IIS6 RPC PROXY is represented by the cache_peer
line on my squid.conf, and I don't know if client_persistent_connections and
server_persistent_connections parameters affect cache_peer too ?

Dunno what to do now ...



My interpretation of your report so far is that the client is not even 
sending a type-1 message when using Squid. Instead it appears that it is 
trying to use Kerberos with an NTLM label. Or possibly that you 
overlooked some earlier connection(s) with the other LM message types.


If this is not 3.1.19 you can give it a try with that Squid version.

Amos


Re: [squid-users] SQUID TPROXY not working when URL is hosted on the same machine running SQUID

2012-03-07 Thread Amos Jeffries

On 6/03/2012 6:50 a.m., Vignesh Ramamurthy wrote:

Hello,

We are using squid to transparently proxy the traffic to a captive
portal that is residing on the same machine as the squid server. The
solution was working based on a NAT REDIRECT . We are moving the
solution to TPROXY based now as part of migration to IPv6. The TPROXY
works fine in intercepting traffic and also successfully able to allow
/ deny traffic to IPv6 sites. We are facing a strange issue when we
try to access a URL in the same machine that hosts the squid server.
The access hangs and squid is not able to connect to the URL. We are
using the AOL webserver to host the webpage.


As a workaround you can use the cache_peer no-tproxy option to get 
Squid to use its own IP when contacting that local server. It can still 
use the X-Forwarded-For header to get the client IP.
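
A minimal sketch of that workaround (the portal address comes from the 
original report; the peer name is made up here):

acl portal dst 2001:4b8:1::549
cache_peer 2001:4b8:1::549 parent 80 0 originserver no-tproxy no-query name=captiveportal
cache_peer_access captiveportal allow portal
cache_peer_access captiveportal deny all
never_direct allow portal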


I'm not too clear on the details, but I think it has something to do 
with the packets not actually going through routing or some layers of 
the handling TPROXY needs when shifting between processes on the same 
machine. If you want to learn the details and get it going please 
contact the netfilter people to find out what's happening to the packets 
once they leave Squid.


Amos


Re: [squid-users] Roadmap Squid 3.2

2012-03-07 Thread Amos Jeffries

On 6/03/2012 10:18 p.m., FredB wrote:

http://www.squid-cache.org/Versions/v3/3.HEAD/ + 
http://bugs.squid-cache.org/attachment.cgi?id=2640&action=diff is the most 
stable


Which is now in 3.2.0.16 :)


About rock store, for me, it's not yet ready for production.
The first benefit is the cache disk sharing; without rock the squid processes 
(workers) are independent.

- Mail original -

De: Ed W

Is Squid-3.2.0.15 the most stable release to be using for deployment
on the bleeding edge, or is 3.2.0.12 still the safest bet?  In the past
you have given some guidance as builds have moved into new functionality
vs bug squashing phases?

Are you imminently about to release 3.2.0.16?


Waiting for people to download it now.

Amos


[squid-users] Weird issue with some https pages

2012-03-07 Thread Jaime Gomez
Hi all,
 
First of all I want to apologise if this question has been solved before, but I 
haven't found anything related to this.
 
We have a very weird issue with some https web pages. Some of them are very, 
very slow. After doing some debugging we have this in our cache.log:
 
2012/03/07 11:10:41.072| httpParseInit: Request buffer is CONNECT 
ebanking.rbcdexia-is.es:443 HTTP/1.1
Host: ebanking.rbcdexia-is.es
2012/03/07 11:10:41.072| HttpMsg.cc(445) parseRequestFirstLine: parsing 
possible request: CONNECT ebanking.rbcdexia-is.es:443 HTTP/1.1
Host: ebanking.rbcdexia-is.es
2012/03/07 11:10:41.072| urlParse: Split URL 'ebanking.rbcdexia-is.es:443' into 
proto='', host='ebanking.rbcdexia-is.es', port='443', path=''
Host: ebanking.rbcdexia-is.es
2012/03/07 11:10:41.073| aclMatchDomainList: checking 'ebanking.rbcdexia-is.es'
2012/03/07 11:10:41.073| aclMatchDomainList: 'ebanking.rbcdexia-is.es' found
2012/03/07 11:10:41.073| The request CONNECT ebanking.rbcdexia-is.es:443 is 
ALLOWED, because it matched 'allowBancos'
2012/03/07 11:10:41.073| The request CONNECT ebanking.rbcdexia-is.es:443 is 
ALLOWED, because it matched 'allowBancos'
2012/03/07 11:10:41.073| clientProcessRequest: CONNECT 
'ebanking.rbcdexia-is.es:443'
2012/03/07 11:10:41.074| tunnelStart: 'CONNECT ebanking.rbcdexia-is.es:443'
2012/03/07 11:10:41.074| fd_open() FD 20 ebanking.rbcdexia-is.es:443
2012/03/07 11:10:41.074| peerSelectFoo: 'CONNECT ebanking.rbcdexia-is.es'
2012/03/07 11:10:41.075| commConnectStart: FD 20, data 0x5b5fd60, 
ebanking.rbcdexia-is.es:443
2012/03/07 11:10:41.075| commConnectStart: FD 20, cb 0x5ac9200*1, 
ebanking.rbcdexia-is.es:443
2012/03/07 11:10:41.075| ipcache_nbgethostbyname: Name 
'ebanking.rbcdexia-is.es'.
2012/03/07 11:10:41.075| ipcacheRelease: Releasing entry for 
'ebanking.rbcdexia-is.es'
2012/03/07 11:10:41.075| ipcache_nbgethostbyname: MISS for 
'ebanking.rbcdexia-is.es'
2012/03/07 11:10:41.075| idnsALookup: buf is 41 bytes for 
ebanking.rbcdexia-is.es, id = 0x5535
2012/03/07 11:11:41.079| errorConvert: %%U -- 
'https://ebanking.rbcdexia-is.es/*'
2012/03/07 11:11:41.079| errorConvert: %%U -- 
'https://ebanking.rbcdexia-is.es/*'
2012/03/07 11:11:41.079| errorConvert: %%W -- 
'?subject=CacheErrorInfo%20-%20ERR_CONNECT_FAILbody=CacheHost%3A%2010.10.0.96%0D%0AErrPage%3A%20ERR_CONNECT_FAIL%0D%0AErr%3A%20(145)%20Connection%20timed%20out%0D%0ATimeStamp%3A%20Wed,%2007%20Mar%202012%2010%3A11%3A41%20GMT%0D%0A%0D%0AClientIP%3A%2010.11.7.199%0D%0A%0D%0AHTTP%20Request%3A%0D%0ACONNECT%20%2F%20HTTP%2F1.1%0AUser-Agent%3A%20Mozilla%2F5.0%20(Windows%20NT%205.0%3B%20rv%3A8.0.1)%20Gecko%2F20100101%20Firefox%2F8.0.1%0D%0AProxy-Connection%3A%20keep-alive%0D%0AHost%3A%20ebanking.rbcdexia-is.es%0D%0A%0D%0A%0D%0A'
2012/03/07 11:11:41.081| fd_close FD 20 ebanking.rbcdexia-is.es:443
2012/03/07 11:11:41.124| httpParseInit: Request buffer is CONNECT 
ebanking.rbcdexia-is.es:443 HTTP/1.1
Host: ebanking.rbcdexia-is.es
2012/03/07 11:11:41.124| HttpMsg.cc(445) parseRequestFirstLine: parsing 
possible request: CONNECT ebanking.rbcdexia-is.es:443 HTTP/1.1
Host: ebanking.rbcdexia-is.es
2012/03/07 11:11:41.124| urlParse: Split URL 'ebanking.rbcdexia-is.es:443' into 
proto='', host='ebanking.rbcdexia-is.es', port='443', path=''
Host: ebanking.rbcdexia-is.es
2012/03/07 11:11:41.124| aclMatchDomainList: checking 'ebanking.rbcdexia-is.es'
2012/03/07 11:11:41.124| aclMatchDomainList: 'ebanking.rbcdexia-is.es' found
2012/03/07 11:11:41.125| The request CONNECT ebanking.rbcdexia-is.es:443 is 
ALLOWED, because it matched 'allowBancos'
2012/03/07 11:11:41.125| The request CONNECT ebanking.rbcdexia-is.es:443 is 
ALLOWED, because it matched 'allowBancos
 
As you can see there is a one minute timeout and then the web page begins to load 
again, successfully. We have upgraded from Squid 2 to Squid 3.1.16 without 
success. Now we are running version 3.1.19. Here is the output of the squid -v 
command:
 
Squid Cache: Version 3.1.19
configure options:  '--prefix=/usr/local/squid' '--enable-poll' 
'--enable-external-acl-helpers=ip_user,unix_group' '--enable-auth=basic' 
'--enable-basic-auth-helpers=NCSA' '--enable-async-io' '--enable-icmp' 
'--enable-useragent-log' '--enable-cache-digests' 
'--enable-follow-x-forwarded-for' '--enable-storeio=diskd,ufs,aufs' 
'--with-pthreads' '--enable-removal-policies=heap,lru' '--with-maxfd=4096' 
'--with-aufs-threads=32' '--enable-http-violations' '--enable-truncate' 
'--enable-snmp' --with-squid=/tmp/squid-3.1.19 --enable-ltdl-convenience
 
Thanks in advance.
 
Regards,
 
Jaime.




RE: [squid-users] https analyze, squid rpc proxy to rpc proxy ii6 exchange2007 with ntlm

2012-03-07 Thread Clem
I use only the last 3.2 releases, but I can try with 3.1.19... 

-Message d'origine-
De : Amos Jeffries [mailto:squ...@treenet.co.nz] 
Envoyé : mercredi 7 mars 2012 12:08
À : squid-users@squid-cache.org
Objet : Re: [squid-users] https analyze, squid rpc proxy to rpc proxy ii6
exchange2007 with ntlm

On 7/03/2012 11:27 p.m., Clem wrote:
 Thx for your reply Amos,

 So the issue is squid doesn't pass through the type-1 message ...

 I've checked the HTTP version on the IIS6 logs; it's 1.1 and the same with
 squid.
 For keepalive, I've used the only squid parameters I know (u gave me them
 later) as :
 client_persistent_connections
 and
 server_persistent_connections

 I think the link SQUID -> IIS6 RPC PROXY is represented by the cache_peer
 line on my squid.conf, and I don't know if client_persistent_connections
and
 server_persistent_connections parameters affect cache_peer too ?

 Dunno what to do now ...


My interpretation of your report so far is that the client is not even 
sending type-1 message when using Squid. Instead it appears that they 
are trying to use Kerberos, with NTLM label. Or possibly that you 
overlooked some earlier connection(s) with the other LM message types.

If this is not 3.1.19 you can give it a try with that Squid version.

Amos



[squid-users] Squid, SNMP/Zenoss and mib.txt?

2012-03-07 Thread Peter Gaughran

First timer, so please be gentle :)

I've got squid installed on RHEL 6, latest version available. I've 
installed snmp but I'm getting


/etc/squid/mib.txt: No such file or directory

when I try to do an snmpwalk. Any references to mib.txt creation refer 
to --enable-snmp when compiling squid, but as I installed it with yum, 
I'm a bit lost...


Also, does anyone use the Squid Zenpack for Zenoss? 
(http://tanso.net/zenoss/squid/)





Re: [squid-users] Weird issue with some https pages

2012-03-07 Thread Amos Jeffries

On 8/03/2012 1:00 a.m., Jaime Gomez wrote:

Hi all,

First of all I want to apologyse if this question has been solved before but I 
haven't found anything related with this.

We have a very weird issue with some https web pages. Some of them are very, 
very slow. After doing some debugging we have this in our cache.log

2012/03/07 11:10:41.072| httpParseInit: Request buffer is CONNECT 
ebanking.rbcdexia-is.es:443 HTTP/1.1
Host: ebanking.rbcdexia-is.es
2012/03/07 11:10:41.072| HttpMsg.cc(445) parseRequestFirstLine: parsing 
possible request: CONNECT ebanking.rbcdexia-is.es:443 HTTP/1.1
Host: ebanking.rbcdexia-is.es
2012/03/07 11:10:41.072| urlParse: Split URL 'ebanking.rbcdexia-is.es:443' into 
proto='', host='ebanking.rbcdexia-is.es', port='443', path=''
Host: ebanking.rbcdexia-is.es
2012/03/07 11:10:41.073| aclMatchDomainList: checking 'ebanking.rbcdexia-is.es'
2012/03/07 11:10:41.073| aclMatchDomainList: 'ebanking.rbcdexia-is.es' found
2012/03/07 11:10:41.073| The request CONNECT ebanking.rbcdexia-is.es:443 is 
ALLOWED, because it matched 'allowBancos'
2012/03/07 11:10:41.073| The request CONNECT ebanking.rbcdexia-is.es:443 is 
ALLOWED, because it matched 'allowBancos'
2012/03/07 11:10:41.073| clientProcessRequest: CONNECT 
'ebanking.rbcdexia-is.es:443'
2012/03/07 11:10:41.074| tunnelStart: 'CONNECT ebanking.rbcdexia-is.es:443'
2012/03/07 11:10:41.074| fd_open() FD 20 ebanking.rbcdexia-is.es:443
2012/03/07 11:10:41.074| peerSelectFoo: 'CONNECT ebanking.rbcdexia-is.es'
2012/03/07 11:10:41.075| commConnectStart: FD 20, data 0x5b5fd60, 
ebanking.rbcdexia-is.es:443
2012/03/07 11:10:41.075| commConnectStart: FD 20, cb 0x5ac9200*1, 
ebanking.rbcdexia-is.es:443
2012/03/07 11:10:41.075| ipcache_nbgethostbyname: Name 
'ebanking.rbcdexia-is.es'.
2012/03/07 11:10:41.075| ipcacheRelease: Releasing entry for 
'ebanking.rbcdexia-is.es'
2012/03/07 11:10:41.075| ipcache_nbgethostbyname: MISS for 
'ebanking.rbcdexia-is.es'
2012/03/07 11:10:41.075| idnsALookup: buf is 41 bytes for 
ebanking.rbcdexia-is.es, id = 0x5535


The DNS entry's TTL period has expired. Squid is looking up new ones 
when the timeout happens.


Can you resolve that domain easily yourself?  If not, that is the problem.

If you can resolve it easily please add debug section 78,5 and make a 
new trace.
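
For reference, a squid.conf sketch of that debug setting (section 78 covers 
the DNS code):

# keep everything else at level 1, raise the DNS section to level 5
debug_options ALL,1 78,5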


Amos


Re: [squid-users] Squid, SNMP/Zenoss and mib.txt?

2012-03-07 Thread Amos Jeffries

On 8/03/2012 1:14 a.m., Peter Gaughran wrote:

First timer, so please be gentle :)

I've got squid installed on RHEL 6, latest version available. I've 
installed snmp but I'm getting


/etc/squid/mib.txt: No such file or directory

when I try to do an snmpwalk? Any references to mib.txt creation refer 
to --enable-snmp when compiling squid, but as I installed it with yum, 
I'm a bit lost...


Also, any one use the Squid Zenpack for Zenoss? 
(http://tanso.net/zenoss/squid/)





SNMP should be enabled unless they disabled it. The squid -v output 
can confirm if there is anything customised in your squid. At the very 
least, if Squid allowed you to configure an snmp_port then it's available.


The absence of mib.txt is a bit annoying, but it just means you will 
have to work with raw numbers. Everything should still work normally 
without it.
You can download a copy of the 3.1 MIB.txt at 
http://bazaar.launchpad.net/~squid/squid/3.1/view/head:/src/mib.txt


The Squid OID numbers are all listed in a human readable format at 
http://wiki.squid-cache.org/Features/Snmp#Squid_OIDs for reference along 
with the details of how to work snmpwalk with Squid (there are some 
tricky gotchas walking the IP address indexed tables).
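
A sketch of such a walk (community string and host are assumptions; 3401 is 
Squid's default snmp_port, .1.3.6.1.4.1.3495 is the Squid enterprise OID, and 
-Cc relaxes the OID-ordering check that the IP-indexed tables can trip over):

snmpwalk -v 2c -c public -Cc localhost:3401 .1.3.6.1.4.1.3495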


Amos


Re: [squid-users] Weird issue with some https pages

2012-03-07 Thread Jaime Gomez
Hi Amos,
 
Thanks for your quick answer. In our squid.conf we have configured:
 
hosts_file /etc/hosts

In that file we have:

213.229.135.77  rbcdexia-is.es

Is this an easy way to resolve the domain?

Thanks,

Jaime.

 El día 07/03/2012 a las 13:20, en el mensaje
4f575280.6030...@treenet.co.nz, Amos Jeffries
squ...@treenet.co.nz escribió:

On 8/03/2012 1:00 a.m., Jaime Gomez wrote:
 Hi all,

 First of all I want to apologyse if this question has been solved
before but I haven't found anything related with this.

 We have a very weird issue with some https web pages. Some of them
are very, very slow. After doing some debugging we have this in our
cache.log

 2012/03/07 11:10:41.072| httpParseInit: Request buffer is CONNECT
ebanking.rbcdexia-is.es:443 HTTP/1.1
 Host: ebanking.rbcdexia-is.es
 2012/03/07 11:10:41.072| HttpMsg.cc(445) parseRequestFirstLine:
parsing possible request: CONNECT ebanking.rbcdexia-is.es:443 HTTP/1.1
 Host: ebanking.rbcdexia-is.es
 2012/03/07 11:10:41.072| urlParse: Split URL
'ebanking.rbcdexia-is.es:443' into proto='',
host='ebanking.rbcdexia-is.es', port='443', path=''
 Host: ebanking.rbcdexia-is.es
 2012/03/07 11:10:41.073| aclMatchDomainList: checking
'ebanking.rbcdexia-is.es'
 2012/03/07 11:10:41.073| aclMatchDomainList:
'ebanking.rbcdexia-is.es' found
 2012/03/07 11:10:41.073| The request CONNECT
ebanking.rbcdexia-is.es:443 is ALLOWED, because it matched
'allowBancos'
 2012/03/07 11:10:41.073| The request CONNECT
ebanking.rbcdexia-is.es:443 is ALLOWED, because it matched
'allowBancos'
 2012/03/07 11:10:41.073| clientProcessRequest: CONNECT
'ebanking.rbcdexia-is.es:443'
 2012/03/07 11:10:41.074| tunnelStart: 'CONNECT
ebanking.rbcdexia-is.es:443'
 2012/03/07 11:10:41.074| fd_open() FD 20 ebanking.rbcdexia-is.es:443
 2012/03/07 11:10:41.074| peerSelectFoo: 'CONNECT
ebanking.rbcdexia-is.es'
 2012/03/07 11:10:41.075| commConnectStart: FD 20, data 0x5b5fd60,
ebanking.rbcdexia-is.es:443
 2012/03/07 11:10:41.075| commConnectStart: FD 20, cb 0x5ac9200*1,
ebanking.rbcdexia-is.es:443
 2012/03/07 11:10:41.075| ipcache_nbgethostbyname: Name
'ebanking.rbcdexia-is.es'.
 2012/03/07 11:10:41.075| ipcacheRelease: Releasing entry for
'ebanking.rbcdexia-is.es'
 2012/03/07 11:10:41.075| ipcache_nbgethostbyname: MISS for
'ebanking.rbcdexia-is.es'
 2012/03/07 11:10:41.075| idnsALookup: buf is 41 bytes for
ebanking.rbcdexia-is.es, id = 0x5535

The DNS entries TTL period has expired. Squid is looking up new ones 
when the timeout happens.

 Can you resolve that domain easily yourself?  If not, that is the
problem.

If you can resolve it easily please add debug section 78,5 and make a 
new trace.

Amos


Re: [squid-users] Squid, SNMP/Zenoss and mib.txt?

2012-03-07 Thread Peter Gaughran

Perfect - that's great, thank you!



SNMP should be enabled unless they disabled it. The squid -v output 
can confirm if there is anything customised in your squid. At the very 
least if Squid allowed you to configure a snmp_port then its available.


The absence of mib.txt is a bit annoying, but it just means you will 
have to work with raw numbers. Everything should still work normally 
without it.
You can download a copy of the 3.1 MIB.txt at 
http://bazaar.launchpad.net/~squid/squid/3.1/view/head:/src/mib.txt


The Squid OID numbers are all listed in a human readable format at 
http://wiki.squid-cache.org/Features/Snmp#Squid_OIDs for reference 
along with the details of how to work snmpwalk with Squid (there are 
some tricky gotchas walking the IP address indexed tables).


Amos




RE: [squid-users] Squid, SNMP/Zenoss and mib.txt?

2012-03-07 Thread Baird, Josh
For what it's worth, I have written a ZenPack for Squid.  Contact me off list 
if you want a copy.

Thanks,

Josh

-Original Message-
From: Peter Gaughran [mailto:peter.gaugh...@nuim.ie] 
Sent: Wednesday, March 07, 2012 9:55 AM
To: Amos Jeffries
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Squid, SNMP/Zenoss and mib.txt?

Perfect - that's great, thank you!


 SNMP should be enabled unless they disabled it. The squid -v output 
 can confirm if there is anything customised in your squid. At the very 
 least if Squid allowed you to configure a snmp_port then its available.

 The absence of mib.txt is a bit annoying, but it just means you will 
 have to work with raw numbers. Everything should still work normally 
 without it.
 You can download a copy of the 3.1 MIB.txt at 
 http://bazaar.launchpad.net/~squid/squid/3.1/view/head:/src/mib.txt

 The Squid OID numbers are all listed in a human readable format at 
 http://wiki.squid-cache.org/Features/Snmp#Squid_OIDs for reference 
 along with the details of how to work snmpwalk with Squid (there are 
 some tricky gotchas walking the IP address indexed tables).

 Amos



Re: [squid-users] SQUID TPROXY option does not work when URL is on the same machine as SQUID

2012-03-07 Thread Eliezer Croitoru

you need to add the first rule such as:
ip6tables -t mangle -A PREROUTING -p tcp -d (IP of the machine) --dport 
80 -j ACCEPT

= here all the other iptables rules =
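
A sketch of the resulting order, using the web server's address from the 
report quoted below:

# exempt traffic to the box's own web server before the TPROXY interception rules
ip6tables -t mangle -A PREROUTING -p tcp -d 2001:4b8:1::549 --dport 80 -j ACCEPT
# ... followed by the DIVERT / TPROXY rules exactly as quoted below ...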

Regards
Eliezer

On 05/03/2012 20:09, Vignesh Ramamurthy wrote:

Hello,

We are using squid to transparently proxy the traffic to a captive
portal that is residing on the same machine as the squid server. The
solution was working based on a NAT REDIRECT . We are moving the
solution to TPROXY based now as part of migration to IPv6. The TPROXY
works fine in intercepting traffic and also successfully able to allow
/ deny traffic to IPv6 sites. We are facing a strange issue when we
try to access a URL in the same machine that hosts the squid server.
The access hangs and squid is not able to connect to the URL. We are
using the AOL webserver to host the webpage.

All the configurations as recommended by the squid sites are done.
-  Firewall rules with TPROXY and DIVERT chian has been setup as below

ip6tables -t mangle -N DIVERT
ip6tables -t mangle -A DIVERT -j MARK --set-mark 1
ip6tables -t mangle -A DIVERT -j ACCEPT
ip6tables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
ip6tables -t mangle -A PREROUTING -m tos --tos 0x20 -j ACCEPT
ip6tables -t mangle -A PREROUTING  -i eth0.20 -p tcp --dport 80 -j
TPROXY --tproxy-mark 0x1/0x1 --on-port 8085
ip6tables -t mangle -A PREROUTING -j ACCEPT

-  Policy routing to route proxied traffic to the local box is also
done as recommended
16383:  from all fwmark 0x1 lookup 100
16390:  from all lookup local
32766:  from all lookup main

ip -6 route show table 100
local default dev lo  metric 1024
local default dev eth0.20  metric 1024


Squid configuration used is standard and have provided below a
snapshot of cache.log. Running squid in full debug level with max
logging. I have provided the final set of logs for this transaction.
The URL accessed in the test is
http://[2001:4b8:1::549]/sample_page.adp.

Appreciate any assistance / pointers to solve this. Please do let me
know if any additional information is required.

2012/03/05 04:29:26.320 kid1| HTTP Server REQUEST:
-
GET /sample_page.adp HTTP/1.1
User-Agent: w3m/0.5.2
Accept: text/html, text/*;q=0.5, image/*, application/*, audio/*, multipart/*
Accept-Encoding: gzip, compress, bzip, bzip2, deflate
Accept-Language: en;q=1.0
Host: [2001:4b8:1::549]
Via: 1.0 nmd.tst26.aus.wayport.net (squid/3.2.0.15-20120228-r11519)
X-Forwarded-For: 2001:4b8:1:5:250:56ff:feb2:2cfc
Cache-Control: max-age=259200
Connection: keep-alive


--
2012/03/05 04:29:26.320 kid1| Write.cc(21) Write:
local=[2001:4b8:1:5:250:56ff:feb2:2cfc]:43673
remote=[2001:4b8:1::549]:80 FD 13 flags=25: sz 417: asynCall
0x871f6e8*1
2012/03/05 04:29:26.320 kid1| ModPoll.cc(149) SetSelect: FD 13,
type=2, handler=1, client_data=0x84df560, timeout=0
2012/03/05 04:29:26.320 kid1| HttpStateData status out: [ job7]
2012/03/05 04:29:26.321 kid1| leaving AsyncJob::start()
2012/03/05 04:29:26.321 kid1| event.cc(252) checkEvents: checkEvents
2012/03/05 04:29:26.321 kid1| The AsyncCall MaintainSwapSpace
constructed, this=0x871ff48 [call204]
2012/03/05 04:29:26.321 kid1| event.cc(261) will call
MaintainSwapSpace() [call204]
2012/03/05 04:29:26.321 kid1| entering MaintainSwapSpace()
2012/03/05 04:29:26.321 kid1| AsyncCall.cc(34) make: make call
MaintainSwapSpace [call204]
2012/03/05 04:29:26.321 kid1| event.cc(344) schedule: schedule: Adding
'MaintainSwapSpace', in 1.00 seconds
2012/03/05 04:29:26.321 kid1| leaving MaintainSwapSpace()
2012/03/05 04:29:27.149 kid1| event.cc(252) checkEvents: checkEvents
2012/03/05 04:29:27.149 kid1| The AsyncCall memPoolCleanIdlePools
constructed, this=0x871ff48 [call205]
2012/03/05 04:29:27.149 kid1| event.cc(261) will call
memPoolCleanIdlePools() [call205]
2012/03/05 04:29:27.149 kid1| entering memPoolCleanIdlePools()
2012/03/05 04:29:27.149 kid1| AsyncCall.cc(34) make: make call
memPoolCleanIdlePools [call205]
2012/03/05 04:29:27.150 kid1| event.cc(344) schedule: schedule: Adding
'memPoolCleanIdlePools', in 15.00 seconds
2012/03/05 04:29:27.150 kid1| leaving memPoolCleanIdlePools()
2012/03/05 04:29:27.165 kid1| event.cc(252) checkEvents: checkEvents
2012/03/05 04:29:27.165 kid1| The AsyncCall fqdncache_purgelru
constructed, this=0x871ff48 [call206]
2012/03/05 04:29:27.165 kid1| event.cc(261) will call
fqdncache_purgelru() [call206]
2012/03/05 04:29:27.165 kid1| entering fqdncache_purgelru()
2012/03/05 04:29:27.165 kid1| AsyncCall.cc(34) make: make call
fqdncache_purgelru [call206]
2012/03/05 04:29:27.165 kid1| event.cc(344) schedule: schedule: Adding
'fqdncache_purgelru', in 10.00 seconds
2012/03/05 04:29:27.166 kid1| leaving fqdncache_purgelru()




Re: [squid-users] Roadmap Squid 3.2

2012-03-07 Thread Alex Rousskov
On 03/05/2012 03:15 PM, Amos Jeffries wrote:

 The checklist I have to work by is at
 http://wiki.squid-cache.org/ReleaseProcess#Squid-3
 We are looping around at the freeze stage (3), waiting to reach 0
 major+ bugs before we can start the stable release countdown stages (4+).
 
 
 We are intending 3.2 to supersede and obsolete all 3.x and 2.x series
 releases. Which means there are just over 50 bugs rated major or higher
 which need to be confirmed as fixed in 3.2, or downgraded before 3.2 can
 start its stability countdown.
 
   A lot of these bugs, particularly 2.x ones, just need somebody to
 check and verify that the described behaviour is not reproducible in 3.2
 anymore. At which point they can be closed against target of 3.2.
 Another half dozen or so got closed this month, but there are many more
 to go.

I think it is neither reasonable nor practical to make Squid v3.2's
stable designation dependent on 2.x bugs, especially those filed years
ago with insufficient information. Squid v3.2 can be stable regardless
of what bugs the old 2.x version had.

Sure, it would be very good to go through and close all bugs, but I do
not see enough folks jumping at such opportunity in the foreseeable
future (at least partially because such exercise would have little to do
with Squid v3.2 actual stability!).

I suggest that v2.x bugs without Squid3 confirmations are ignored when
it comes to deciding whether Squid v3.2 is stable.


 I had reported some problems with rock store but maybe it can be
 considered as an experimental feature for the moment?
 
 It is experimental until there has been at least one stable cycle of
 wide use to wrinkle out any minor bugs and edge cases. If the bug you
 have reported can be considered normal or lower then it will not block
 the stable release. Keeping in mind that the shared memory change is a
 feature affecting everybody, so the precise location of the bug impacts
 its importance a lot.

FWIW, there are currently no open major+ bugs for Rock Store AFAICT.


Cheers,

Alex.


Re: [squid-users] Weird issue with some https pages

2012-03-07 Thread Amos Jeffries

On 08.03.2012 02:09, Jaime Gomez wrote:

Hi Amos,

Thanks for your quick answer. In our squid.conf we have configured:

hosts_file /etc/hosts

In that file we have:

213.229.135.77  rbcdexia-is.es

Is this an easy way to resolve the domain?



No. "Resolve" in DNS means doing / performing the lookup.

 e.g. entering this on the command line: host -t A ebanking.rbcdexia-is.es



Amos



Re: [squid-users] Roadmap Squid 3.2

2012-03-07 Thread Amos Jeffries

On 08.03.2012 06:35, Alex Rousskov wrote:

On 03/05/2012 03:15 PM, Amos Jeffries wrote:


The checklist I have to work by is at
http://wiki.squid-cache.org/ReleaseProcess#Squid-3
We are looping around at the freeze stage (3), waiting to reach 0
major+ bugs before we can start the stable release countdown stages 
(4+).



We are intending 3.2 to supersede and obsolete all 3.x and 2.x 
series
releases. Which means there are just over 50 bugs rated major or 
higher
which need to be confirmed as fixed in 3.2, or downgraded before 3.2 
can

start its stability countdown.

  A lot of these bugs, particularly 2.x ones, just need somebody to
check and verify that the described behaviour is not reproducible in 
3.2

anymore. At which point they can be closed against target of 3.2.
Another half dozen or so got closed this month, but there are many 
more

to go.


I think it is neither reasonable nor practical to make Squid v3.2
stable designation dependent on 2.x bugs, especially those filed 
years
ago with insufficient information. Squid v3.2 can be stable 
regardless

of what bugs the old 2.x version had.


The 2.x and 3.0 ones, last updated 6 months or more ago with insufficient 
information to even reproduce, should be closed with a 4-week warning / 
last call for info. They should all have minor/normal status anyway and 
not be what is under discussion here.


Most of the major+ ones should have sufficient info for someone with 
specific environment or software to reproduce. A lot just seem to 
require specific software we do not have easily available within the dev 
team.
 The LDAP special-characters and escaping bugs, for instance, just need 
someone with a real LDAP server (not a test script) to configure a dummy 
account and see if login works now. A real server is important there 
because it is the server's interpretation of helper calls which is the 
bug.




Sure, it would be very good to go through and close all bugs, but I 
do

not see enough folks jumping at such opportunity in the foreseeable
future (at least partially because such exercise would have little to 
do

with Squid v3.2 actual stability!).

I suggest that v2.x bugs without Squid3 confirmations are ignored 
when

it comes to deciding whether Squid v3.2 is stable.


"All" is the dream; "major or higher" is the release requirement. I 
made the point that they just need to be checked by someone with the 
ability/environment to reproduce.



Thank you for your help in weeding out a few more today. We just hit 45 
:)


Amos



[squid-users] Squid 3.2: segfault at 0 ip (null) sp bfa8e03c using iptables + transparent mode

2012-03-07 Thread David Touzeau

Dear,

I'm using Squid Cache: Version 3.2.0.15-20120306-r11529 in i386 on 
Ubuntu 10.04

iptables v1.4.4 and kernel   2.6.32-38-generic-pae #83-Ubuntu SMP
In transparent mode with iptables.

Every 10 minutes we are unable to access the Internet and there is a squid 
crash.

Restarting the squid service solves the issue.

Is there a tip/trick to fix it ?

[   14.445583] [drm] Initialized radeon 2.0.0 20080528 for :04:00.0 
on minor 0

[   14.694306] vga16fb: initializing
[   14.694309] vga16fb: mapped to 0xc00a
[   14.694312] vga16fb: not registering due to another framebuffer present
[   14.883342] Console: switching to colour frame buffer device 128x48
[   16.883375] Loading iSCSI transport class v2.0-870.
[   17.722963] iscsi: registered transport (tcp)
[   18.491243] iscsi: registered transport (iser)
[   25.208015] eth0: no IPv6 routers present
[   44.602329] ip_tables: (C) 2000-2006 Netfilter Core Team
[   44.676368] nf_conntrack version 0.5.0 (16384 buckets, 65536 max)
[   44.676699] CONFIG_NF_CT_ACCT is deprecated and will be removed soon. 
Please use
[   44.676701] nf_conntrack.acct=1 kernel parameter, acct=1 nf_conntrack 
module option or

[   44.676702] sysctl net.netfilter.nf_conntrack_acct=1 to enable it.
[  392.296569] squid[7663]: segfault at 0 ip (null) sp bfa8e03c error 14 
in squid[8048000+415000]
[  658.532544] squid[8352]: segfault at 0 ip (null) sp bfa52cdc error 14 
in squid[8048000+415000]
[  740.928753] squid[8429]: segfault at 0 ip (null) sp bfe9f12c error 14 
in squid[8048000+415000]
[  760.620663] squid[8377]: segfault at 0 ip (null) sp bfc02e2c error 14 
in squid[8048000+415000]
[199121.864727] squid[32681]: segfault at 49 ip 082ab397 sp bfd39740 
error 4 in squid[8048000+415000]


Compilation options :

Squid Cache: Version 3.2.0.15-20120306-r11529
configure options:  '--prefix=/usr' '--includedir=/include' 
'--mandir=/share/man' '--infodir=/share/info' '--localstatedir=/var' 
'--libexecdir=/lib/squid3' '--disable-maintainer-mode' 
'--disable-dependency-tracking' '--srcdir=.' 
'--datadir=/usr/share/squid3' '--sysconfdir=/etc/squid3' 
'--enable-gnuregex' '--enable-forward-log' 
'--enable-removal-policy=heap' '--enable-follow-x-forwarded-for' 
'--enable-http-violations' '--enable-large-cache-files' 
'--enable-removal-policies=lru,heap' '--enable-err-languages=English' 
'--enable-default-err-language=English' '--enable-arp-acl' 
'--with-maxfd=32000' '--with-large-files' '--disable-dlmalloc' 
'--with-pthreads' '--enable-esi' '--enable-storeio=aufs,diskd,ufs,rock' 
'--with-aufs-threads=10' '--with-maxfd=16384' 
'--enable-x-accelerator-vary' '--with-dl' '--enable-truncate' 
'--enable-linux-netfilter' '--with-filedescriptors=16384' 
'--enable-wccpv2' '--enable-eui' '--enable-auth' '--enable-auth-basic' 
'--enable-auth-digest' '--enable-auth-negotiate-helpers' 
'--enable-log-daemon-helpers' '--enable-url-rewrite-helpers' 
'--enable-auth-ntlm' '--with-default-user=squid' '--enable-icap-client' 
'--enable-cache-digests' '--enable-icap-support' '--enable-poll' 
'--enable-epoll' '--enable-async-io' '--enable-delay-pools' 
'--enable-ssl' '--enable-ssl-crtd' 'CFLAGS=-DNUMTHREADS=60 -O3 -pipe 
-fomit-frame-pointer -funroll-loops -ffast-math -fno-exceptions'

r



Re: [squid-users] enabling X-Authenticated-user

2012-03-07 Thread Brett Lymn
On Wed, Mar 07, 2012 at 11:58:31PM +1300, Amos Jeffries wrote:
 
 cache_peer_access is a fast group access list. This is why  suggested 
 never/always _direct lists for the testing.
 

Yep, me being dumb :)  I did as you suggested and used the never_direct
deny and my debug lines in the external helper fire _but_ there are two
dumb things happening:

1) The credentials being passed to the upstream are not rewritten - if I
decode the basic auth it has my real password going to the upstream.

2) For some unknown (to me) reason the upstream is prompting for auth
even though the basic auth header is there.  It does not do this if I
have login=*:password but if I set login=PASS the upstream prompts.

-- 
Brett Lymn




Re: [squid-users] enabling X-Authenticated-user

2012-03-07 Thread Brett Lymn
On Thu, Mar 08, 2012 at 10:37:01AM +1030, Brett Lymn wrote:
 
 2) For some unknown (to me) reason the upstream is prompting for auth
 even though the basic auth header is there.  It does not do this if I
 have login=*:password but if I set login=PASS the upstream prompts.
 

Scratch this one - the stupid stupid stupid upstream seems to cache the
username/cred pair after the first time it does an ldap lookup and will
reject the authentication if the password differs.  Once I cleared the caches
this went away - when the auth rewrite works this won't be a problem.
It looks like there is a setting on the upstream that will clear the
cached entry if auth fails; why this isn't on by default is a mystery.

-- 
Brett Lymn




Re: [squid-users] Roadmap Squid 3.2

2012-03-07 Thread Henrik Nordström
ons 2012-03-07 klockan 10:35 -0700 skrev Alex Rousskov:

 I think it is neither reasonable nor practical to make Squid v3.2
 stable designation dependent on 2.x bugs, especially those filed years
 ago with insufficient information. Squid v3.2 can be stable regardless
 of what bugs the old 2.x version had.

Yes.

The 3.2 release should not be held up by Squid-2 bugs - only by confirmed Squid-3.2
bugs affecting new functionality or indicating a regression from 3.1.
Plus any known significant security issues which may impact 3.2.

We can't aim for having each new release fixing all possibly known bugs
in all earlier releases, but it's reasonable that we do not knowingly
introduce new bugs in old functionality or release new functionality
known to not be working well enough.

It is acceptable to have some known bugs in new functionality, as long
as they do not impose any security issues or make the functionality
useless.

Regards
Henrik



Re: [squid-users] Squid 3.2: segfault at 0 ip (null) sp bfa8e03c using iptables + transparent mode

2012-03-07 Thread Amos Jeffries

On 08.03.2012 12:51, David Touzeau wrote:

Dear,

I'm using Squid Cache: Version 3.2.0.15-20120306-r11529 in i386 on
Ubuntu 10.04
iptables v1.4.4 and kernel   2.6.32-38-generic-pae #83-Ubuntu SMP
In transparent mode with iptables.

Each 10 Minutes we are unable to access to Internet and there is a
squid crash.
Restart squid service solve the issue.

Is there a tip/trick to fix it ?

[   14.445583] [drm] Initialized radeon 2.0.0 20080528 for
:04:00.0 on minor 0
[   14.694306] vga16fb: initializing
[   14.694309] vga16fb: mapped to 0xc00a
[   14.694312] vga16fb: not registering due to another framebuffer 
present
[   14.883342] Console: switching to colour frame buffer device 
128x48

[   16.883375] Loading iSCSI transport class v2.0-870.
[   17.722963] iscsi: registered transport (tcp)
[   18.491243] iscsi: registered transport (iser)
[   25.208015] eth0: no IPv6 routers present
[   44.602329] ip_tables: (C) 2000-2006 Netfilter Core Team
[   44.676368] nf_conntrack version 0.5.0 (16384 buckets, 65536 max)
[   44.676699] CONFIG_NF_CT_ACCT is deprecated and will be removed
soon. Please use
[   44.676701] nf_conntrack.acct=1 kernel parameter, acct=1
nf_conntrack module option or
[   44.676702] sysctl net.netfilter.nf_conntrack_acct=1 to enable it.
[  392.296569] squid[7663]: segfault at 0 ip (null) sp bfa8e03c error
14 in squid[8048000+415000]
[  658.532544] squid[8352]: segfault at 0 ip (null) sp bfa52cdc error
14 in squid[8048000+415000]
[  740.928753] squid[8429]: segfault at 0 ip (null) sp bfe9f12c error
14 in squid[8048000+415000]
[  760.620663] squid[8377]: segfault at 0 ip (null) sp bfc02e2c error
14 in squid[8048000+415000]
[199121.864727] squid[32681]: segfault at 49 ip 082ab397 sp bfd39740
error 4 in squid[8048000+415000]



Any core backtrace info as to what line of code [8048000+415000] is?
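
A rough sketch of how to capture one (binary and core paths are assumptions 
for this build; coredump_dir in squid.conf decides where the core file lands):

ulimit -c unlimited        # allow core files before starting Squid
# after the next crash:
gdb -batch -ex bt /usr/sbin/squid /var/spool/squid/core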


Please try the .16 package too. Several more important bug fixes went 
in there.




Compilation options :



Quite a few obsolete options in that list,

http://www.squid-cache.org/Versions/v3/3.0/RELEASENOTES.html#removedoptions


'--disable-dlmalloc'
'--enable-forward-log'
'--enable-large-cache-files'
'--enable-truncate'


http://www.squid-cache.org/Versions/v3/3.1/RELEASENOTES.html#removedoptions


'--enable-default-err-language=English'
'--enable-err-languages=English'


http://www.squid-cache.org/Versions/v3/3.2/RELEASENOTES.html#removedoptions


'--enable-auth-negotiate-helpers'
'--enable-arp-acl'



 ... guess which setting it will build with:

'--with-maxfd=32000'
'--with-filedescriptors=16384'
'--with-maxfd=16384'


or

'--with-aufs-threads=10'
CFLAGS=-DNUMTHREADS=60


... and one option that never existed:

'--enable-icap-support'



Amos


[squid-users] NTLM passthru authentication

2012-03-07 Thread 巍俊葛
Hi,

Can someone take a look at the following issue which I ran into?
Here are the details:
Outline: squid 2.6 as the reverse-proxy for an IIS (SharePoint) site.
IIS uses NTLM authentication.

According to the squid documentation, squid 2.6+ and squid 3.1+ support
NTLM passthru authentication by Connection Pinning.

My problem is that it always shows a 404 error code.
No NTLM prompt window is shown.

16.178.121.18  my desktop IP
 192.57.84.244  squid reverse proxy IP
16.173.232.237  IIS(SharePoint) site.

Red Hat Enterprise Linux Server release 5.7 (Tikanga) (64bit)
/usr/sbin/squid -v
Squid Cache: Version 2.6.STABLE21
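
A minimal reverse-proxy sketch of that setup (the public hostname is a 
placeholder; connection-auth=on is my assumption for keeping the NTLM exchange 
pinned to one server connection, so check it against the 2.6 cache_peer 
documentation):

# placeholder site name; 16.173.232.237 is the IIS/SharePoint box listed above
http_port 80 accel defaultsite=sharepoint.example.com
cache_peer 16.173.232.237 parent 80 0 no-query originserver connection-auth=on name=sp
acl sp_site dstdomain sharepoint.example.com
http_access allow sp_site
cache_peer_access sp allow sp_site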

The following packets are captured by tshark.

 1   0.00 16.178.121.18 -> 192.57.84.244 TCP 64833 > http [SYN] Seq=0 Win=8192 Len=0 MSS=1380 WS=2

  00 50 56 ac 00 c6 00 22 0c d5 bc 00 08 00 45 00   .PV..E.
0010  00 34 3a 59 40 00 76 06 2b 79 10 b2 79 12 c0 39   .4:Y@.v.+y..y..9
0020  54 f4 fd 41 00 50 e8 0d e1 a5 00 00 00 00 80 02   T..A.P..
0030  20 00 e9 2e 00 00 02 04 05 64 01 03 03 02 01 01d..
0040  04 02 ..

 2   0.16 192.57.84.244 -> 16.178.121.18 TCP http > 64833 [SYN, ACK] Seq=0 Ack=1 Win=5840 Len=0 MSS=1460 WS=7

  00 22 0c d5 bc 00 00 50 56 ac 00 c6 08 00 45 00   ..PV.E.
0010  00 34 00 00 40 00 40 06 9b d2 c0 39 54 f4 10 b2   .4..@.@9T...
0020  79 12 00 50 fd 41 eb ce 13 67 e8 0d e1 a6 80 12   y..P.A...g..
0030  16 d0 f2 c2 00 00 02 04 05 b4 01 01 04 02 01 03   
0040  03 07 ..

 3   0.258861 16.178.121.18 -> 192.57.84.244 TCP 64833 > http [ACK] Seq=1 Ack=1 Win=66240 Len=0

  00 50 56 ac 00 c6 00 22 0c d5 bc 00 08 00 45 00   .PV..E.
0010  00 28 3a 5a 40 00 76 06 2b 84 10 b2 79 12 c0 39   .(:Z@.v.+...y..9
0020  54 f4 fd 41 00 50 e8 0d e1 a6 eb ce 13 68 50 10   T..A.P...hP.
0030  40 b0 09 b5 00 00 ff ff ff ff ff ff   @...

 4   0.260075 16.178.121.18 -> 192.57.84.244 HTTP GET /SitePages/Square.aspx HTTP/1.1

  00 50 56 ac 00 c6 00 22 0c d5 bc 00 08 00 45 00   .PV..E.
0010  02 63 3a 5b 40 00 76 06 29 48 10 b2 79 12 c0 39   .c:[@.v.)H..y..9
0020  54 f4 fd 41 00 50 e8 0d e1 a6 eb ce 13 68 50 18   T..A.P...hP.
0030  40 b0 01 21 00 00 47 45 54 20 2f 53 69 74 65 50   @..!..GET /SiteP
0040  61 67 65 73 2f 53 71 75 61 72 65 2e 61 73 70 78   ages/Square.aspx
0050  20 48 54 54 50 2f 31 2e 31 0d 0a 41 63 63 65 70HTTP/1.1..Accep
0060  74 3a 20 61 70 70 6c 69 63 61 74 69 6f 6e 2f 78   t: application/x
0070  2d 6d 73 2d 61 70 70 6c 69 63 61 74 69 6f 6e 2c   -ms-application,
0080  20 69 6d 61 67 65 2f 6a 70 65 67 2c 20 61 70 70image/jpeg, app
0090  6c 69 63 61 74 69 6f 6e 2f 78 61 6d 6c 2b 78 6d   lication/xaml+xm
00a0  6c 2c 20 69 6d 61 67 65 2f 67 69 66 2c 20 69 6d   l, image/gif, im
00b0  61 67 65 2f 70 6a 70 65 67 2c 20 61 70 70 6c 69   age/pjpeg, appli
00c0  63 61 74 69 6f 6e 2f 78 2d 6d 73 2d 78 62 61 70   cation/x-ms-xbap
00d0  2c 20 61 70 70 6c 69 63 61 74 69 6f 6e 2f 76 6e   , application/vn
00e0  64 2e 6d 73 2d 65 78 63 65 6c 2c 20 61 70 70 6c   d.ms-excel, appl
00f0  69 63 61 74 69 6f 6e 2f 76 6e 64 2e 6d 73 2d 70   ication/vnd.ms-p
0100  6f 77 65 72 70 6f 69 6e 74 2c 20 61 70 70 6c 69   owerpoint, appli
0110  63 61 74 69 6f 6e 2f 6d 73 77 6f 72 64 2c 20 2a   cation/msword, *
0120  2f 2a 0d 0a 41 63 63 65 70 74 2d 4c 61 6e 67 75   /*..Accept-Langu
0130  61 67 65 3a 20 65 6e 2d 55 53 0d 0a 55 73 65 72   age: en-US..User
0140  2d 41 67 65 6e 74 3a 20 4d 6f 7a 69 6c 6c 61 2f   -Agent: Mozilla/
0150  34 2e 30 20 28 63 6f 6d 70 61 74 69 62 6c 65 3b   4.0 (compatible;
0160  20 4d 53 49 45 20 37 2e 30 3b 20 57 69 6e 64 6fMSIE 7.0; Windo
0170  77 73 20 4e 54 20 36 2e 31 3b 20 57 4f 57 36 34   ws NT 6.1; WOW64
0180  3b 20 54 72 69 64 65 6e 74 2f 34 2e 30 3b 20 53   ; Trident/4.0; S
0190  4c 43 43 32 3b 20 2e 4e 45 54 20 43 4c 52 20 32   LCC2; .NET CLR 2
01a0  2e 30 2e 35 30 37 32 37 3b 20 2e 4e 45 54 20 43   .0.50727; .NET C
01b0  4c 52 20 33 2e 35 2e 33 30 37 32 39 3b 20 2e 4e   LR 3.5.30729; .N
01c0  45 54 20 43 4c 52 20 33 2e 30 2e 33 30 37 32 39   ET CLR 3.0.30729
01d0  3b 20 4d 65 64 69 61 20 43 65 6e 74 65 72 20 50   ; Media Center P
01e0  43 20 36 2e 30 3b 20 49 6e 66 6f 50 61 74 68 2e   C 6.0; InfoPath.
01f0  32 3b 20 2e 4e 45 54 34 2e 30 43 3b 20 41 73 6b   2; .NET4.0C; Ask
0200  54 62 50 54 56 2f 35 2e 31 34 2e 31 2e 32 30 30   TbPTV/5.14.1.200
0210  30 37 29 0d 0a 41 63 63 65 70 74 2d 45 6e 63 6f   07)..Accept-Enco
0220  64 69 6e 67 3a 20 67 7a 69 70 2c 20 64 65 66 6c   ding: gzip, defl
0230  61 74 65 0d 0a 48 6f 73 74 3a 20 75 6b 77 74 73   ate..Host: ukwts
0240  76 75 6c 78 33 38 30 2e 65 6c 61 62 73 2e 65 64   vulx380.elabs.ed
0250  73 2e 63 6f 6d 0d 0a 43 6f 6e 6e 65 63 74 69 6f   s.com..Connectio
0260  6e 3a 20 4b 65 65 70 2d 41 6c 69 76 65 0d 0a 0d   n: Keep-Alive...
0270  0a.

 5   0.260125 192.57.84.244 - 

[squid-users] Disabling client-initiated renegotiation on https_port

2012-03-07 Thread Marcus Zoller
Hello guys,

I am running squid as a reverse proxy and can't find a way to disable 
support for client-initiated renegotiation. I have tested this using

echo R | openssl s_client -connect :443

which returns

RENEGOTIATING
.

The https_port configuration looks like:

https_port 172.16.0.2:443 accel defaultsite= protocol=https \
    cipher=RC4-SHA:AES256+SHA:AES128+SHA:3DES+SHA:!ADH:!EDH \
    options=ALL:NO_SSLv2,NO_SSLv3,CIPHER_SERVER_PREFERENCE

I have recompiled the centos 6 x86 rpm package using squid 3.1.19. The 
configure line look like the following:
%configure \
   --exec_prefix=/usr \
   --libexecdir=%{_libdir}/squid \
   --localstatedir=/var \
   --datadir=%{_datadir}/squid \
   --sysconfdir=/etc/squid \
   --with-logdir='$(localstatedir)/log/squid' \
   --with-pidfile='$(localstatedir)/run/squid.pid' \
   --disable-dependency-tracking \
   --enable-arp-acl \
   --enable-follow-x-forwarded-for \
   --enable-auth=basic,digest,ntlm,negotiate \
   --enable-basic-auth-helpers=LDAP,MSNT,NCSA,PAM,SMB,YP,getpwnam,multi-domain-NTLM,SASL,DB,POP3,squid_radius_auth \
   --enable-ntlm-auth-helpers=smb_lm,no_check,fakeauth \
   --enable-digest-auth-helpers=password,ldap,eDirectory \
   --enable-negotiate-auth-helpers=squid_kerb_auth \
   --enable-external-acl-helpers=ip_user,ldap_group,session,unix_group,wbinfo_group \
   --enable-cache-digests \
   --enable-cachemgr-hostname=localhost \
   --enable-delay-pools \
   --enable-epoll \
   --enable-icap-client \
   --disable-ident-lookups \
   %ifnarch ppc64 ia64 x86_64 s390x
   --with-large-files \
   %endif
   --enable-linux-netfilter \
   --enable-referer-log \
   --enable-removal-policies=heap,lru \
   --enable-snmp \
   --enable-ssl \
   --enable-storeio=aufs,diskd,ufs \
   --enable-useragent-log \
   --enable-wccpv2 \
   --enable-esi \
   --enable-http-violations \
   --with-aio \
   --with-default-user=squid \
   --with-filedescriptors=16384 \
   --with-dl \
   --with-openssl=/root/rpmbuild/BUILD/openssl-1.0.0 \
   --with-pthreads

The referenced openssl source is the build root of the latest RHEL rpm package: 
openssl-1.0.0-20.el6.2.2.x86_64

rpm -q --changelog openssl | grep CVE

I have found in src/ssl_support.cc that options is initialized with SSL_OP_ALL. 
The changelog from the openssl package says:

rpm -q --changelog openssl | grep CVE-2009-3555

fix CVE-2009-3555 - note that the fix is bypassed if SSL_OP_ALL is used

so I tried to change my https_port options to start with !ALL but this changes 
nothing.

I don't know much about the squid or openssl source but after reading the docs 
for SSL_CTX_set_options I understand that by default LEGACY_SERVER_CONNECT 
is set, which enables renegotiation, but I assume this applies to the client 
part of the code only. 

From all I have seen nearly everyone (i.e. apache) has special code added to 
prevent this renegotiation, like the following from the apache 2.2.14 patch:
int state = SSL_get_state(ssl);
+
+    if (state == SSL3_ST_SR_CLNT_HELLO_A
+        || state == SSL23_ST_SR_CLNT_HELLO_A) {
+        scr->reneg_state = RENEG_ABORT;
+        ap_log_cerror(APLOG_MARK, APLOG_ERR, 0, c,
+                      "rejecting client initiated renegotiation");

I was unable to find anything like this within squid's source, but from other 
posts I've seen that someone else already fixed this problem; unfortunately 
it is not clear how. 

So now I am wondering what I am doing wrong, or whether there is simply no 
support available for disabling this functionality?

BTW: The openssl version that implements the functionality seems to have 
SSL_OP_ALLOW_UNSAFE_LEGACY_RENEGOTIATION as a new constant. To make sure that I 
recompile against this version I have added this constant to the ssl_options 
array in ssl_support.cc and the code still compiled fine. So I assume I am 
using the right version. My next step would be replacing the redhat openssl 
package with a fresh build from the latest openssl source (without all the 
redhat patching), but I would prefer not to need a custom build, to keep 
updating simple.


Many thanks for any ideas / help!

Cheers,
Marcus



Re: [squid-users] Re: squid with squidguard issue

2012-03-07 Thread Ebed
On 06/03/2012 9:55, Amos Jeffries wrote:

  Voila, you are blocking sites using a black list my friend.
 
  btw, just ignore the stupid warning messages. they do not affect the
  functionality of this feature and ive learned
  to just ignore them.
 
 
  Which warning messages? 
   
I guess you mistyped the filename; my last warning message with this
config option was .../etc/squid/blacklist.conf is not found. Which made
me take a break for a while, and suddenly I found the culprit... I typed
blaclist.conf, forgetting the k.

I am using squid v3.1.19...

Hope this help.




Ebed Tang  http://www.facebook.com/ebedtang  ebed...@gmail.com
