[squid-users] R: [squid-users] squid 3.1.x with IIS SharePoint as back-end.

2012-01-11 Thread Guido Serassio
Hi,

Look at this bug:
http://bugs.squid-cache.org/show_bug.cgi?id=3141

Likely it's the same problem.
I hope that it will be fixed in the upcoming 3.2.

Regards

Guido Serassio
Acme Consulting S.r.l.
Microsoft Silver Certified Partner
VMware Professional Partner
Via Lucia Savarino, 1 - 10098 Rivoli (TO) - ITALY
Tel. : +39.011.9530135   Fax. : +39.011.9781115
Email: guido.seras...@acmeconsulting.it
WWW: http://www.acmeconsulting.it


 -Original Message-
 From: kimi ge(巍俊葛) [mailto:weiju...@gmail.com]
 Sent: Wednesday, 11 January 2012 8:47
 To: Amos Jeffries
 Cc: squid-users@squid-cache.org
 Subject: Re: [squid-users] squid 3.1.x with IIS SharePoint as back-end.
 
 Thanks Amos.
 
 I did the lynx test on the back-end web site from the squid system like this:
 sudo lynx http://wtestsm1.asiapacific.hpqcorp.net
 
 First, it shows the message:
 Alert!: Invalid header 'WWW-Authenticate: NTLM'
 
 Then it shows the following message:
 Show the 401 message body? (y/n)
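 
 (Roughly the same check can be done without lynx; a sketch, assuming the
 back-end host above is reachable from the box running the check:
   curl -sI http://wtestsm1.asiapacific.hpqcorp.net/ | grep -i www-authenticate
 A line such as "WWW-Authenticate: NTLM" in the reply confirms the back-end
 is asking for NTLM, i.e. connection-oriented auth.)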
 
 For the domain auth, I mean the back-end web site needs a corp domain
 user to access it.
 To put it another way: if I log on to my laptop with my corp domain account,
 then I can access the IIS SharePoint site without any credentials window
 popping up. If not, I have to enter my domain account in a credentials
 window to access the SharePoint site.
 
 
 The following is my squid configuration for this case; I have omitted
 some default sections.
 #added by kimi
 acl hpnet src 16.0.0.0/8   # RFC1918 possible internal network
 #added by kimi
 acl origin_servers dstdomain ids-ams.elabs.eds.com
 http_access allow origin_servers
 http_access allow hpnet
 
 http_port 192.85.142.88:80 accel defaultsite=ids-ams.elabs.eds.com
 connection-auth=on
 
 forwarded_for on
 
 request_header_access WWW-Authenticate allow all
 
 cache_peer wtestsm1.asiapacific.hpqcorp.net parent 80 0 no-query
 no-digest originserver name=main connection-auth=on login=PASS
 
 cache_peer_domain main .elabs.eds.com
 
 hierarchy_stoplist cgi-bin ?
 
 coredump_dir /var/spool/squid
 
 # Add any of your own refresh_pattern entries above these.
 refresh_pattern ^ftp:   1440    20% 10080
 refresh_pattern ^gopher: 1440   0%  1440
 refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
 refresh_pattern .   0   20% 4320
 
 cache_dir aufs /data/squid/cache 12000 64 256
 cache_mem 1024 MB
 maximum_object_size_in_memory 1024 KB
 maximum_object_size 51200 KB
 
 visible_hostname ids-ams.elabs.eds.com
 debug_options ALL,5
 http_access deny all
 
 While squid is running, I test like this:
 http://ids-ams.elabs.eds.com
 
 The 404 error page is shown.
 That's why I am wondering whether squid can work as a reverse-proxy with IIS
 SharePoint as the back-end.
 
 Thanks,
 ~Kimi
 
 
 
 On 11/01/2012, Amos Jeffries squ...@treenet.co.nz wrote:
  On 11/01/2012 6:28 p.m., kimi ge(巍俊葛) wrote:
  Hi,
 
  I have an issue making squid 3.1.x work with IIS SharePoint as the back-end.
  The details are listed below.
 
  1. squid 3.1.x is running as a reverse-proxy.
  2. The back-end is IIS SharePoint Site with domain authentication
  required.
  That means only a valid domain user can access this SharePoint site.
  The issue is it always returns a 404 error page, and the logon window is never prompted.
 
  What is this domain authentication you mention? All of the HTTP auth
  mechanisms count as domain auth to a reverse proxy, and none of them
  are named Domain.
 
 
  My question is whether squid supports this kind of case or not.
  If it does, how should I configure the squid.conf file?
 
Thanks in advance.
~Kimi
 
  404 status is about the resource being requested _not existing_. Login
  only operates when there is something to be authorized fetching. So I
  think auth is not relevant at this point in your testing.
 
  Probably the URL being passed to IIS is not what you are expecting to be
  passed and IIS is not setup to handle it. You will need to share your
  squid.conf details for more help.
 
  Amos
 


[squid-users] Re : [squid-users] Re : [squid-users] Anonymous FTP and login pass url based

2012-01-11 Thread Al Batard
Hi,

I tried debug_options 9,9, and the first step performed is the anonymous 
login (not the user/password from the URL, if present). The user/password 
is only tried afterwards, if the anonymous authentication fails. If the FTP 
site accepts both anonymous and user/password logins and the anonymous 
connection succeeds, the user/password authentication is never performed.

Seen in Squid 3.1.11 and 3.1.18.

Thanks,

Guillaume


- Original Message -
From: Amos Jeffries squ...@treenet.co.nz
To: squid-users@squid-cache.org
Cc: 
Sent: Wednesday 28 December 2011 3:39
Subject: [squid-users] Re : [squid-users] Anonymous FTP and login pass url based

On 28/12/2011 1:02 a.m., Al Batard wrote:
 Hi and thanks for your answers,
 
 If I understand correctly, this is a bug in the order of FTP authentication?

Yes, though exactly what is unknown. Which Squid version are you seeing it in?

And can you get an FTP section level-9 debug trace? It should show the exact 
username processing steps performed, with both encoded and decoded user/pass, 
so be careful about replying here with any of it.
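
In squid.conf terms that trace is something like the following (a sketch 
only; section 9 is the FTP code, ALL,1 keeps the other sections quiet):

  debug_options ALL,1 9,9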

Amos



[squid-users] receiving email from squid users list

2012-01-11 Thread alex sharaz
Not a squid query, but I was under the impression that I should receive  
emails from the squid list, having subscribed to it. At the moment the  
only way I can see whether anyone has replied to a posting is to use a  
browser to look at the mail archive. Who would I report this to?

rgds
Alex


[squid-users] Silly warning about over disk limits

2012-01-11 Thread alex sharaz

Getting  the following on my 3.2...79 snapshot:-

2012/01/11 10:18:30 kid2| NETDB state saved; 142 entries, 135 msec
2012/01/11 10:18:39 kid1| WARNING: Disk space over limit:  
5258011484356608.00 KB > 1048576 KB
2012/01/11 10:18:50 kid1| WARNING: Disk space over limit:  
5258011484356608.00 KB > 1048576 KB
2012/01/11 10:19:01 kid1| WARNING: Disk space over limit:  
5258011484356608.00 KB > 1048576 KB
2012/01/11 10:19:12 kid1| WARNING: Disk space over limit:  
5258011484356608.00 KB > 1048576 KB
2012/01/11 10:19:23 kid1| WARNING: Disk space over limit:  
5258011484356608.00 KB > 1048576 KB
2012/01/11 10:19:34 kid1| WARNING: Disk space over limit:  
5258011484356608.00 KB > 1048576 KB


Config file has

#
# o.k. create a disk directory for every squid process under /cache
#
cache_dir aufs /usr/local/squid/var/cache/${process_number} 1024 64 256

As this is a test cache, just putting squid cache in a directory off  
root


root@slb-realsrv1-east:/usr/local/squid/etc# df
Filesystem   1K-blocks  Used Available Use% Mounted on
/dev/mapper/slb--realsrv1--east-root
  33285936  19623748  11971360  63% /
tmpfs  4041632 0   4041632   0% /lib/init/rw
varrun 4041632   104   4041528   1% /var/run
varlock4041632 0   4041632   0% /var/lock
udev   4041632   152   4041480   1% /dev
tmpfs  4041632 61272   3980360   2% /dev/shm
/dev/sda5   225806 98192115567  46% /boot
tmpfs  4041632  2560   4039072   1% /lib/modules/2.6.28-19-server/volatile









==
Time for another Macmillan Cancer Support event. This time it's the 12  
day Escape to Africa challenge.


View route at 
http://maps.google.co.uk/maps/ms?ie=UTF8&hl=en&msa=0&msid=203779866436035016780.00049e867720273b73c39&z=8

Please sponsor me at http://www.justgiving.com/Alex-Sharaz





Re: [squid-users] receiving email from squid users list

2012-01-11 Thread Amos Jeffries

On 11/01/2012 10:58 p.m., alex sharaz wrote:
Not a squid query, but I was under the impression that I should receive 
emails from the squid list, having subscribed to it. At the moment the 
only way I can see whether anyone has replied to a posting is to use a 
browser to look at the mail archive. Who would I report this to?

rgds
Alex


n...@squid-cache.org or if that has problems i...@squid-cache.org.
 cc'd over there now.

Amos


Re: [squid-users] Silly warning about over disk limits

2012-01-11 Thread Jose-Marcio Martins da Cruz


I had this too, but with 3.1.18 on a production server (under Solaris).

2012/01/10 10:11:54| WARNING: Disk space over limit: 1644356946 KB > 5120 KB

Sometimes the server hangs, and it seems to me that these are related, but I don't yet have 
enough data to say anything. I was just waiting for more info before posting a message here.


alex sharaz wrote:

Getting  the following on my 3.2...79 snapshot:-

2012/01/11 10:18:30 kid2| NETDB state saved; 142 entries, 135 msec
2012/01/11 10:18:39 kid1| WARNING: Disk space over limit:
5258011484356608.00 KB > 1048576 KB
2012/01/11 10:18:50 kid1| WARNING: Disk space over limit:
5258011484356608.00 KB > 1048576 KB
2012/01/11 10:19:01 kid1| WARNING: Disk space over limit:
5258011484356608.00 KB > 1048576 KB
2012/01/11 10:19:12 kid1| WARNING: Disk space over limit:
5258011484356608.00 KB > 1048576 KB
2012/01/11 10:19:23 kid1| WARNING: Disk space over limit:
5258011484356608.00 KB > 1048576 KB
2012/01/11 10:19:34 kid1| WARNING: Disk space over limit:
5258011484356608.00 KB > 1048576 KB

Config file has

#
# o.k. create a disk directory for every squid process under /cache
#
cache_dir aufs /usr/local/squid/var/cache/${process_number} 1024 64 256

As this is a test cache, just putting squid cache in a directory off root

root@slb-realsrv1-east:/usr/local/squid/etc# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/slb--realsrv1--east-root
33285936 19623748 11971360 63% /
tmpfs 4041632 0 4041632 0% /lib/init/rw
varrun 4041632 104 4041528 1% /var/run
varlock 4041632 0 4041632 0% /var/lock
udev 4041632 152 4041480 1% /dev
tmpfs 4041632 61272 3980360 2% /dev/shm
/dev/sda5 225806 98192 115567 46% /boot
tmpfs 4041632 2560 4039072 1% /lib/modules/2.6.28-19-server/volatile








==
Time for another Macmillan Cancer Support event. This time it's the 12
day Escape to Africa challenge.

View route at
http://maps.google.co.uk/maps/ms?ie=UTF8&hl=en&msa=0&msid=203779866436035016780.00049e867720273b73c39&z=8


Please sponsor me at http://www.justgiving.com/Alex-Sharaz







Re: [squid-users] Silly warning about over disk limits

2012-01-11 Thread Amos Jeffries

On 11/01/2012 11:24 p.m., alex sharaz wrote:

Getting  the following on my 3.2...79 snapshot:-

2012/01/11 10:18:30 kid2| NETDB state saved; 142 entries, 135 msec
2012/01/11 10:18:39 kid1| WARNING: Disk space over limit: 
5258011484356608.00 KB > 1048576 KB
2012/01/11 10:18:50 kid1| WARNING: Disk space over limit: 
5258011484356608.00 KB > 1048576 KB
2012/01/11 10:19:01 kid1| WARNING: Disk space over limit: 
5258011484356608.00 KB > 1048576 KB
2012/01/11 10:19:12 kid1| WARNING: Disk space over limit: 
5258011484356608.00 KB > 1048576 KB
2012/01/11 10:19:23 kid1| WARNING: Disk space over limit: 
5258011484356608.00 KB > 1048576 KB
2012/01/11 10:19:34 kid1| WARNING: Disk space over limit: 
5258011484356608.00 KB > 1048576 KB




This is already being tracked as 
http://bugs.squid-cache.org/show_bug.cgi?id=3441


Amos


[squid-users] Error validating user via Negotiate. Error returned 'BH received type 1 NTLM token'

2012-01-11 Thread Muhammet Can
Hi all,

I have been trying to get squid running with kerberos auth for a few
days but I'm in some trouble. The problem has been asked and answered
many times on both the squid-users list and on the web; I have read
them all and tried to solve the problem, but still no luck.

Here are some of my log files and tests.
(config files are prepared with using wiki;
http://wiki.squid-cache.org/ConfigExamples/Authenticate/Kerberos)

-- tail -f cache.log
2012/01/11 11:54:06| squid_kerb_auth: DEBUG: Got 'YR
TlRMTVNTUAABl4II4gAGAbEdDw==' from squid
(length: 59).
2012/01/11 11:54:06| squid_kerb_auth: DEBUG: Decode
'TlRMTVNTUAABl4II4gAGAbEdDw==' (decoded
length: 40).
2012/01/11 11:54:06| squid_kerb_auth: WARNING: received type 1 NTLM token
2012/01/11 11:54:06| authenticateNegotiateHandleReply: Error
validating user via Negotiate. Error returned 'BH received type 1 NTLM
token'

-- tail -f access.log
192.168.0.147 - - [11/Jan/2012:11:54:08 +0200] GET
http://www.google.com.tr/ HTTP/1.1 407 1524 TCP_DENIED:NONE
192.168.0.147 - - [11/Jan/2012:11:54:08 +0200] GET
http://www.google.com.tr/ HTTP/1.1 407 1524 TCP_DENIED:NONE

I have tested kerberos on the server side with;

-- klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: administra...@labristest.com

-- kinit -V -k -t /opt/labris/etc/labris-webcache/HTTP.keytab
HTTP/test2008.labristest.com
Authenticated to Kerberos v5

And, on the client side, I have used kerbtray; it seems the client has the tickets.

I have captured the packets with wireshark as suggested in some of the
earlier messages; it looks like the client still tries to authenticate
with NTLM while we want to use kerberos.

Here are some parts of the wireshark log;
(if needed, you can get the full log from here: http://pastebin.com/btp9PzYu )

client to server;
Hypertext Transfer Protocol
    GET http://www.google.com.tr/ HTTP/1.1\r\n
        [Expert Info (Chat/Sequence): GET http://www.google.com.tr/
HTTP/1.1\r\n]
        Request Method: GET
        Request URI: http://www.google.com.tr/
        Request Version: HTTP/1.1
    Host: www.google.com.tr\r\n
    User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:8.0) Gecko/20100101
Firefox/8.0\r\n
    Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8\r\n
    Accept-Language: tr-tr,tr;q=0.8,en-us;q=0.5,en;q=0.3\r\n
    Accept-Encoding: gzip, deflate\r\n
    Accept-Charset: ISO-8859-9,utf-8;q=0.7,*;q=0.7\r\n
    Proxy-Connection: keep-alive\r\n


server reply;
Hypertext Transfer Protocol
    HTTP/1.0 407 Proxy Authentication Required\r\n
        [Expert Info (Chat/Sequence): HTTP/1.0 407 Proxy
Authentication Required\r\n]
        Request Version: HTTP/1.0
        Status Code: 407
        Response Phrase: Proxy Authentication Required
    Server: squid/3.1.12\r\n
    Mime-Version: 1.0\r\n
    Date: Wed, 11 Jan 2012 11:28:01 GMT\r\n
    Content-Type: text/html\r\n
    Content-Length: 1152\r\n
    X-Squid-Error: ERR_CACHE_ACCESS_DENIED 0\r\n
    Proxy-Authenticate: Negotiate\r\n
    X-Cache: MISS from labris-1\r\n
    X-Cache-Lookup: NONE from labris-1:3128\r\n
    Via: 1.0 labris-1 (squid/3.1.12)\r\n
    Connection: keep-alive\r\n
    \r\n


client tries authentication;
Hypertext Transfer Protocol
    GET http://www.google.com.tr/ HTTP/1.1\r\n
        [Expert Info (Chat/Sequence): GET http://www.google.com.tr/
HTTP/1.1\r\n]
        Request Method: GET
        Request URI: http://www.google.com.tr/
        Request Version: HTTP/1.1
    Host: www.google.com.tr\r\n
    User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:8.0) Gecko/20100101
Firefox/8.0\r\n
    Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8\r\n
    Accept-Language: tr-tr,tr;q=0.8,en-us;q=0.5,en;q=0.3\r\n
    Accept-Encoding: gzip, deflate\r\n
    Accept-Charset: ISO-8859-9,utf-8;q=0.7,*;q=0.7\r\n
    Proxy-Connection: keep-alive\r\n
    Proxy-Authorization: Negotiate
TlRMTVNTUAABl4II4gAGAbEdDw==\r\n
        NTLM Secure Service Provider
            NTLMSSP identifier: NTLMSSP
            NTLM Message Type: NTLMSSP_NEGOTIATE (0x0001)
            Flags: 0xe2088297
            Calling workstation domain: NULL
            Calling workstation name: NULL
            Version 6.1 (Build 7601); NTLM Current Revision 15
                Major Version: 6
                Minor Version: 1
                Build Number: 7601
                NTLM Current Revision: 15


Please treat me as a newbie;
I'd really appreciate a detailed solution to get squid working with
kerberos, and an idea of what may be causing the problem.

Thanks in advance.

-- 
code is poetry!
muhammetcan.net


Re: [squid-users] squid 3.1.x with IIS SharePoint as back-end.

2012-01-11 Thread Amos Jeffries

On 11/01/2012 8:46 p.m., kimi ge(巍俊葛) wrote:

Thanks Amos.

I did the lynx test on the back-end web site from the squid system like this:
sudo lynx http://wtestsm1.asiapacific.hpqcorp.net

First, it shows the message:
Alert!: Invalid header 'WWW-Authenticate: NTLM'

Then it shows the following message:
Show the 401 message body? (y/n)


Aha. NTLM authentication. Very probably that login=PASS then.



For the domain auth, I mean the back-end web site needs a corp domain
user to access it.
To put it another way: if I log on to my laptop with my corp domain account,
then I can access the IIS SharePoint site without any credentials window
popping up. If not, I have to enter my domain account in a credentials
window to access the SharePoint site.


The following is my squid configuration for this case; I have omitted
some default sections.
#added by kimi
acl hpnet src 16.0.0.0/8   # RFC1918 possible internal network
#added by kimi
acl origin_servers dstdomain ids-ams.elabs.eds.com
http_access allow origin_servers
http_access allow hpnet

http_port 192.85.142.88:80 accel defaultsite=ids-ams.elabs.eds.com
connection-auth=on

forwarded_for on

request_header_access WWW-Authenticate allow all


This is not needed. The Squid default is to relay www-auth headers 
through. www-authenticate is a reply header anyway, to inform the client 
agent what types of auth it can use.




cache_peer wtestsm1.asiapacific.hpqcorp.net parent 80 0 no-query
no-digest originserver name=main connection-auth=on login=PASS


connection-auth=on should be enough. Try without login=PASS.
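
For example, the peer line above would then become something like this 
(a sketch based on the config quoted above, untested):

  cache_peer wtestsm1.asiapacific.hpqcorp.net parent 80 0 no-query no-digest originserver name=main connection-auth=on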



cache_peer_domain main .elabs.eds.com

hierarchy_stoplist cgi-bin ?

coredump_dir /var/spool/squid

# Add any of your own refresh_pattern entries above these.
refresh_pattern ^ftp:   1440    20% 10080
refresh_pattern ^gopher: 1440   0%  1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern .   0   20% 4320

cache_dir aufs /data/squid/cache 12000 64 256
cache_mem 1024 MB
maximum_object_size_in_memory 1024 KB
maximum_object_size 51200 KB

visible_hostname ids-ams.elabs.eds.com
debug_options ALL,5
http_access deny all

While squid is running, I test like this:
http://ids-ams.elabs.eds.com

The 404 error page is shown.


Okay. Which error page?  Squid sends three different ones with that 
status code. Invalid request or Invalid URL or something else?



That's why I am wondering whether squid can work as a reverse-proxy with IIS
SharePoint as the back-end.


It can be. There is normally no trouble. But the newer features MS have 
been adding for IPv6 and cloud support recently are not widely tested yet.


Amos


Re: [squid-users] squid 3.1.x with IIS SharePoint as back-end.

2012-01-11 Thread 巍俊葛
Hi Amos,

Really appreciate your help.

I made the changes per your suggestion.

Some debug logs are here:

2012/01/11 13:21:58.167| The request GET http://ids-ams.elabs.eds.com/
is ALLOWED, because it matched 'origin_servers'

2012/01/11 13:21:58.168| client_side_request.cc(547)
clientAccessCheck2: No adapted_http_access configuration.

2012/01/11 13:21:58.168| The request GET http://ids-ams.elabs.eds.com/
is ALLOWED, because it matched 'origin_servers'

2012/01/11 13:21:58.170| ipcacheMarkBadAddr:
wtestsm1.asiapacific.hpqcorp.net 16.173.232.237:80

2012/01/11 13:21:58.171| TCP connection to
wtestsm1.asiapacific.hpqcorp.net/80 failed

2012/01/11 13:21:58.171| fwdServerClosed: FD 11 http://ids-ams.elabs.eds.com/

2012/01/11 13:21:58.177| ipcacheMarkBadAddr:
wtestsm1.asiapacific.hpqcorp.net 16.173.232.237:80

2012/01/11 13:21:58.177| TCP connection to
wtestsm1.asiapacific.hpqcorp.net/80 failed

2012/01/11 13:21:58.177| fwdServerClosed: FD 11 http://ids-ams.elabs.eds.com/

2012/01/11 13:21:58.183| ipcacheMarkBadAddr:
wtestsm1.asiapacific.hpqcorp.net 16.173.232.237:80

2012/01/11 13:21:58.184| TCP connection to
wtestsm1.asiapacific.hpqcorp.net/80 failed

2012/01/11 13:21:58.184| fwdServerClosed: FD 11 http://ids-ams.elabs.eds.com/

2012/01/11 13:21:58.190| ipcacheMarkBadAddr:
wtestsm1.asiapacific.hpqcorp.net 16.173.232.237:80

2012/01/11 13:21:58.191| TCP connection to
wtestsm1.asiapacific.hpqcorp.net/80 failed

2012/01/11 13:21:58.191| fwdServerClosed: FD 11 http://ids-ams.elabs.eds.com/

2012/01/11 13:21:58.197| ipcacheMarkBadAddr:
wtestsm1.asiapacific.hpqcorp.net 16.173.232.237:80

2012/01/11 13:21:58.197| TCP connection to
wtestsm1.asiapacific.hpqcorp.net/80 failed

2012/01/11 13:21:58.197| fwdServerClosed: FD 11 http://ids-ams.elabs.eds.com/

2012/01/11 13:21:58.203| ipcacheMarkBadAddr:
wtestsm1.asiapacific.hpqcorp.net 16.173.232.237:80

2012/01/11 13:21:58.204| TCP connection to
wtestsm1.asiapacific.hpqcorp.net/80 failed

2012/01/11 13:21:58.204| fwdServerClosed: FD 11 http://ids-ams.elabs.eds.com/

2012/01/11 13:21:58.210| ipcacheMarkBadAddr:
wtestsm1.asiapacific.hpqcorp.net 16.173.232.237:80

2012/01/11 13:21:58.210| TCP connection to
wtestsm1.asiapacific.hpqcorp.net/80 failed

2012/01/11 13:21:58.210| fwdServerClosed: FD 11 http://ids-ams.elabs.eds.com/

2012/01/11 13:21:58.216| ipcacheMarkBadAddr:
wtestsm1.asiapacific.hpqcorp.net 16.173.232.237:80

2012/01/11 13:21:58.216| TCP connection to
wtestsm1.asiapacific.hpqcorp.net/80 failed

2012/01/11 13:21:58.217| fwdServerClosed: FD 11 http://ids-ams.elabs.eds.com/

2012/01/11 13:21:58.222| ipcacheMarkBadAddr:
wtestsm1.asiapacific.hpqcorp.net 16.173.232.237:80

2012/01/11 13:21:58.223| TCP connection to
wtestsm1.asiapacific.hpqcorp.net/80 failed

2012/01/11 13:21:58.223| fwdServerClosed: FD 11 http://ids-ams.elabs.eds.com/

2012/01/11 13:21:58.229| ipcacheMarkBadAddr:
wtestsm1.asiapacific.hpqcorp.net 16.173.232.237:80

2012/01/11 13:21:58.229| TCP connection to
wtestsm1.asiapacific.hpqcorp.net/80 failed

2012/01/11 13:21:58.229| Detected DEAD Parent: main

2012/01/11 13:21:58.229| fwdServerClosed: FD 11 http://ids-ams.elabs.eds.com/

2012/01/11 13:21:58.235| ipcacheMarkBadAddr:
wtestsm1.asiapacific.hpqcorp.net 16.173.232.237:80

2012/01/11 13:21:58.236| TCP connection to
wtestsm1.asiapacific.hpqcorp.net/80 failed

2012/01/11 13:21:58.236| TCP connection to
wtestsm1.asiapacific.hpqcorp.net/80 dead

2012/01/11 13:21:58.236| fwdServerClosed: FD 11 http://ids-ams.elabs.eds.com/

2012/01/11 13:21:58.238| The reply for GET
http://ids-ams.elabs.eds.com/ is ALLOWED, because it matched 'all'

2012/01/11 13:21:58.240| ConnStateData::swanSong: FD 9

2012/01/11 13:22:07.406| The request GET http://ids-ams.elabs.eds.com/
is ALLOWED, because it matched 'origin_servers'

2012/01/11 13:22:07.406| client_side_request.cc(547)
clientAccessCheck2: No adapted_http_access configuration.

2012/01/11 13:22:07.406| The request GET http://ids-ams.elabs.eds.com/
is ALLOWED, because it matched 'origin_servers'

2012/01/11 13:22:07.407| ipcacheMarkBadAddr:
wtestsm1.asiapacific.hpqcorp.net 16.173.232.237:80

2012/01/11 13:22:07.408| Failed to select source for
'http://ids-ams.elabs.eds.com/'

2012/01/11 13:22:07.408|   always_direct = 0

2012/01/11 13:22:07.408|never_direct = 0

2012/01/11 13:22:07.408|timedout = 0

2012/01/11 13:22:07.410| The reply for GET
http://ids-ams.elabs.eds.com/ is ALLOWED, because it matched 'all'

2012/01/11 13:22:07.410| TCP connection to
wtestsm1.asiapacific.hpqcorp.net/80 dead

2012/01/11 13:22:07.412| ConnStateData::swanSong: FD 9

2012/01/11 13:22:09.381| The request GET http://ids-ams.elabs.eds.com/
is ALLOWED, because it matched 'origin_servers'

2012/01/11 13:22:09.381| client_side_request.cc(547)
clientAccessCheck2: No adapted_http_access configuration.

2012/01/11 13:22:09.381| The request GET http://ids-ams.elabs.eds.com/
is ALLOWED, because it matched 'origin_servers'

2012/01/11 13:22:09.383| ipcacheMarkBadAddr:

[squid-users] Performanceproblem Squid with one URL - strange behaviour ...

2012-01-11 Thread Andreas Schulz
Hi folks,

we have a very special problem with our proxy environment. It concerns
only ONE uri http://www.mediaassetbox.com/client/escada. Other uris are
working properly.

Unfortunately this is a very bad uri because it works only with flash.
Nevertheless our customer is working with it and we have a performance
issue.

If the page starts to load it needs approx. 60-70 seconds until the blue
progress bar under the login field disappears.

If I use another proxy product - eg. IWSS - the page loads in about 30
seconds. Also with a direct internet connection we get this value ...

So far so good - the strange behaviour starts once we start working on the
problem. Starting strace on the squid process, the performance increases to
direct internet connection speed.

Next we started debugging in squid itself - ALL,3 - without strace - and the
performance increases again. Starting with debug section 0 we found out
that 'debug_options 5,3' (or 5,5 ...) makes the performance as fast
as a direct connection.

What we already did without success
- disable ipv6 in os
- strip configuration to minimum
- using a cache_peer parent configuration (the IWSS proxy)
- tried to find out which system calls 'speed up' squid (see
  statistics below)

Now some details about the system:
- OS - Debian Squeeze - Linux xxx 2.6.32-5-amd64 #1 SMP Thu Nov 3 03:41:26 UTC 
2011 x86_64 GNU/Linux
- Squid - 3.1.6-1.2+squeeze2
  Squid Cache: Version 3.1.6
  configure options:  '--build=x86_64-linux-gnu' '--prefix=/usr'
  '--includedir=${prefix}/include' '--mandir=${prefix}/share/man'
  '--infodir=${prefix}/share/info' '--sysconfdir=/etc'
  '--localstatedir=/var' '--libexecdir=${prefix}/lib/squid3'
  '--disable-maintainer-mode' '--disable-dependency-tracking'
  '--disable-silent-rules' '--srcdir=.' '--datadir=/usr/share/squid3'
  '--sysconfdir=/etc/squid3' '--mandir=/usr/share/man'
  '--with-cppunit-basedir=/usr' '--enable-inline' '--enable-async-io=8'
  '--enable-storeio=ufs,aufs,diskd' '--enable-removal-policies=lru,heap'
  '--enable-delay-pools' '--enable-cache-digests' '--enable-underscores'
  '--enable-icap-client' '--enable-follow-x-forwarded-for'
  '--enable-auth=basic,digest,ntlm,negotiate'
  
'--enable-basic-auth-helpers=LDAP,MSNT,NCSA,PAM,SASL,SMB,YP,DB,POP3,getpwnam,squid_radius_auth,multi-domain-NTLM'
  '--enable-ntlm-auth-helpers=smb_lm,'
  '--enable-digest-auth-helpers=ldap,password'
  '--enable-negotiate-auth-helpers=squid_kerb_auth'
  
'--enable-external-acl-helpers=ip_user,ldap_group,session,unix_group,wbinfo_group'
  '--enable-arp-acl' '--enable-esi' '--disable-translation'
  '--with-logdir=/var/log/squid3' '--with-pidfile=/var/run/squid3.pid'
  '--with-filedescriptors=65536' '--with-large-files'
  '--with-default-user=proxy' '--enable-linux-netfilter'
  'build_alias=x86_64-linux-gnu' 'CFLAGS=-g -O2 -g -Wall -O2' 'LDFLAGS='
  'CPPFLAGS=' 'CXXFLAGS=-g -O2 -g -Wall -O2'
  --with-squid=/tmp/buildd/squid3-3.1.6

We can also provide HTTPFox (Firefox extension) lines for fast and slow
connections.

We searched the mailing list and found 
http://www.mail-archive.com/squid-users@squid-cache.org/msg33267.html -
but there was no really helpful information. Other entries don't
match.

We collected the strace statistics only for this session:

% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 97.66    0.004015           1      3209           epoll_wait
  1.24    0.000051           0       145       145 connect
  0.71    0.000029           0       870           recvmsg
  0.22    0.000009           0       912           epoll_ctl
  0.17    0.000007           0       299           getsockname
  0.00    0.000000           0       484         2 read
  0.00    0.000000           0       494           write
  0.00    0.000000           0       444           close
  0.00    0.000000           0       435           socket
  0.00    0.000000           0        16         7 accept
  0.00    0.000000           0       290           sendto
  0.00    0.000000           0       290           bind
  0.00    0.000000           0       290           setsockopt
  0.00    0.000000           0       145           getsockopt
  0.00    0.000000           0       616           fcntl
  0.00    0.000000           0         1           getrusage
------ ----------- ----------- --------- --------- ----------------
100.00    0.004111                  8940       154 total

Our squid config:

***
pid_filename /var/run/squid3-special.pid
http_port 8081

hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
no_cache deny QUERY

refresh_pattern ^ftp:    1440  20%  10080
refresh_pattern ^gopher:  1440  0%   1440
refresh_pattern .         0     20%  4320

#acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/32

acl to_localhost dst 127.0.0.0/8

acl SSL_ports port 443 
acl SSL_ports port 8443
acl SSL_ports port 4643
acl Safe_ports port 80 

Re: [squid-users] Silly warning about over disk limits

2012-01-11 Thread FredB


- Original Message -
 From: Amos Jeffries squ...@treenet.co.nz
 To: squid-users@squid-cache.org
 Sent: Wednesday 11 January 2012 11:45:33
 Subject: Re: [squid-users] Silly warning about over disk limits

 On 11/01/2012 11:24 p.m., alex sharaz wrote:
  Getting  the following on my 3.2...79 snapshot:-
 
  2012/01/11 10:18:30 kid2| NETDB state saved; 142 entries, 135 msec
  2012/01/11 10:18:39 kid1| WARNING: Disk space over limit:
  5258011484356608.00 KB > 1048576 KB
  2012/01/11 10:18:50 kid1| WARNING: Disk space over limit:
  5258011484356608.00 KB > 1048576 KB
  2012/01/11 10:19:01 kid1| WARNING: Disk space over limit:
  5258011484356608.00 KB > 1048576 KB
  2012/01/11 10:19:12 kid1| WARNING: Disk space over limit:
  5258011484356608.00 KB > 1048576 KB
  2012/01/11 10:19:23 kid1| WARNING: Disk space over limit:
  5258011484356608.00 KB > 1048576 KB
  2012/01/11 10:19:34 kid1| WARNING: Disk space over limit:
  5258011484356608.00 KB > 1048576 KB
 

 This is already being tracked as
 http://bugs.squid-cache.org/show_bug.cgi?id=3441

 Amos


For the moment you can add the -S option in your start script; for example I'm 
using SQUID_ARGS=-SsYC


Re: [squid-users] Delay pools and ICAP issue in 3.2

2012-01-11 Thread FredB


 - Original Message -
 From: Alex Crow a...@nanogherkin.com
 To: squid-users@squid-cache.org
 Sent: Sunday 8 January 2012 20:04:46
 Subject: [squid-users] Delay pools and ICAP issue in 3.2

 Hi Amos, all,

 I continue testing 3.2 as promised after a brief hiatus (XP clients,
 NTLM auth, external ACLs on NT groups).

 I am pleased to say that in squid-3.2.0.14-20120106-r11479 that
 previous
 issues with external acls deciding users were in a group that they
 weren't (or the opposite) appear to be resolved, at least from
 testing
 on one or two client machines. I will try to extend this to some more
 users ASAP.

 However I have also seen that with both ICAP (to c-icap/clamav) and
 delay pools that browsing stalls on certain sites, especially on a
 class 3 delay pool with conservative per-client limits (eg 200kB/s
 per-client rate, 100kB/s refill). For instance, if I load http://bbc.co.uk/news
 and
 then play a video from that site, then attempt to load a main page
 from
 that site in another tab in Firefox, that tab will just remain a
 blank
 page although the logs do show a few items being processed. If I
 either turn off ICAP, or turn off delay pools, all seems well. In fact, if I
 just use a single class 1 pool limiting to 100MB/s it also seems fine.

 Delay pools also fail with ICAP unless I exclude streaming media, in
 particular mime type application/x-fcs from ICAP. If I don't exclude
 such things, it is very rare that I can load bbc.co.uk/news at all in
 Firefox. I used some reasonable debug_options settings to try to
 detect
 the problem. I don't see any errors, but the browser just shows
 "Waiting for hostname" for an hour (as long as I left it) and the squid
 cache/access.log show nothing happening.

 I notice a couple of other posts about this. Is this a known problem
 in
 3.2? If not, please provide appropriate debug_options settings and
 I'll
 try to get logs for you in the next 2-4 weeks (I'm afraid I have a
 lot
 on over this time).

 Thanks

 Alex


Maybe there is a link with this http://bugs.squid-cache.org/show_bug.cgi?id=3462


[squid-users] Configuring Squid LDAP Authentication

2012-01-11 Thread berry guru
I used the following tutorial online to configure Squid to
authenticate with AD, but I still can't get this working.  As most
have seen, I also used a tutorial written by one of our mailing list
members and that didn't work.  Are others having this much trouble
getting Squid to authenticate with their Active Directory server?  So
frustrating!

Configuring Squid LDAP Authentication

The first step is to configure Squid to authenticate
usernames/passwords with the Active Directory. You will need to open
your Squid configuration file (squid.conf) and make the following
changes:

Find the auth param section of the config file (TAG: auth_param), and
change the auth param basic program line to look like this. (Indented
text indicates one line)

auth_param basic program /usr/lib/squid/ldap_auth -R
-b dc=vm-domain,dc=papercut,dc=com
-D cn=Administrator,cn=Users,dc=your,dc=domain,dc=com
-w password -f sAMAccountName=%s -h 192.168.1.75
auth_param basic children 5
auth_param basic realm Your Organisation Name
auth_param basic credentialsttl 5 minutes

These settings tell Squid to authenticate usernames/passwords against the Active Directory.

The -b option indicates the LDAP base distinguished name of your
domain. E.g. your.domain.com would be dc=your,dc=domain,dc=com
The -D option indicates the user that is used to perform the LDAP
query (e.g. an Administrator). This example uses the built-in
Administrator user, however you can use another user of your choice.
The -w option is the password for the user specified in the -D
option. For better security you can store the password in a file and
use the -W /path/to/password_file syntax instead.
-h is used to indicate the LDAP server to connect to. E.g. your
domain controller.
-R is needed to make Squid authenticate against Windows AD.
The -f option is the LDAP query used to look up the user. In the
above example, sAMAccountName=%s will match if the user's Windows
logon name matches the username entered when prompted by Squid. You
can search on any value in the LDAP filter query. You may need to use an
LDAP search query tool to help get the syntax correct for the -f
search filter.
The %s is replaced with what the user enters as their username.

Remember to restart Squid for these changes to come into effect.
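
A quick way to check the helper outside Squid is to run it by hand and type 
"username password" on one line; it answers OK or ERR. A sketch, reusing the 
example values above (substitute your real bind DN, password and server):

  /usr/lib/squid/ldap_auth -R -b dc=vm-domain,dc=papercut,dc=com -D cn=Administrator,cn=Users,dc=your,dc=domain,dc=com -w password -f sAMAccountName=%s -h 192.168.1.75
  someuser somepassword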


Re: [squid-users] Configuring Squid LDAP Authentication

2012-01-11 Thread Carlos Manuel Trepeu Pupo
With that tutorial from papercut I just configured my LDAP auth and
everything works great; post your .conf and the version of squid.

On Wed, Jan 11, 2012 at 1:30 PM, berry guru berryg...@gmail.com wrote:
 first s


Re: [squid-users] Configuring Squid LDAP Authentication

2012-01-11 Thread berry guru
Thanks for the response Carlos!  So I've copied and pasted the part of
the configuration I modified.  Let me know if I should post all the
config.  I'm running Squid 2.7

auth_param basic program /usr/lib/squid/ldap_auth -R -b
dc=cyberdyne,dc=local -D
cn=Administrator,cn=Users,dc=cyberdyne,dc=local -w passwordhere -f
sAMAccountName=%s -h 192.168.100.237
auth_param basic children 5
auth_param basic realm CYBERDYNE.LOCAL
auth_param basic credentialsttl 5 minutes



On Wed, Jan 11, 2012 at 10:35 AM, Carlos Manuel Trepeu Pupo
charlie@gmail.com wrote:
 With that tutorial from papercut I just configured my LDAP auth and
 everything works great; post your .conf and the version of squid.

 On Wed, Jan 11, 2012 at 1:30 PM, berry guru berryg...@gmail.com wrote:
 first s


[squid-users] Re: Configuring Squid LDAP Authentication

2012-01-11 Thread berry guru
I wanted to test something, but I'm not quite sure how to do it.  I want
to see if my Intranet users can authenticate when they go to
'companyname-intranet' and are prompted for a login.  When I enable
the proxy I'm unable to log in to the Intranet, but when I disable the
proxy I can log in.  So I'm thinking it's an issue with Squid and I need
to add something to Squid to allow authentication.  Am I incorrect in
this assessment?  If so, how do I go about allowing access to that
site?  Do I do this via an ACL?

On Wed, Jan 11, 2012 at 10:30 AM, berry guru berryg...@gmail.com wrote:
 I used the following tutorial online to configure Squid to
 authenticate with AD, but I still can't get this working.  As most
 have seen, I also used a tutorial written by one of our mailing list
 members and that didn't work.  Are others having this much trouble
 getting Squid to authenticate with their Active Directory server?  So
 frustrating!

 Configuring Squid LDAP Authentication

 The first step is to configure Squid to authenticate
 usernames/passwords with the Active Directory. You will need to open
 your Squid configuration file (squid.conf) and make the following
 changes:

 Find the auth param section of the config file (TAG: auth_param), and
 change the auth param basic program line to look like this. (Indented
 text indicates one line)

    auth_param basic program /usr/lib/squid/ldap_auth -R
        -b dc=vm-domain,dc=papercut,dc=com
        -D cn=Administrator,cn=Users,dc=your,dc=domain,dc=com
        -w password -f sAMAccountName=%s -h 192.168.1.75
    auth_param basic children 5
    auth_param basic realm Your Organisation Name
    auth_param basic credentialsttl 5 minutes

 These settings tell Squid to authenticate usernames/passwords against the Active 
 Directory.

    The -b option indicates the LDAP base distinguished name of your
 domain. E.g. your.domain.com would be dc=your,dc=domain,dc=com
    The -D option indicates the user that is used to perform the LDAP
 query (e.g. an Administrator). This example uses the built-in
 Administrator user, however you can use another user of your choice.
    The -w option is the password for the user specified in the -D
 option. For better security you can store the password in a file and
 use the -W /path/to/password_file syntax instead.
    -h is used to indicate the LDAP server to connect to. E.g. your
 domain controller.
    -R is needed to make Squid authenticate against Windows AD.
    The -f option is the LDAP query used to look up the user. In the
 above example, sAMAccountName=%s will match if the user's Windows
 logon name matches the username entered when prompted by Squid. You
 can search on any value in the LDAP filter query. You may need to use an
 LDAP search query tool to help get the syntax correct for the -f
 search filter.
    The %s is replaced with what the user enters as their username.

 Remember to restart Squid for these changes to come into effect.


[squid-users] Unsupported versions of Squid

2012-01-11 Thread Joshua Brown
Greetings, 

I am looking for a list stating the supported vs. unsupported versions of Squid, 
i.e., which versions of Squid are no longer receiving bug fixes or updates. Can 
anyone provide that information or a link to such a list?

Thanks!
jmb


[squid-users] Filtering access.log

2012-01-11 Thread Momen, Mazdak
We have a couple of common requests we would like to not have logged in our 
access.log file to save space. Is there a way to filter the access.log through 
the squid.conf?


Re: [squid-users] Unsupported versions of Squid

2012-01-11 Thread Amos Jeffries

On 12.01.2012 09:07, Joshua Brown wrote:

Greetings,

I am looking for a list stating the supported vs. unsupported version
of Squid, i.e., which versions of Squid are no longer receiving bug
fixes or updates. Can anyone provide that information or a link to
such a list?

Thanks!
jmb



The Squid Project supported versions are detailed at 
http://www.squid-cache.org/Versions/. Each of these series has a 
number of bug fix releases which can be found linked off the series 
number on that page.


If you need updates on releases and major Squid Project events sign up 
to the announce@ mailing list.


Right now, the status of Squid in active use are:
 * 2.5 - old.
 * 2.6 - old.
 * 2.7 - deprecated stable.
 * 3.0 - old.
 * 3.1 - stable.
 * 3.2 - beta development.
 * 3.3 (aka 3.HEAD or trunk) - alpha development.

key:
 old = OS vendors and commercial paid support only (if any). Security 
vulnerability fixes only if there are vendor distributions needing them.


 deprecated stable = configuration support and security vulnerability 
fixes only.


 stable = close to full Squid Project community support. Paid 
commercial vendors take up the niche support areas.


 beta =  Project support through squid-dev, and limited configuration 
support from squid-users.


 alpha = Project support via squid-dev only. This is *only* for people 
wanting cutting edge features and willing to put up with sometimes major 
bugs.



NOTE:
 compile issues and identifiable bugs are supported only through 
bugs.squid-cache.org and squid-dev (contrary to popular belief anyone 
can post, it just passes a moderator first). As the current maintainer I 
have a very bad habit of jumping on problems mentioned here as well.


 Most of the OS distributions do their own cycle of package support 
including versions we don't support here. Paid commercial vendors 
likewise. You will have to find the particular vendors you are 
interested in to find out their support schedules.



HTH
Amos


Re: [squid-users] squid 3.1.x with IIS SharePoint as back-end.

2012-01-11 Thread Amos Jeffries

On 12.01.2012 02:28, kimi ge wrote:

Hi Amos,

Really appreciate your help.

I made the changes per your suggestion.

Some debug logs are here:

2012/01/11 13:21:58.167| The request GET 
http://ids-ams.elabs.eds.com/

is ALLOWED, because it matched 'origin_servers'

2012/01/11 13:21:58.168| client_side_request.cc(547)
clientAccessCheck2: No adapted_http_access configuration.

2012/01/11 13:21:58.168| The request GET 
http://ids-ams.elabs.eds.com/

is ALLOWED, because it matched 'origin_servers'

2012/01/11 13:21:58.170| ipcacheMarkBadAddr:
wtestsm1.asiapacific.hpqcorp.net 16.173.232.237:80

2012/01/11 13:21:58.171| TCP connection to
wtestsm1.asiapacific.hpqcorp.net/80 failed



There you go. Squid is unable to even connect to the IIS server using TCP.

Bit strange that it should use 404 instead of 500 status. But that TCP 
connection failure is the problem.


snip

My squid environment information:
RHEL6.0 64bit.
squid v 3.1.4


A very outdated Squid release version, even for RHEL (which are on 
3.1.8 or so now).


* start with checking your firewall and packet routing configurations 
to ensure that Squid outgoing traffic is actually allowed and able to 
connect to IIS.
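
A quick way to test that from the Squid box itself, as a sketch (the host is 
the one from your cache_peer line):

  curl -v --connect-timeout 5 http://wtestsm1.asiapacific.hpqcorp.net/

If that cannot open the TCP connection either, the problem is in 
routing/firewalling rather than in Squid.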


 * if that does not resolve the problem, please try a newer 3.1 
release. You will likely have to self-build or use a non-RHEL RPM; there 
seem to be no recent packages for RHEL.



Amos



Re: [squid-users] Filtering access.log

2012-01-11 Thread Amos Jeffries

On 12.01.2012 09:30, Momen, Mazdak wrote:

We have a couple of common requests we would like to not have logged
in our access.log file to save space. Is there a way to filter the
access.log through the squid.conf?


Several ways:

* using ACL on the individual log output:
  http://www.squid-cache.org/Doc/config/access_log/

* using ACL on all access log outputs:
  http://www.squid-cache.org/Doc/config/log_access/
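
For example, something like this (a sketch; the ACL name and pattern are 
placeholders for whatever requests you want to keep out of the log):

  acl dontlog urlpath_regex ^/healthcheck$
  access_log /var/log/squid/access.log squid !dontlog

or, to hide them from every log at once:

  log_access deny dontlog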

Amos


[squid-users] Re: Configuring Squid LDAP Authentication

2012-01-11 Thread berry guru
I came across this configuration online, but it still doesn't work.  I
really thought I would have had it with this one, but still no go.

acl lan src 192.168.1.0/25
acl Intranet dstdomain intranet.int
acl lan-intranet dst 192.168.2.2
http_access allow lan
http_access allow Intranet
http_access allow lan-intranet

On Wed, Jan 11, 2012 at 11:37 AM, berry guru berryg...@gmail.com wrote:
 I wanted to test something, but not quite sure how to do it.  I want
 to see if my Intranet users can authenticate when they go to
 'companyname-intranet' and are prompted for a login.  When I enable
 the proxy I'm unable to login to the Intranet, but when I disable the
 proxy I can login.  So I'm thinking its an issue with Squid and I need
 to add something to Squid to allow authentication.  I'm I incorrect in
 this assessment?  If so, how do I go about allowing access to that
 site.  Do I do this via an ACL?

 On Wed, Jan 11, 2012 at 10:30 AM, berry guru berryg...@gmail.com wrote:
 I used the following tutorial online to configure Squid to
 authenticate with AD, but I still can't get this working.  As most
 have seen, I also used a tutorial written by one of our mailing list
 members and that didn't work.  Are others having this much trouble
 getting Squid to authenticate with their Active Directory server?  So
 frustrating!

 Configuring Squid LDAP Authentication

 The first step is to configure Squid to authenticate
 usernames/passwords with the Active Directory. You will need to open
 your Squid configuration file (squid.conf) and make the following
 changes:

 Find the auth param section of the config file (TAG: auth_param), and
 change the auth param basic program line to look like this. (Indented
 text indicates one line)

    auth_param basic program /usr/lib/squid/ldap_auth -R
        -b dc=vm-domain,dc=papercut,dc=com
        -D cn=Administrator,cn=Users,dc=your,dc=domain,dc=com
        -w password -f sAMAccountName=%s -h 192.168.1.75
    auth_param basic children 5
    auth_param basic realm Your Organisation Name
    auth_param basic credentialsttl 5 minutes

 These settings tell Squid to authenticate usernames/passwords against the Active 
 Directory.

    The -b option indicates the LDAP base distinguished name of your
 domain. E.g. your.domain.com would be dc=your,dc=domain,dc=com
    The -D option indicates the user that is used to perform the LDAP
 query (e.g. an Administrator). This example uses the built-in
 Administrator user, however you can use another user of your choice.
    The -w option is the password for the user specified in the -D
 option. For better security you can store the password in a file and
 use the -W /path/to/password_file syntax instead.
    -h is used to indicate the LDAP server to connect to. E.g. your
 domain controller.
    -R is needed to make Squid authenticate against Windows AD.
    The -f option is the LDAP query used to look up the user. In the
 above example, sAMAccountName=%s will match if the user's Windows
 logon name matches the username entered when prompted by Squid. You
 can search on any value in the LDAP filter query. You may need to use an
 LDAP search query tool to help get the syntax correct for the -f
 search filter.
    The %s is replaced with what the user enters as their username.

 Remember to restart Squid for these changes to come into effect.


RE: [squid-users] Filtering access.log

2012-01-11 Thread Momen, Mazdak
Thanks, looking into it, though I think I'm limited by the way I can set up 
ACLs. Here is what I'm trying to filter:

1326325020.543  0 *.*.*.* NONE/400 3502 GET / - NONE/- text/html

The starred IP is the same for every request (all requests pass through a load 
balancer). I don't want to filter out by that IP but maybe by the string of text 
"GET / - NONE/-". Would this be possible?
 

-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Wednesday, January 11, 2012 5:37 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Filtering access.log

On 12.01.2012 09:30, Momen, Mazdak wrote:
 We have a couple of common requests we would like to not have logged
 in our access.log file to save space. Is there a way to filter the
 access.log through the squid.conf?

Several ways:

* using ACL on the individual log output:
   http://www.squid-cache.org/Doc/config/access_log/

* using ACL on all access log outputs:
   http://www.squid-cache.org/Doc/config/log_access/

Amos


Re: [squid-users] Performanceproblem Squid with one URL - strange behaviour ...

2012-01-11 Thread Amos Jeffries

On 12.01.2012 04:25, Andreas Schulz wrote:

Hi folks,

we have a very special problem with our proxy environment. It 
concerns
only ONE uri http://www.mediaassetbox.com/client/escada. Other uris 
are

working properly.

Unfortunately this is a very bad uri because it works only with 
flash.
Nevertheless our customer is working with it and we have a 
performance

issue.

If the page starts to load it needs approx. 60-70 seconds until the blue
progress bar under the login field disappears.

If I use another proxy product - eg. IWSS - the page loads in about 
30

seconds. Also with direct internet connection we have this value ...

So far so good - strange behaviour starts after working on the 
problem.
Starting strace on the squid process - the performance increases to 
direct

internet connection speed.

Next we started debugging in squid itself - ALL,3 - without strace - 
the
performance increases again. Starting with debug section 0 we found 
out
that 'debug_options 5,3' (or 5,5 ...) increases the performance as 
fast

as a direct connection.


So doing I/O to a disk log somehow speeds up TCP throughput? strange

This sounds a bit like the speed problems we see with very low traffic 
rates. When the I/O loops get very few requests through they end up 
pausing in 10ms time chunks each processing cycle to prevent CPU 
overload doing lots of processing on very small amounts of bytes.


Taking a wild guess; the debug log I/O might be raising the number of 
total I/O being handled each second over that low-speed bump. Unlikely 
but possible.




What we already did without success
- disable ipv6 in os
- strip configuration to minimum
- using a cache_peer parent configuration (the IWSS proxy)
- tried to find out which system calls 'speed up' squid (see
  statistics below)

Now some details about the system:
- OS - Debian Squeeze - Linux xxx 2.6.32-5-amd64 #1 SMP Thu Nov 3
03:41:26 UTC 2011 x86_64 GNU/Linux
- Squid - 3.1.6-1.2+squeeze2


This release is getting a bit old now and has a few I/O buffering bugs 
in it that may be related.
Please try the 3.1.18 Debian package from Wheezy / testing repositories 
(may require some dependency updates as well).



We can also provide HTTPFox (Firefox extension) lines for fast and 
slow

connections.

We searched the mailing list and found
http://www.mail-archive.com/squid-users@squid-cache.org/msg33267.html
-
but there was no really helpful information. Other entries doesn't
match.

We collected the strace statistics only for this session:

% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 97.66    0.004015           1      3209           epoll_wait
  1.24    0.000051           0       145       145 connect


145 connect() calls in 0.05 ms, all failing? does not seem right at 
all.


Given the time measure I don't think it's related, but probably worth 
knowing and fixing. Did the section 5 trace show what was going on here?
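
If it helps, those failing connect() calls could be isolated with something 
like this (a sketch; binary and process names depend on the install):

  strace -f -tt -e trace=connect -p "$(pidof squid3)"

which should show which destination address/port the failures are aimed at.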




  0.71    0.000029           0       870           recvmsg
  0.22    0.000009           0       912           epoll_ctl
  0.17    0.000007           0       299           getsockname
  0.00    0.000000           0       484         2 read
  0.00    0.000000           0       494           write
  0.00    0.000000           0       444           close
  0.00    0.000000           0       435           socket
  0.00    0.000000           0        16         7 accept
  0.00    0.000000           0       290           sendto
  0.00    0.000000           0       290           bind
  0.00    0.000000           0       290           setsockopt
  0.00    0.000000           0       145           getsockopt
  0.00    0.000000           0       616           fcntl
  0.00    0.000000           0         1           getrusage
------ ----------- ----------- --------- --------- ----------------
100.00    0.004111                  8940       154 total

Our squid config:

***
pid_filename /var/run/squid3-special.pid
http_port 8081

hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
no_cache deny QUERY


QUERY can die.



refresh_pattern ^ftp:    1440  20%  10080
refresh_pattern ^gopher:  1440  0%   1440


Add here:
  refresh_pattern -i (/cgi-bin/|\?)  0 0% 0


refresh_pattern .         0     20%  4320


snip


http_access allow manager localhost
http_access deny manager

http_access allow purge localhost
http_access deny purge

http_access deny CONNECT !SSL_ports
http_access deny !CONNECT !Safe_ports

http_access allow all


Eeek! nearly unlimited access to the whole Internet. Why?


icp_access deny all

#debug_options 5,5
***

network layout is:

client -> firewall -> proxy -> firewall -> internet

Does anyone has an idea what could be the cause for this strange
behaviour?

--
Andreas Schulz




HTH
Amos


RE: [squid-users] Filtering access.log

2012-01-11 Thread Amos Jeffries

On 12.01.2012 12:49, Momen, Mazdak wrote:

Thanks, looking into it though I think I'm limited by the way I can
set up ACLs. Here is what I'm trying to filter:

1326325020.543  0 *.*.*.* NONE/400 3502 GET / - NONE/- text/html

The starred IP, is the same for every request (all requests pass
through a load balancer). I don't want to filter out by that IP but 
maybe

by the string of text GET / - NONE/-. Would this be possible?


Not like that. Depending on your squid version http_status ACL testing 
for status 400 may be possible. But that would catch all other status 
400 events as well, which you may not want.
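
If you want to try it anyway, a sketch (the ACL name is a placeholder, and 
the http_status ACL type needs a Squid version that provides it):

  acl bad_req http_status 400
  access_log /var/log/squid/access.log squid !bad_req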


The NONE/400 part shows that these are Squid rejecting non-HTTP traffic 
arriving at its port. Essentially a slow DoS against Squid. If you can 
prevent that happening in the first place it would be better.


Amos



Re: [squid-users] Re: Configuring Squid LDAP Authentication

2012-01-11 Thread James Robertson
 I came across this configuration online, but it still doesn't work.  I
 really thought I would have had it with this one, but still no go.

 acl lan src 192.168.1.0/25
 acl Intranet dstdomain intranet.int
 acl lan-intranet dst 192.168.2.2
 http_access allow lan
 http_access allow Intranet
 http_access allow lan-intranet

You need to post your full squid.conf.


Re: [squid-users] Re: Configuring Squid LDAP Authentication

2012-01-11 Thread James Robertson
 My configuration shown below -

To make it easier to view, can you please run this command to remove
the spaces and comments.

grep -v -e '^$' -e '#'  /etc/squid/squid.conf


Re: [squid-users] Re: Configuring Squid LDAP Authentication

2012-01-11 Thread berry guru
That is an awesome command to know!  I definitely need to remember
that command.  Here is my cleaned up configuration -

auth_param basic program /usr/lib/squid/ldap_auth -R -b
dc=cyberdyne,dc=local -D
cn=Administrator,cn=users,dc=cyberdyne,dc=local -w passwordhere -f
sAMAccountName=%s -h 192.168.100.237
auth_param basic children 5
auth_param basic realm CYBERDYNE.LOCAL
auth_param basic credentialsttl 5 minutes
acl all src all
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
acl purge method PURGE
acl CONNECT method CONNECT
acl intranet dstdomain cyberdyne-intranet
acl lan-intranet dst 192.168.100.222
http_access allow intranet
acl block_websites dstdomain .facebook.com .myspace.com .twitter.com .hulu.com
http_access deny block_websites
http_access allow manager localhost
http_access deny manager
http_access allow purge localhost
http_access deny purge
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localnet
http_access allow localhost
http_access allow all
icp_access allow localnet
icp_access deny all
http_port 3128
hierarchy_stoplist cgi-bin ?
access_log /var/log/squid/access.log squid
 debug_options ALL,0,1,34,78
  TAG: log_fqdn on
refresh_pattern ^ftp:   1440    20% 10080
refresh_pattern ^gopher: 1440   0%  1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern (Release|Packages(.gz)*)$   0   20% 2880
refresh_pattern .   0   20% 4320
acl shoutcast rep_header X-HTTP09-First-Line ^ICY.[0-9]
upgrade_http0.9 deny shoutcast
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache
extension_methods REPORT MERGE MKACTIVITY CHECKOUT
visible_hostname Squid
dns_defnames on
  TAG: dns_nameservers
hosts_file /etc/hosts
coredump_dir /var/spool/squid

On Wed, Jan 11, 2012 at 5:25 PM, James Robertson j...@mesrobertson.com wrote:
 My configuration shown below -

 To make it easier to view, can you please run this command to remove
 the spaces and comments.

 grep -v -e '^$' -e '#'  /etc/squid/squid.conf


Re: [squid-users] Re: Configuring Squid LDAP Authentication

2012-01-11 Thread Amos Jeffries

On 12.01.2012 14:32, berry guru wrote:

That is an awesome command to know!  I definitely need to remember
that command.  Here is my cleaned up configuration -

auth_param basic program /usr/lib/squid/ldap_auth -R -b
dc=cyberdyne,dc=local -D
cn=Administrator,cn=users,dc=cyberdyne,dc=local -w passwordhere 
-f

sAMAccountName=%s -h 192.168.100.237


That should be a single line. Is it actually spread over multiple lines in 
your squid.conf? That may be the problem right there.



auth_param basic children 5
auth_param basic realm CYBERDYNE.LOCAL
auth_param basic credentialsttl 5 minutes
acl all src all
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
acl purge method PURGE
acl CONNECT method CONNECT
acl intranet dstdomain cyberdyne-intranet


The above will only match invalid URLs which start with 
"http://cyberdyne-intranet/". If the client does the right thing and 
adds .local or some other internal domain FQDN suffix, this ACL will 
fail.


You should have a proper domain name for internal use in both clients 
and configs like this (ie cyberdyne.local is a valid FQDN).
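
For example, something along these lines (a sketch; it assumes the intranet 
host is published inside cyberdyne.local, adjust to the real FQDN):

  acl intranet dstdomain cyberdyne-intranet.cyberdyne.local

and have the clients use that same name in their URLs.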



acl lan-intranet dst 192.168.100.222
http_access allow intranet
acl block_websites dstdomain .facebook.com .myspace.com .twitter.com
.hulu.com


same wrap problem for this one.


http_access deny block_websites
http_access allow manager localhost
http_access deny manager
http_access allow purge localhost
http_access deny purge
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localnet
http_access allow localhost
http_access allow all


Er. Not good, for two reasons.

 1) "all" means the entire Internet.

 2) this sits before any proxy_auth ACLs are tested (don't see one below 
either). Which means your auth will never happen.


Exactly what access control policies is this config meant to be 
enforcing?
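
If the intent is to require the LDAP login before anything else, the usual 
pattern is roughly this (a sketch; the ACL names are placeholders and must 
match the rest of your config):

  acl ldap_users proxy_auth REQUIRED
  http_access allow localnet ldap_users
  http_access deny all

with no "http_access allow all" left anywhere after it.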





icp_access allow localnet
icp_access deny all
http_port 3128
hierarchy_stoplist cgi-bin ?
access_log /var/log/squid/access.log squid
 debug_options ALL,0,1,34,78


Your Squid version does not accept config lines indented with 
whitespace like that.
The debug_options directive takes a series of number *pairs*, as in:  
section,level section,level section,level

 e.g. debug_options ALL,0 1,?? 34,?? 78,??

Levels 1-6 cover most of the useful debug info when you need a detailed 
action report.
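For instance, assuming level 2 detail is wanted from those same sections (the levels here are placeholders of my own choosing, not a recommendation):

 debug_options ALL,1 1,2 34,2 78,2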




  TAG: log_fqdn on


That is a piece of documentation. Check that it is not actually in your 
file.



refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern (Release|Packages(.gz)*)$   0   20% 2880
refresh_pattern .   0   20% 4320
acl shoutcast rep_header X-HTTP09-First-Line ^ICY.[0-9]
upgrade_http0.9 deny shoutcast
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache
extension_methods REPORT MERGE MKACTIVITY CHECKOUT
visible_hostname Squid
dns_defnames on
  TAG: dns_nameservers


same as above.


hosts_file /etc/hosts
coredump_dir /var/spool/squid




Amos


Re: [squid-users] Error validating user via Negotiate. Error returned 'BH received type 1 NTLM token'

2012-01-11 Thread Amos Jeffries

On 12/01/2012 1:18 a.m., Muhammet Can wrote:

Hi all,

I have been trying to get squid running with kerberos auth for a few
days but I'm in some trouble. The problem has been asked and replied
many times on both the squid-users list and on the web, I have read
them all, and tried to solve the problem. But still no luck.

Here is some of my log files and tests.
(config files are prepared with using wiki;
http://wiki.squid-cache.org/ConfigExamples/Authenticate/Kerberos)

--  tail -f cache.log
2012/01/11 11:54:06| squid_kerb_auth: DEBUG: Got 'YR
TlRMTVNTUAABl4II4gAGAbEdDw==' from squid
(length: 59).
2012/01/11 11:54:06| squid_kerb_auth: DEBUG: Decode
'TlRMTVNTUAABl4II4gAGAbEdDw==' (decoded
length: 40).
2012/01/11 11:54:06| squid_kerb_auth: WARNING: received type 1 NTLM token
2012/01/11 11:54:06| authenticateNegotiateHandleReply: Error
validating user via Negotiate. Error returned 'BH received type 1 NTLM
token'


As no doubt you have seen in those earlier posts, a type 1 token means the 
client is answering with NTLM rather than Kerberos inside the Negotiate 
scheme. The easiest solution is to use the negotiate_wrapper helper Marcus 
developed last year. That should get things working for the users while 
the details of why NTLM is being used get a closer look.
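Very roughly, and only as a sketch (the wrapper and ntlm_auth paths are assumptions based on typical installs, and the Kerberos helper path and principal should match your existing setup; each auth_param line must stay unwrapped in squid.conf):

 auth_param negotiate program /usr/local/bin/negotiate_wrapper -d --ntlm /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp --kerberos /usr/lib/squid/squid_kerb_auth -s HTTP/test2008.labristest.com
 auth_param negotiate children 10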





--  tail -f access.log
192.168.0.147 - - [11/Jan/2012:11:54:08 +0200] "GET http://www.google.com.tr/ HTTP/1.1" 407 1524 TCP_DENIED:NONE
192.168.0.147 - - [11/Jan/2012:11:54:08 +0200] "GET http://www.google.com.tr/ HTTP/1.1" 407 1524 TCP_DENIED:NONE

I have tested kerberos on the server side with;

--  klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: administra...@labristest.com

--  kinit -V -k -t /opt/labris/etc/labris-webcache/HTTP.keytab
HTTP/test2008.labristest.com
Authenticated to Kerberos v5

And, on the client side, I have used kerbtray, it seems client has the tickets.

I have captured the packets with wireshark as suggested some of the
earlier messages, it looks like client still tries to authenticate
with NTLM while we want to use kerberos.

Here is the some of the parts of wireshark log;
(if needed, you can get the full log from here: http://pastebin.com/btp9PzYu )

client to server;
Hypertext Transfer Protocol
 GET http://www.google.com.tr/ HTTP/1.1\r\n
 [Expert Info (Chat/Sequence): GET http://www.google.com.tr/
HTTP/1.1\r\n]
 Request Method: GET
 Request URI: http://www.google.com.tr/
 Request Version: HTTP/1.1
 Host: www.google.com.tr\r\n
 User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:8.0) Gecko/20100101
Firefox/8.0\r\n
 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8\r\n
 Accept-Language: tr-tr,tr;q=0.8,en-us;q=0.5,en;q=0.3\r\n
 Accept-Encoding: gzip, deflate\r\n
 Accept-Charset: ISO-8859-9,utf-8;q=0.7,*;q=0.7\r\n
 Proxy-Connection: keep-alive\r\n


server reply;
Hypertext Transfer Protocol
 HTTP/1.0 407 Proxy Authentication Required\r\n
 [Expert Info (Chat/Sequence): HTTP/1.0 407 Proxy
Authentication Required\r\n]
 Request Version: HTTP/1.0
 Status Code: 407
 Response Phrase: Proxy Authentication Required
 Server: squid/3.1.12\r\n
 Mime-Version: 1.0\r\n
 Date: Wed, 11 Jan 2012 11:28:01 GMT\r\n
 Content-Type: text/html\r\n
 Content-Length: 1152\r\n
 X-Squid-Error: ERR_CACHE_ACCESS_DENIED 0\r\n
 Proxy-Authenticate: Negotiate\r\n
 X-Cache: MISS from labris-1\r\n
 X-Cache-Lookup: NONE from labris-1:3128\r\n
 Via: 1.0 labris-1 (squid/3.1.12)\r\n
 Connection: keep-alive\r\n
 \r\n


client tries authentication;
Hypertext Transfer Protocol
 GET http://www.google.com.tr/ HTTP/1.1\r\n
 [Expert Info (Chat/Sequence): GET http://www.google.com.tr/
HTTP/1.1\r\n]
 Request Method: GET
 Request URI: http://www.google.com.tr/
 Request Version: HTTP/1.1
 Host: www.google.com.tr\r\n
 User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:8.0) Gecko/20100101
Firefox/8.0\r\n
 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8\r\n
 Accept-Language: tr-tr,tr;q=0.8,en-us;q=0.5,en;q=0.3\r\n
 Accept-Encoding: gzip, deflate\r\n
 Accept-Charset: ISO-8859-9,utf-8;q=0.7,*;q=0.7\r\n
 Proxy-Connection: keep-alive\r\n
 Proxy-Authorization: Negotiate
TlRMTVNTUAABl4II4gAGAbEdDw==\r\n
 NTLM Secure Service Provider
 NTLMSSP identifier: NTLMSSP
 NTLM Message Type: NTLMSSP_NEGOTIATE (0x0001)
 Flags: 0xe2088297
 Calling workstation domain: NULL
 Calling workstation name: NULL


That might be important, if the browser is for some reason not aware 
that it is a member of a Windows domain.



 Version 6.1 (Build 7601); NTLM Current Revision 15
 Major Version: 6
 Minor Version: 1
 Build Number: 7601
 NTLM Current Revision: 15


Please see me as a newbie,
I'd really appreciate a detailed 

RE: [squid-users] Problem compiling Squid 3.1.18 on Ubuntu 10.04 LTS - store.cc

2012-01-11 Thread Justin Lawler
Hi,

Any timeline for the 3.1.19 release, or any beta releases? :-)

Thanks and regards,
Justin


-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Friday, December 09, 2011 7:23 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Problem compiling Squid 3.1.18 on Ubuntu 10.04 LTS - 
store.cc

On 9/12/2011 9:19 p.m., Justin Lawler wrote:
 Hi Amos,

 Is there a beta testing process where we can be notified before a release is 
 planned - so we can do some pre-release testing on these patches?

 Thanks and regards,
 Justin

Notifications are processed through Bugzilla, with "applied to squid-X" 
updates going out to everyone subscribed to the relevant bug. At that time, or 
shortly after, the patch is available on the changesets page. For changes and 
fixes without specific bugs there are no explicit notifications, usually just 
feedback to the discussion thread which brought the issue to our attention for fixing.

Pre-release snapshots of everything (tarballs, checkpoints, dailies, nightlies, 
bundles, whatever you call them) are released for testing on a daily basis, 
provided they build on a test machine. Those who want to beta-test on an 
ongoing basis usually rsync the sources or follow the series bzr branch, 
then file bug reports for issues found there. Those reports prevent me from thinking 
the state is stable enough to tag the snapshot revision for release, and they create 
a point for notifications back to the tester when the issue is fixed.

HTH
AYJ
