[squid-users] Can I rewrite URL on browser?

2009-04-21 Thread Oleg

Hi2All.

Can Squid redirect a user's request to another URL depending on the browser?
For example, if the user is running MSIE < 6.0, redirect him to a browser
update page on the IT site.

I have only found access rules for the browser string (User-Agent), but that is not what I mean.

Oleg.


RE: [squid-users] Getting error msgs when trying to start squid

2009-04-21 Thread joost.deheer
 I have made a few changes to squid.conf based on what you 
 told me, but the proxy still doesn't work.  

Define "doesn't work": do clients get an error? Won't it start? Something else?

If you get denies, you could try to add a deny_info for every ACL you have, to 
see which ACL is stopping you (a concrete sketch follows below):
- Create a file ERR_ACL_NAME (replace 'ACL_NAME' with the ACL name you use, 
e.g. ERR_LOCALNET for the localnet ACL) in the errors directory (you can find 
the exact path by grepping for error_directory in the default squid config). 
Give it as its only content: "The ACL 'ACL_NAME' gave a deny."
- Add "deny_info ERR_ACL_NAME aclname" to squid.conf (e.g. deny_info ERR_LOCALNET localnet).
- Start the browser and see which error page you get.
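
A rough sketch of that recipe for the localnet ACL, assuming the stock errors
path (adjust it to whatever your error_directory setting actually points at):

  # create the marker page for the 'localnet' ACL
  echo "The ACL 'localnet' gave a deny." > /usr/share/squid/errors/English/ERR_LOCALNET

  # then, in squid.conf, map that page to the ACL
  deny_info ERR_LOCALNET localnet

Repeat the pair for each ACL; the error page the client receives then tells you
which ACL triggered the deny.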

If it doesn't start, the error log is your friend. You could also try starting 
the proxy with 'squid -N' to run squid as a console application instead of in 
daemon mode. The errors should then appear on your screen.
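
In command form, something like this (the -d and -k parse flags are standard
squid command-line options, mentioned here as extras beyond the steps above):

  squid -N -d1      # run in the foreground; errors go to the terminal
  squid -k parse    # quick syntax check of squid.conf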

Joost

Re: [squid-users] Memory leak?

2009-04-21 Thread Amos Jeffries

Bin Liu wrote:

Thanks for your reply.

# /usr/local/squid/sbin/squid -v
Squid Cache: Version 2.7.STABLE6
configure options:  '--prefix=/usr/local/squid' '--with-pthreads'
'--with-aio' '--with-dl' '--with-large-files'
'--enable-storeio=ufs,aufs,diskd,coss,null'
'--enable-removal-policies=lru,heap' '--enable-htcp'
'--enable-kill-parent-hack' '--enable-snmp' '--enable-freebsd-tproxy'
'--disable-poll' '--disable-select' '--enable-kqueue'
'--disable-epoll' '--disable-ident-lookups' '--enable-stacktraces'
'--enable-cache-digests' '--enable-err-languages=English'


The squid process grows without bounds here. I've read the FAQ, and
tried lowering cache_mem setting, decreasing cache_dir size. That
server has 4GB physical memory, and with total cache_dir size setting
to 60G, squid resident size still can grow beyond bound and start
eating swap.


Note that cache_mem is not a bound on Squid's memory usage. It merely sizes 
the in-memory object cache (the RAM equivalent of a cache_dir).
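
Purely as an illustration of that distinction (the numbers are hypothetical, and
the per-GB index overhead is only a rough rule of thumb, not a guarantee):

  # cache_mem sizes only the in-RAM object cache; it is not a process-size limit
  cache_mem 256 MB

  # the index of on-disk objects also lives in RAM and grows with cache_dir size
  # (very roughly on the order of 10 MB of RAM per GB of disk cache)
  cache_dir aufs /cache1 61440 64 256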




The OS is FreeBSD 7.1-RELEASE.


Thanks.

Do you have access to any memory-tracing software (valgrind or similar)?
Tracking actual memory usage on a live Squid can be done when it is built 
against valgrind, combined with certain cachemgr reports. I'll have to look them up.
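
For the cachemgr side, something along these lines works against a live proxy
(host and port are assumptions here; adjust to your http_port):

  squidclient -h 127.0.0.1 -p 3128 mgr:mem     # Squid's per-pool memory accounting
  squidclient -h 127.0.0.1 -p 3128 mgr:info | grep -i memory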



Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE14
  Current Beta Squid 3.1.0.7


Re: [squid-users] Squid BUG?

2009-04-21 Thread Amos Jeffries

Herbert Faleiros wrote:

On Tue, 21 Apr 2009 14:58:22 +1200 (NZST), Amos Jeffries
squ...@treenet.co.nz wrote:
[cut]

As a side issue: does anyone know who the maintainer is for Slackware? I'm trying to
get in touch with them all.


Sorry, here Squid was built from source. The distro does not provide a binary
Squid package; the maintainer and more info can be found here:
http://bluewhite64.com (I'm still waiting for an official 64-bit Slackware
port)



 does deleting the swap.state file(s) when squid is stopped fix things?


Apparently yes:

/dev/sdb1 276G  225G   37G  87% /var/cache/proxy/cache1
/dev/sdc1 276G  225G   53G  87% /var/cache/proxy/cache2
/dev/sdd1 276G  225G   37G  87% /var/cache/proxy/cache3
/dev/sde1 276G  225G   37G  87% /var/cache/proxy/cache4

It's running OK again.

Now, another strange log:

2009/04/21 00:26:25| commonUfsDirRebuildFromDirectory: Swap data buffer
length is not sane.

Should I decrease cache_dir sizes?


No. This seems to occur when the stored object is corrupted, incompletely 
written, or the size of the object is apparently larger than the size of 
the file.


At a blind guess, I'd say it's a 64-bit build reading a file stored by a 
32-bit build.


The result is that squid immediately dumps the file out of cache. So if 
it repeats for any given object or for any newly stored ones, it's a 
problem, but once per existing object after a cache format upgrade may 
be acceptable.
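
For reference, the swap.state clean-up that appears to have fixed things
earlier in this thread boils down to roughly the following, assuming the
cache_dir roots are the mount points shown above and swap.state sits in its
default place (squid then rebuilds the index by scanning the cache directories
on the next start, which is slower but discards the corrupted entries):

  squid -k shutdown
  rm /var/cache/proxy/cache1/swap.state /var/cache/proxy/cache2/swap.state
  rm /var/cache/proxy/cache3/swap.state /var/cache/proxy/cache4/swap.state
  squid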





The stranger thing was the store rebuild reporting > 100%.

Yes, we have seen a similar thing long ago in testing. I'm trying to
remember and research what came of those. At present I'm thinking maybe it
had something to do with 32-bit/64-bit changes in distro build vs what the
cache was built with.



Similar logs found here about memory usage (via mallinfo):

Total in use:  1845425 KB 173%

and sometimes negative values:

total space in arena:  -1922544 KB
Ordinary blocks:   -1922682 KB 49 blks

Total in use:  -1139886 KB 59%


Ah, these seem to be regular popups. It's a counter overflow in the 
reporting. We try to fix these in the latest release as they are discovered; 
there may be a patch already in later releases or HEAD. If not, it's 
bug-report time for that.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE14
  Current Beta Squid 3.1.0.7


Re: [squid-users] Can I rewrite URL on browser?

2009-04-21 Thread Amos Jeffries

Oleg wrote:

Hi2All.

Can Squid redirect a user's request to another URL depending on the browser?
For example, if the user is running MSIE < 6.0, redirect him to a browser 
update page on the IT site.
I have only found access rules for the browser string (User-Agent), but that 
is not what I mean.



I'd use a custom deny_info redirect for this.

  acl msie6 browser .. whatever the pattern for IE6 is ...
  deny_info http://example.com/msie_update.html msie6
  http_access deny msie6
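
One possible pattern, purely as a hypothetical sketch (the regex is matched
against the User-Agent header and is not verified against every IE variant;
the '.' stands in for the space, since squid splits ACL values on whitespace):

  acl msie6 browser MSIE.[1-6]\.
  deny_info http://example.com/msie_update.html msie6
  http_access deny msie6

Because the deny_info target is a full URL, Squid answers the denied request
with an HTTP redirect to that page rather than with an error body.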


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE14
  Current Beta Squid 3.1.0.7


Re: [squid-users] Can I rewrite URL on browser?

2009-04-21 Thread Oleg

Yes, that's what I need! Thank you.

Amos Jeffries wrote:

Oleg wrote:

Hi2All.

Can Squid redirect a user's request to another URL depending on the browser?
For example, if the user is running MSIE < 6.0, redirect him to a browser 
update page on the IT site.
I have only found access rules for the browser string (User-Agent), but that 
is not what I mean.



I'd use a custom deny_info redirect for this.

  acl msie6 browser .. whatever the pattern for IE6 is ...
  deny_info http://example.com/msie_update.html msie6
  http_access deny msie6


Amos


Re: [squid-users] Can I rewrite URL on browser?

2009-04-21 Thread Amos Jeffries

Oleg wrote:

Yes, that's what I need! Thank you.

Amos Jeffries wrote:

Oleg wrote:

Hi2All.

Can Squid redirect a user's request to another URL depending on the browser?
For example, if the user is running MSIE < 6.0, redirect him to a browser 
update page on the IT site.
I have only found access rules for the browser string (User-Agent), but 
that is not what I mean.



I'd use a custom deny_info redirect for this.

  acl msie6 browser .. whatever the pattern for IE6 is ...
  deny_info http://example.com/msie_update.html msie6
  http_access deny msie6


Amos


PS. Putting my web developer's hat on:
   please, please bump them to IE 7 :)
   IE6 is a royal pain in the visuals. 7 is at least a bit better.

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE14
  Current Beta Squid 3.1.0.7


[squid-users] caching cgi_bin in 3.0

2009-04-21 Thread Matus UHLAR - fantomas
Hello,

I'm upgrading to 3.0 (finally) and I see that the new refresh_pattern
default was added in the config file:

refresh_pattern (cgi-bin|\?)   0   0%  0

I hope this is just to always verify the dynamic content, and should not
have any impact on caching it, if it's cacheable, correct?
-- 
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Support bacteria - they're the only culture some people have. 


[squid-users] allowedURL don't work

2009-04-21 Thread Phibee Network Operation Center



Hi

I have a new problem with my Squid server (NTLM + AD).

My configuration:

auth_param ntlm program /usr/bin/ntlm_auth 
--helper-protocol=squid-2.5-ntlmssp

auth_param ntlm children 15
auth_param ntlm keep_alive on
auth_param basic program /usr/bin/ntlm_auth 
--helper-protocol=squid-2.5-basic

auth_param basic children 15
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours
#external_acl_type AD_Group children=50 concurrency=50 %LOGIN 
/usr/lib/squid/wbinfo_group.pl
external_acl_type AD_Group children=50 concurrency=50 ttl=1800 
negative_ttl=900 %LOGIN /usr/lib/squid/wbinfo_group.pl


cache_peer 127.0.0.1 parent 8081 0 proxy-only no-query weight=100 connect-timeout=5 login=*:password


## Access-rights ACLs
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8
acl Lan src 10.0.0.0/8 # RFC1918 possible internal network
acl Lan src 172.16.0.0/12  # RFC1918 possible internal network
acl Lan src 192.168.0.0/16 # RFC1918 possible internal network


##
## ACLs for web sites that can be visited without authentication
##
acl URL_Authorises dstdomain /etc/squid-ntlm/allowedURL
http_access allow URL_Authorises
##

acl SSL_ports port 443 563 1 1494 2598
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 563 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

#http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports

##
# ACLs defining the AD groups allowed to connect
##
acl AllowedADUsers external AD_Group /etc/squid-ntlm/allowedntgroups
acl Winbind proxy_auth REQUIRED
##


##
# ACLs for access rights according to Active Directory
##
# Access rights according to Active Directory
http_access allow AllowedADUsers
http_access deny !AllowedADUsers
http_access deny !Winbind
##

http_access deny all


##
# System parameters
##
http_port 8080
hierarchy_stoplist cgi-bin ?
cache_mem 16 MB
#cache_dir ufs /var/spool/squid-ntlm 5000 16 256
cache_dir null /dev/null
#logformat squid %ts.%03tu %6tr %>a %Ss/%03Hs %<st %rm %ru %un %Sh/%<A %mt
#logformat squidmime %ts.%03tu %6tr %>a %Ss/%03Hs %<st %rm %ru %un %Sh/%<A %mt [%>h] [%<h]

#logformat common %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %Hs %<st %Ss:%Sh
logformat combined %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh

access_log /var/log/squid-ntlm/access.log squid
cache_log /var/log/squid-ntlm/cache.log
cache_store_log /var/log/squid-ntlm/store.log
# emulate_httpd_log off
mime_table /etc/squid-ntlm/mime.conf
pid_filename /var/run/squid-ntlm.pid
# debug_options ALL,1
log_fqdn off
ftp_user pr...@gw.phibee.net
ftp_passive on
ftp_sanitycheck on
ftp_telnet_protocol on
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern (cgi-bin|\?)    0       0%      0
refresh_pattern .   0   20% 4320
icp_port 3130
error_directory /usr/share/squid/errors/French
icp_access allow Lan
icp_access deny all
htcp_access allow Lan
htcp_access deny all





In my allowedURL file, I have:

pagesjaunes.fr
estat.com
societe.com
quidonc.fr



When I want to access www.pagejaunes.fr, it asks for authentication 
... I want no authentication

and no restriction on surfing.

Does anyone see where my error is?
Is the correct syntax pagesjaunes.fr, or .pagesjaunes.fr for 
*.pagesjaunes.fr?


thanks
jerome




Re: [squid-users] caching cgi_bin in 3.0

2009-04-21 Thread Chris Robertson

Matus UHLAR - fantomas wrote:

Hello,

I'm upgrading to 3.0 (finally) and I see that the new refresh_pattern
default was added in the config file:

refresh_pattern (cgi-bin|\?)   0   0%  0

I hope this is just to always verify the dynamic content, and should not
have any impact of caching it, if it's cacheable, correct?
  


Correct.  If the dynamic content gives a Cache-Control: max-age and/or an 
Expires header that allows caching, the refresh pattern will not 
prevent caching it.
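
For example, a dynamic response carrying explicit freshness information like
the following would still be cacheable under that default (headers purely
illustrative):

  HTTP/1.1 200 OK
  Date: Tue, 21 Apr 2009 12:00:00 GMT
  Cache-Control: max-age=3600
  Content-Type: text/html

The 0 / 0% / 0 values only remove heuristic freshness for URLs matching the
pattern; explicit Cache-Control or Expires headers still win.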


Chris


Re: [squid-users] allowedURL don't work

2009-04-21 Thread Chris Robertson

Phibee Network Operation Center wrote:

Hi

I have a new problem with my Squid server (NTLM + AD).

My configuration:

auth_param ntlm program /usr/bin/ntlm_auth 
--helper-protocol=squid-2.5-ntlmssp

auth_param ntlm children 15
auth_param ntlm keep_alive on
auth_param basic program /usr/bin/ntlm_auth 
--helper-protocol=squid-2.5-basic

auth_param basic children 15
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours
#external_acl_type AD_Group children=50 concurrency=50 %LOGIN 
/usr/lib/squid/wbinfo_group.pl
external_acl_type AD_Group children=50 concurrency=50 ttl=1800 
negative_ttl=900 %LOGIN /usr/lib/squid/wbinfo_group.pl


cache_peer 127.0.0.1 parent 8081 0 proxy-only no-query weight=100 connect-timeout=5 login=*:password


## Access-rights ACLs
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8
acl Lan src 10.0.0.0/8 # RFC1918 possible internal network
acl Lan src 172.16.0.0/12  # RFC1918 possible internal network
acl Lan src 192.168.0.0/16 # RFC1918 possible internal network


##
## ACLs for web sites that can be visited without authentication
##
acl URL_Authorises dstdomain /etc/squid-ntlm/allowedURL
http_access allow URL_Authorises


Are you sure you don't want to add additional restrictions to the 
http_access allow (such as a limitation on the source IP, or something)?
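
For example (just a sketch reusing the Lan ACL already defined earlier in this
config; both ACLs on one line must match, so only the listed RFC1918 clients
would get the unauthenticated pass-through):

  http_access allow Lan URL_Authorises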



##

acl SSL_ports port 443 563 1 1494 2598
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 563 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

#http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports

##
# ACLs defining the AD groups allowed to connect
##
acl AllowedADUsers external AD_Group /etc/squid-ntlm/allowedntgroups
acl Winbind proxy_auth REQUIRED
##


##
# ACLs for access rights according to Active Directory
##
# Access rights according to Active Directory
http_access allow AllowedADUsers
http_access deny !AllowedADUsers
http_access deny !Winbind


These two deny lines are redundant, as everything is denied by the next 
line...
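
In sketch form, the simplification being described would look roughly like
this (keeping the earlier allow/deny lines above it unchanged):

  http_access allow AllowedADUsers
  http_access deny all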



##

http_access deny all


##
# System parameters
##
http_port 8080
hierarchy_stoplist cgi-bin ?
cache_mem 16 MB
#cache_dir ufs /var/spool/squid-ntlm 5000 16 256
cache_dir null /dev/null
#logformat squid %ts.%03tu %6tr %>a %Ss/%03Hs %<st %rm %ru %un %Sh/%<A %mt
#logformat squidmime %ts.%03tu %6tr %>a %Ss/%03Hs %<st %rm %ru %un %Sh/%<A %mt [%>h] [%<h]

#logformat common %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %Hs %<st %Ss:%Sh
logformat combined %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh

access_log /var/log/squid-ntlm/access.log squid
cache_log /var/log/squid-ntlm/cache.log
cache_store_log /var/log/squid-ntlm/store.log
# emulate_httpd_log off
mime_table /etc/squid-ntlm/mime.conf
pid_filename /var/run/squid-ntlm.pid
# debug_options ALL,1
log_fqdn off
ftp_user pr...@gw.phibee.net
ftp_passive on
ftp_sanitycheck on
ftp_telnet_protocol on
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern (cgi-bin|\?)    0       0%      0
refresh_pattern .   0   20% 4320
icp_port 3130
error_directory /usr/share/squid/errors/French
icp_access allow Lan
icp_access deny all
htcp_access allow Lan
htcp_access deny all


In my allowedURL file, I have:

pagesjaunes.fr
estat.com
societe.com
quidonc.fr



When I want to access www.pagejaunes.fr, it asks for authentication 
... I want no authentication

and no restriction on surfing.

Does anyone see where my error is?
Is the correct syntax pagesjaunes.fr, or .pagesjaunes.fr for 
*.pagesjaunes.fr?


The second option .pagesjaunes.fr will match http://pagesjaunes.fr, 
http://www.pagesjaunes.fr and any other hostname in front of pagesjaunes.fr.
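
So an allowedURL file along these lines (contents hypothetical, following the
explanation above) covers both the bare domains and all of their subdomains,
with the ACL line left unchanged (acl URL_Authorises dstdomain /etc/squid-ntlm/allowedURL):

  # /etc/squid-ntlm/allowedURL -- one dstdomain pattern per line
  .pagesjaunes.fr
  .estat.com
  .societe.com
  .quidonc.fr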



thanks
jerome



RE: [squid-users] allowedURL don't work

2009-04-21 Thread Dustin Hane
I'm trying to work with regexes and have a quick question in response to your 
response. Wouldn't you also be able to just use a url_regex -i pagesjaunes ACL and 
allow that? That should theoretically work, yes?

If you are allowing by domain and you have the period in front of the domain, 
that allows anything under .pagesjaunes.fr (any hostname), correct?
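
For comparison, a sketch of the two ACL forms being discussed (patterns hypothetical):

  # dstdomain with a leading dot: matches the domain itself and any subdomain
  acl allowed_sites dstdomain .pagesjaunes.fr

  # url_regex: a case-insensitive regex applied to the whole URL (host, path and query)
  acl allowed_sites_re url_regex -i pagesjaunes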

---Paste
 when I want to access www.pagejaunes.fr, it asks for authentication 
 ... I want no authentication
 and no restriction on surfing.

 Does anyone see where my error is?
 Is the correct syntax pagesjaunes.fr, or .pagesjaunes.fr for 
 *.pagesjaunes.fr?

The second option .pagesjaunes.fr will match http://pagesjaunes.fr, 
http://www.pagesjaunes.fr and any other hostname in front of pagesjaunes.fr.

 thanks
 jerome

Chris
End Paste

-Original Message-
From: crobert...@gci.net [mailto:crobert...@gci.net] 
Sent: Tuesday, April 21, 2009 12:59 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] allowedURL don't work

Phibee Network Operation Center wrote:
 Hi

 I have a new problem with my Squid server (NTLM + AD).

 My configuration:

 auth_param ntlm program /usr/bin/ntlm_auth 
 --helper-protocol=squid-2.5-ntlmssp
 auth_param ntlm children 15
 auth_param ntlm keep_alive on
 auth_param basic program /usr/bin/ntlm_auth 
 --helper-protocol=squid-2.5-basic
 auth_param basic children 15
 auth_param basic realm Squid proxy-caching web server
 auth_param basic credentialsttl 2 hours
 #external_acl_type AD_Group children=50 concurrency=50 %LOGIN 
 /usr/lib/squid/wbinfo_group.pl
 external_acl_type AD_Group children=50 concurrency=50 ttl=1800 
 negative_ttl=900 %LOGIN /usr/lib/squid/wbinfo_group.pl

 cache_peer 127.0.0.1 parent 8081 0 proxy-only no-query weight=100 connect-timeout=5 login=*:password

 ## Access-rights ACLs
 acl manager proto cache_object
 acl localhost src 127.0.0.1/32
 acl to_localhost dst 127.0.0.0/8
 acl Lan src 10.0.0.0/8 # RFC1918 possible internal network
 acl Lan src 172.16.0.0/12  # RFC1918 possible internal network
 acl Lan src 192.168.0.0/16 # RFC1918 possible internal network


 ##
 ## ACLs for web sites that can be visited without authentication
 ##
 acl URL_Authorises dstdomain /etc/squid-ntlm/allowedURL
 http_access allow URL_Authorises

Are  you sure you don't want to add additional restrictions to the 
http_access allow (such as a limitation on the source IP, or something)?

 ##

 acl SSL_ports port 443 563 1 1494 2598
 acl Safe_ports port 80  # http
 acl Safe_ports port 21  # ftp
 acl Safe_ports port 443 # https
 acl Safe_ports port 563 # https
 acl Safe_ports port 70  # gopher
 acl Safe_ports port 210 # wais
 acl Safe_ports port 1025-65535  # unregistered ports
 acl Safe_ports port 280 # http-mgmt
 acl Safe_ports port 488 # gss-http
 acl Safe_ports port 591 # filemaker
 acl Safe_ports port 777 # multiling http
 acl CONNECT method CONNECT

 #http_access allow manager localhost
 http_access deny manager
 http_access deny !Safe_ports
 http_access deny CONNECT !SSL_ports

 ##
 # ACLs defining the AD groups allowed to connect
 ##
 acl AllowedADUsers external AD_Group /etc/squid-ntlm/allowedntgroups
 acl Winbind proxy_auth REQUIRED
 ##


 ##
 # ACLs for access rights according to Active Directory
 ##
 # Access rights according to Active Directory
 http_access allow AllowedADUsers
 http_access deny !AllowedADUsers
 http_access deny !Winbind

These two deny lines are redundant, as everything is denied by the next 
line...

 ##

 http_access deny all


 ##
 # System parameters
 ##
 http_port 8080
 hierarchy_stoplist cgi-bin ?
 cache_mem 16 MB
 #cache_dir ufs /var/spool/squid-ntlm 5000 16 256
 cache_dir null /dev/null
 #logformat squid %ts.%03tu %6tr %>a %Ss/%03Hs %<st %rm %ru %un %Sh/%<A %mt
 #logformat squidmime %ts.%03tu %6tr %>a %Ss/%03Hs %<st %rm %ru %un %Sh/%<A %mt [%>h] [%<h]
 #logformat common %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %Hs %<st %Ss:%Sh
 logformat combined %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh
 access_log /var/log/squid-ntlm/access.log squid
 cache_log /var/log/squid-ntlm/cache.log
 cache_store_log /var/log/squid-ntlm/store.log

[squid-users] logging changes 2.6 - 2.7

2009-04-21 Thread Ross J. Reedstrom
Hi all -
Recently upgraded a proxy-accelerator setup to using 2.7 (Debian
2.7.STABLE3-4.1, specifically) from 2.6 (2.6.20-1~bpo40+1). In this
setup, I'm using an external rewriter script to add virtual rooting bits
to the requested URL. (It's a Zope system, using the VirtualHostMonster
rewriter, like so:
Incoming request:
GET http://example.com/someimage.gif

Rewritten to:

GET 
http://example.com/VirtualHostBase/http/example.com:80/somepath/VirtualHostRoot/someimage.gif

These are then farmed out to multiple cache_peer origin servers.

The change I'm seeing is in the access.log, which uses a custom format line:

logformat custom %ts.%03tu %6tr %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh/%<A "%{X-Forwarded-For}>h"

The change is that in 2.6 %ru logged the requested URL as seen on the
wire. In 2.7, we get the rewritten URL.

Is this intentional? Is there a way around it? Since referer (sic) url
is not similarly rewritten, it gives log analysis software (that
attempts  to determine click-traces and page views) fits. I can
post-process my logs, but I'd rather fix them at generation time. I can
understand the need to have the rewritten version available: just not at
the cost of missing what was actually on the wire that Squid read.

Ross
-- 
Ross Reedstrom, Ph.D. reeds...@rice.edu
Systems Engineer  Admin, Research Scientistphone: 713-348-6166
The Connexions Project  http://cnx.orgfax: 713-348-3665
Rice University MS-375, Houston, TX 77005
GPG Key fingerprint = F023 82C8 9B0E 2CC6 0D8E  F888 D3AE 810E 88F0 BEDE


[squid-users] Squid and TC - Traffic Shaping

2009-04-21 Thread Wilson Hernandez - MSD, S. A.

Hello.

I was writing a script to control traffic on our network. I created my
rules with tc and noticed that it wasn't working correctly.

I tried this traffic shaping on a Linux router that has squid doing
transparent caching.

When measuring the download speed on speedtest.net, the download speed is
70 kbps when it is supposed to be over 300 kbps. I found it strange, since
I've done traffic shaping in the past and it worked, but not on a box with
squid. I stopped the squid server and ran the test again, and it gave me
the speed I assigned to that machine. I assigned different bandwidth values
and the test gave the correct speeds.

Has anybody used traffic shaping (tc in Linux) on a box with squid? Is
there a way to combine both and have them work side by side?

Thanks in advance for your help and input.



[squid-users] squid AND ssl

2009-04-21 Thread joe ryan
Hi,
I have a simple webserver that listens on port 80 for requests. I
would like to secure access to this webserver using squid and SSL. I
can access the simple website through http without any issue. When I
try to access it using https, I get a message in the cache log. See
attached.
The web page error shows up as "Connection to 192.168.0.1 Failed"
The system returned:
(13) Permission denied

I am running Squid stable 2.7 and I used openssl to generate the cert and key.
I have attached my conf file and cache errors.
Can squid secure an insecure webserver the way I am trying to do?
http_port 192.168.0.1:8080 
cache_mgr administra...@server2003
visible_hostname server2003
cache_dir ufs c:/squid/var/cache 512 16 256
acl Query urlpath_regex cgi-bin \?
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl PURGE method PURGE
acl to_localhost dst 127.0.0.1/8
acl SSL_ports port 441 443
https_port 192.168.0.1:443 accel cert=c:/squid/etc/ssl/mycert.pem 
key=c:/squid/etc/ssl/mykey.pem vhost
cache_peer 192.168.0.1  parent 443 0 no-query originserver default ssl 
sslflags=DONT_VERIFY_PEER
acl Safe_ports port 80 21 441 443 563 70 210 210 1025-65535 280 488 591 777
# acl CONNECT method CONNECT
acl all src 0.0.0.0/0.0.0.0
url_rewrite_host_header off
collapsed_forwarding on
acl webSrv dst 192.168.0.1
acl webPrt port 80
http_access allow webSrv webprt
http_access allow all
always_direct allow all
acl localnetwork1 src 192.168.0.0/255.255.255.0
hierarchy_stoplist cgi-bin ?
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern .   0   20% 4320
coredump_dir c:/squid/var/cache
cache_mem 64 MB
dns_testnames localhost
http_access allow manager localhost
# http_access deny manager
# http_access deny !Safe_ports
# http_access allow PURGE localhost
http_access allow localnetwork1
# http_access deny PURGE
access_log c:/squid/var/logs/access.log squid
# no_cache deny QUERY
http_reply_access allow all
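
For comparison, the general shape of an SSL-terminating accelerator in front of
a plain-HTTP origin looks roughly like the sketch below (IPs, hostnames and
paths are hypothetical, and this is not a diagnosis of the Permission denied
error above):

  # Squid terminates SSL on 443 ...
  https_port 192.168.0.10:443 accel cert=c:/squid/etc/ssl/mycert.pem key=c:/squid/etc/ssl/mykey.pem defaultsite=www.example.com
  # ... and forwards to the unencrypted origin web server on port 80
  cache_peer 192.168.0.1 parent 80 0 no-query originserver name=origin
  acl our_site dstdomain www.example.com
  http_access allow our_site
  cache_peer_access origin allow our_site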



[squid-users] SQUID and SSL

2009-04-21 Thread joeR

Hi,
I have a simple webserver that listens on port 80 for requests. I would like
to secure access to this webserver using squid and SSL. I can access the
simple website through http without any issue. When I try to access it
using https, I get a message in the cache log. See attached. 
The web page error shows up as "Connection to 192.168.0.1 Failed" 
The system returned: 
(13) Permission denied
 
I am running Squid stable 2.7 and I used openssl to generate the cert and
key.
I have attached my conf file and cache errors.
Can squid secure an insecure webserver the way I am trying to do?
http://www.nabble.com/file/p23166622/squid.conf squid.conf 
http://www.nabble.com/file/p23166622/cache.txt cache.txt 
-- 
View this message in context: 
http://www.nabble.com/SQUID-and-SSL-tp23166622p23166622.html
Sent from the Squid - Users mailing list archive at Nabble.com.



Re: [squid-users] logging changes 2.6 - 2.7

2009-04-21 Thread Mark Nottingham

That was fixed in STABLE4;
  
http://www.squid-cache.org/Versions/v2/2.7/squid-2.7.STABLE6-RELEASENOTES.html#s7

See also:
  http://www.squid-cache.org/bugs/show_bug.cgi?id=2406

Cheers,


On 22/04/2009, at 5:46 AM, Ross J. Reedstrom wrote:


Hi all -
Recently upgraded a proxy-accelerator setup to using 2.7 (Debian
2.7.STABLE3-4.1, specifically) from 2.6 (2.6.20-1~bpo40+1). In this
setup, I'm using an external rewriter script to add virtual rooting bits
to the requested URL. (It's a Zope system, using the VirtualHostMonster
rewriter, like so:
Incoming request:
GET http://example.com/someimage.gif

Rewritten to:

GET 
http://example.com/VirtualHostBase/http/example.com:80/somepath/VirtualHostRoot/someimage.gif

These are then farmed out to multiple cache_peer origin servers.

The change I'm seeing is in the access.log, which uses a custom format line:

logformat custom %ts.%03tu %6tr %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh/%<A "%{X-Forwarded-For}>h"


The change is that in 2.6 %ru logged the requested URL as seen on the
wire. In 2.7, we get the rewritten URL.

Is this intentional? Is there a way around it? Since referer (sic) url
is not similarly rewritten, it gives log analysis software (that
attempts  to determine click-traces and page views) fits. I can
post-process my logs, but I'd rather fix them at generation time. I can
understand the need to have the rewritten version available: just not at
the cost of missing what was actually on the wire that Squid read.

Ross
--
Ross Reedstrom, Ph.D.  
reeds...@rice.edu
Systems Engineer  Admin, Research Scientistphone:  
713-348-6166
The Connexions Project  http://cnx.orgfax:  
713-348-3665

Rice University MS-375, Houston, TX 77005
GPG Key fingerprint = F023 82C8 9B0E 2CC6 0D8E  F888 D3AE 810E 88F0  
BEDE


--
Mark Nottingham   m...@yahoo-inc.com




[squid-users] caching behavior during COSS rebuild

2009-04-21 Thread Chris Woodfield
So I'm running with COSS under 2.7.STABLE6. We've noticed (as I can see  
others have; teh Googles tell me so) that the COSS rebuild (a) happens  
every time squid is restarted, and (b) takes quite a while if the COSS  
stripes are large. However, I've noticed that while the stripes are  
being rebuilt, squid still listens for and handles requests - it just  
SO_FAILs on every object that would normally get saved to a COSS  
stripe. So much for that hit ratio.
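
(For reference, a 2.7 COSS stripe is declared along these lines, with the
numbers here purely hypothetical; the rebuild walks the whole stripe file on
startup, which is why large stripes take so long:

  cache_dir coss /var/spool/squid/coss1 4096 max-size=524288 block-size=512
)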


SO - the questions are:

1. Is there *any* way to prevent the COSS rebuild if squid is  
terminated normally?
2. Is there a way to prevent squid from handling requests until the  
COSS stripe is fully rebuilt (this is obviously not good if you don't  
have redundant squids, but that's not a problem for us) ?


Thanks,

-C


Re: [squid-users] logging changes 2.6 - 2.7

2009-04-21 Thread Ross J. Reedstrom
Ah, thanks for the pointer, Mark. I'll take a look at backporting the
Debian squeeze (testing) version back to lenny.

Ross
-- 
Ross Reedstrom, Ph.D. reeds...@rice.edu
Systems Engineer  Admin, Research Scientistphone: 713-348-6166
The Connexions Project  http://cnx.orgfax: 713-348-3665
Rice University MS-375, Houston, TX 77005
GPG Key fingerprint = F023 82C8 9B0E 2CC6 0D8E  F888 D3AE 810E 88F0 BEDE


On Wed, Apr 22, 2009 at 11:50:09AM +1000, Mark Nottingham wrote:
 That was fixed in STABLE4;
   
 http://www.squid-cache.org/Versions/v2/2.7/squid-2.7.STABLE6-RELEASENOTES.html#s7
 
 See also:
   http://www.squid-cache.org/bugs/show_bug.cgi?id=2406
 
 Cheers,
 
 
 On 22/04/2009, at 5:46 AM, Ross J. Reedstrom wrote:
 
 Hi all -
 Recently upgraded a proxy-accelerator setup to using 2.7 (Debian
 2.7.STABLE3-4.1, specifically) from 2.6 (2.6.20-1~bpo40+1). In this
 setup, I'm using an external rewriter script to add virtual rooting bits
 to the requested URL. (It's a Zope system, using the VirtualHostMonster
 rewriter, like so:
 Incoming request:
 GET http://example.com/someimage.gif
 
 Rewritten to:
 
 GET 
 http://example.com/VirtualHostBase/http/example.com:80/somepath/VirtualHostRoot/someimage.gif
 
 These are then farmed out to multiple cache_peer origin servers.
 
 The change I'm seeing is that the access.log using a custom format
 line:
 
 logformat custom %ts.%03tu %6tr %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh/%<A "%{X-Forwarded-For}>h"
 
 The change is that in 2.6 %ru logged the requested URL as seen on the
 wire. In 2.7, we get the rewritten URL.
 
 Is this intentional? Is there a way around it? Since referer (sic) url
 is not similarly rewritten, it gives log analysis software (that
 attempts  to determine click-traces and page views) fits. I can
 post-process my logs, but I'd rather fix them at generation time. I can
 understand the need to have the rewritten version available: just not at
 the cost of missing what was actually on the wire that Squid read.
 
 Ross
 -- 
 Ross Reedstrom, Ph.D.  
 reeds...@rice.edu
 Systems Engineer  Admin, Research Scientistphone:  
 713-348-6166
 The Connexions Project  http://cnx.orgfax:  
 713-348-3665
 Rice University MS-375, Houston, TX 77005
 GPG Key fingerprint = F023 82C8 9B0E 2CC6 0D8E  F888 D3AE 810E 88F0  
 BEDE
 
 --
 Mark Nottingham   m...@yahoo-inc.com