Re: [squid-users] Problem with upload size limit in squid

2021-02-25 Thread Raj Nagar
Hi Alex,

Thanks for your response. Is there any way I can enforce these
limits on other protocols such as HTTPS?
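
From the documentation it sounds like this would need Squid to bump the TLS
connections so that it can see the individual requests. Is something along
these lines the right direction? (an untested sketch; the certificate path
is just a placeholder, and option spellings may differ between Squid
versions)

    # Squid built with OpenSSL support; myCA.pem is a placeholder signing CA
    http_port 3128 ssl-bump tls-cert=/etc/squid/ssl/myCA.pem generate-host-certificates=on
    acl step1 at_step SslBump1
    ssl_bump peek step1
    ssl_bump bump all
    # once requests are visible at the HTTP level, the body limit should apply
    request_body_max_size 1 MB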

On Thu, Feb 25, 2021, 23:33 Alex Rousskov wrote:

> On 2/24/21 11:51 PM, Raj Nagar wrote:
>
> > I am using squid as a forward proxy and want to restrict uploads of files
> > larger than 1 MB. I have used the following configuration for this:
> > *request_body_max_size 1 MB*.
> > But this is not working for me and I am able to upload larger files.
> > Can someone please help with this? Thanks in advance.
>
> Does your Squid have access to the HTTP request information? For
> example, if it is an HTTPS request, and you are not bumping the
> corresponding TLS connection, then Squid would not be working at HTTP
> level and, hence, would not be able to limit individual HTTP request sizes.
>
> The corresponding access.log record may tell us more about the
> problematic transaction.
>
>
> HTH,
>
> Alex.
>


[squid-users] Problem with upload size limit in squid

2021-02-24 Thread Raj Nagar
Hi,

I am using squid as a forward proxy and want to restrict uploads of files
larger than 1 MB. I have used the following configuration for this:
*request_body_max_size 1 MB*.
But this is not working for me and I am able to upload larger files.
Can someone please help with this? Thanks in advance.

-- 
Regards,
Raj Nagar


Re: [squid-users] WCCPv2 and HTTPS problems

2007-11-08 Thread Hemant Raj Chhetri

On Fri, 09 Nov 2007 00:04:46 +0100, Dalibor Dukic [EMAIL PROTECTED] wrote:

> Hi Tek,
>
> On Thu, 2007-11-08 at 13:09 +0545, Tek Bahadur Limbu wrote:
>> Hi Dalibor,
>>
>> Dalibor Dukic wrote:
>>> On Wed, 2007-11-07 at 17:15 +0545, Tek Bahadur Limbu wrote:
>>>> Hi Adrian,
>>>>
>>>> Adrian Chadd wrote:
>>>>> On Wed, Nov 07, 2007, Hemant Raj Chhetri wrote:
>>>>>> Hi Adrian,
>>>>>> I am also facing the same problem with HTTPS sites. Yahoo works fine
>>>>>> for me, but I am having problems with Hotmail. Please advise me on how
>>>>>> to handle this, or whether there is any guide I can refer to.
>>>>>
>>>>> I don't know of an easy way to handle this, I'm sorry. I know how I'd
>>>>> handle it in Squid-2.6, but it'd require a couple of weeks of work and
>>>>> another few weeks of testing.
>>>>
>>>> I have 2 FreeBSD-6.2 transparent Squid proxies using WCCP2 with a Cisco
>>>> 3620 router. Up till now, I am not facing any HTTPS problem. At least,
>>>> nobody is complaining about Hotmail and Yahoo web mail services.
>>>
>>> Are clients on private address space? If you NAT clients and Squid onto
>>> the same address, the web server sees just one address.
>>
>> My clients are all using public IP addresses.
>>
>>>>> (Considering how much of a problem this has caused people in the past,
>>>>> I'm surprised a solution hasn't been contributed back to the project..)
>>>>
>>>> Maybe the solution lies in the setup of the Operating System, Squid and
>>>> router itself.
>>>
>>> I don't think so. HTTPS requests are not forwarded to the Squid box in
>>> the web-cache service group, only HTTP on port 80.
>>
>> Yes, I know that Squid does not handle HTTPS requests, which leads to
>> another question. If HTTPS does not go through Squid, does WCCP see those
>> requests at all, and if so, how does it handle them?
>>
>> We have all known, since we started learning and using Squid, that
>> intercepting or transparent proxy servers will cause some problems along
>> the way. In fact, all software causes some problems. Maybe this is one of
>> them.
>
> I totally agree with you, but I think most problems with transparent
> proxying over WCCP lie in the Cisco WCCP implementation. Yesterday I moved
> the redirection point to a Catalyst 6506 (Version 12.2(18)SXD7b RELEASE
> SOFTWARE) and for now everything looks good, even HTTPS. :)
> I hope it will stay like this.
>
>> In fact, I had been facing this Hotmail and Yahoo HTTPS problem with
>> Squid-2.5 in the past. I can't remember exactly how I got it solved. On
>> one occasion routing solved the problem, and in another case a firewall
>> modification solved the problem.
>>
>> Maybe the problem still exists now, but somehow it has not caught my
>> attention, for which I am happy :)
>>
>> But sooner or later, I'm sure this problem will pop up on my proxies too
>> and users will be banging my phone! I guess somebody on this list has to
>> do a really thorough analysis, using whatever tools are required, to
>> solve this problem once and for all.
>>
>> Thanking you...
>
> Best regards, Dalibor

Hi All,
   There is no problem browsing Hotmail through Windows Vista for me. If
I use a different OS, I am not able to log in. Is there a way to bypass
Hotmail through ipfw?
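
Something along these lines is what I had in mind, in case it helps to be
concrete (untested; the address range below is only an example and probably
incomplete, and "em0" plus the rule numbers are placeholders):

   # existing rule that redirects intercepted port-80 traffic to squid, e.g.
   #   ipfw add 100 fwd 127.0.0.1,3128 tcp from any to any 80 in via em0
   # a lower-numbered pass rule so Hotmail traffic skips the proxy entirely
   ipfw add 90 allow tcp from any to 65.54.0.0/16 80,443 in via em0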


Thanking you all in advance.

Regards,
Hemant.


Re: [squid-users] WCCPv2 and HTTPS problems

2007-11-08 Thread Hemant Raj Chhetri

On Fri, 9 Nov 2007 12:38:45 +0900, Adrian Chadd [EMAIL PROTECTED] wrote:

> On Fri, Nov 09, 2007, Hemant Raj Chhetri wrote:
>> Hi All,
>>    There is no problem browsing Hotmail through Windows Vista for me.
>> If I use a different OS, I am not able to log in. Is there a way to
>> bypass Hotmail through ipfw?
>
> (What is it with people not posting enough detail?)
>
> You need to provide more detail - cache OS, cisco platform, IOS version, etc.
> Some IOS versions, for example, have WCCPv2 bugs with things like fragments..
>
> Adrian
>
> --
> - Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
> - $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -



Hi Adrian,
  I have installed squid-2.6.STABLE16 in transparent mode on FreeBSD 6.2.
The router I am using is a Cisco 1841 series router with IOS 12.4. I have
implemented ipfw on my FreeBSD cache server.
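
The relevant pieces are roughly as follows (a sketch from memory; the router
address and interface name below are placeholders, not my exact values):

   # squid.conf (squid 2.6) - WCCPv2 registration and interception port
   http_port 3128 transparent
   wccp2_router 192.0.2.1          # placeholder for the 1841's address
   wccp2_forwarding_method 1       # GRE
   wccp2_return_method 1           # GRE
   wccp2_service standard 0        # web-cache service group (port 80 only)

   # ipfw rule that hands intercepted port-80 traffic to squid
   ipfw add 100 fwd 127.0.0.1,3128 tcp from any to any 80 in recv gre0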

Regards,
Hemant.


Re: [squid-users] WCCPv2 and HTTPS problems

2007-11-06 Thread Hemant Raj Chhetri

On Wed, 7 Nov 2007 12:45:11 +0900, Adrian Chadd [EMAIL PROTECTED] wrote:

> On Tue, Nov 06, 2007, Dalibor Dukic wrote:
>> Hi,
>>
>> I configured a transparent squid box and WCCPv2 with a Cisco 6k5. After
>> some time I noticed that clients have problems with HTTPS sites. If I
>> manually configure the proxy setting in the browser and bypass WCCP,
>> everything goes OK.
>>
>> I'm using the standard service group (web-cache). Maybe some web servers
>> check that HTTP and HTTPS requests are coming from the same source
>> address and block HTTPS access. Clients and squid are on public
>> addresses, and these requests come from different source IPs. I can't
>> change this and put clients and squid boxes behind a NAT machine. :(
>> Has anyone noticed the same behavior?
>> Maybe I can set up a service group with ports 80 and 443 so I can
>> resolve the issue with the different IPs - is this correct?
>
> Squid doesn't currently handle transparently intercepting SSL, even for
> the situation you describe above.
>
> You should investigate the TPROXY Squid integration which, when combined
> with a correct WCCPv2 implementation and compatible network design,
> will allow your requests to look like they're coming from your client IPs.
>
> The other alternative is to write or use a very basic TCP connection proxy
> which will handle transparently intercepted connections and just connect
> to the original destination server. This will let the requests come from
> the same IP as the proxy.
>
> (Yes, I've done the above in the lab and verified the concept works fine.)
>
> Adrian
>
> --
> - Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
> - $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -



Hi Adrian,
  I am also facing the same problem with HTTPS sites. Yahoo works fine
for me, but I am having problems with Hotmail. Please advise me on how to
handle this, or whether there is any guide I can refer to.


Thanking you,
Hemant.


[squid-users] hotmail problem

2007-10-25 Thread Hemant Raj Chhetri

Hi Masters,
I have implemented squid-2.6.STABLE16 as a transparent proxy on
FreeBSD 6.2. The proxy is working fine, but I am not able to log in to
Hotmail. It works fine with Windows Vista, but there is a problem with
other OSes like Windows XP, Ubuntu and Fedora. Is there a way to bypass
the proxy in order to make Hotmail work?


Thanking You in advance

Regards,
Hemant.


[squid-users] Dynamic Routing with cisco

2007-10-17 Thread Hemant Raj Chhetri

Hi Masters,
   I was able to make squid work in transparent proxy mode with static
routing only. I am not able to do the same with dynamic routing. Please
guide me on this.


Thanking you

Yours Sincerely,
Hemant


[squid-users] transparent proxying

2007-10-09 Thread Hemant Raj Chhetri

Hi Masters,
I am trying to implement squid as a transparent proxy. I have installed
squid on FreeBSD 6.1. The router I am using is a Cisco 1841 series router,
and I am using WCCPv2. Could you please help me out with how to make it a
transparent proxy?

Thanking you,

Hemant.


Re: [squid-users] Squid returns 404 for URLs with anchors

2006-09-11 Thread Ritu Raj Tiwari

Not a squid issue. Thanks for your help.

On 9/9/06, Ritu Raj Tiwari [EMAIL PROTECTED] wrote:

Henrik,
Thanks for the super prompt response. I am in the process of
installing squid 2.6 and trying this out for myself. One of my
customers who uses squid reported that an earlier version of my
product written using Java JDK 1.3 did not cause any problems. After
upgrading to a newer version of my product written using JDK 5 the
product had been getting 404s when accessing HTTP URLs through squid
2.5. Network monitoring traces showed fragmentids in URLs to be one
differentiating factor.

We believe Java's HTTP client changed between JDK 1.3 and 5 to send
fragids to proxy.
I will let this group know what I find after trying this out first hand.

Once again, many thanks for the quick response.

-Raj

On 9/8/06, Henrik Nordstrom [EMAIL PROTECTED] wrote:
 fre 2006-09-08 klockan 14:39 -0700 skrev Ritu Raj Tiwari:
  Hi,
 
  When we request a URL with a fragment id (anchor:
  http://foo.com/page#bar) through Squid, we get a 404 back immediately.

 Works here..

 What does access.log say?

 Do you use any redirectors?

 Does the web server you query handle requests with anchor URLs?

 Regards
 Henrik





--
-Raj




--
-Raj


Re: [squid-users] Squid returns 404 for URLs with anchors

2006-09-09 Thread Ritu Raj Tiwari

Henrik,
Thanks for the super prompt response. I am in the process of
installing squid 2.6 and trying this out for myself. One of my
customers who uses squid reported that an earlier version of my
product written using Java JDK 1.3 did not cause any problems. After
upgrading to a newer version of my product written using JDK 5 the
product had been getting 404s when accessing HTTP URLs through squid
2.5. Network monitoring traces showed fragmentids in URLs to be one
differentiating factor.

We believe Java's HTTP client changed between JDK 1.3 and 5 to send
fragids to proxy.
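
In other words, something like the first request line below appears to reach
the proxy, instead of the fragment being stripped on the client side (an
illustrative example, not an actual capture):

   GET http://foo.com/page#bar HTTP/1.1    (what the JDK 5 client seems to send)
   GET http://foo.com/page HTTP/1.1        (what a proxy normally receives)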
I will let this group know what I find after trying this out first hand.

Once again, many thanks for the quick response.

-Raj

On 9/8/06, Henrik Nordstrom [EMAIL PROTECTED] wrote:

fre 2006-09-08 klockan 14:39 -0700 skrev Ritu Raj Tiwari:
 Hi,

 When we request a URL with a fragment id (anchor:
 http://foo.com/page#bar) through Squid, we get a 404 back immediately.

Works here..

What does access.log say?

Do you use any redirectors?

Does the web server you query handle requests with anchor URLs?

Regards
Henrik






--
-Raj


[squid-users] Squid returns 404 for URLs with anchors

2006-09-08 Thread Ritu Raj Tiwari

Hi,

When we request a URL with a fragment id (anchor:
http://foo.com/page#bar) through Squid, we get a 404 back immediately.
Is my client in violation of the HTTP spec or is this a Squid
limitation? My HTTP client is Java JDK 5.

Thanks.
--
-Raj


[squid-users] proxy.pac file problem

2006-09-01 Thread Raj

Hi All,

I am running Version 2.5.STABLE10 on an Open BSD operating system. I
am having problems with proxy.pac file. I have the following proxy.pac
file.

if (isInNet(myIpAddress(), "172.26.96.0", "255.255.240.0"))
    return "PROXY 172.26.11.50:3128; PROXY 172.26.11.150:3128";

if (isInNet(myIpAddress(), "172.26.112.0", "255.255.240.0"))
    return "PROXY 172.26.11.150:3128; PROXY 172.26.11.50:3128";

else
    return "PROXY 172.26.11.50:3128; PROXY 172.26.11.150:3128";
    return "PROXY 172.26.11.150:3128; PROXY 172.26.11.50:3128";

So when the proxy server 172.26.11.50 goes down, all the clients
failover to 172.26.11.150. But when the proxy server 172.26.11.150
goes down, clients are not failing over to 172.26.11.50.

Why is it failing over from 172.26.11.50 to 172.26.11.150 but not vice versa?
Could someone tell me if there is any syntax error in my proxy.pac file?
I would really appreciate it.
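
For reference, this is what I believe a syntactically complete version
should look like, with the tests above wrapped in FindProxyForURL
(untested; same addresses as above):

   function FindProxyForURL(url, host)
   {
       // clients in 172.26.96.0/20 prefer .50 and fail over to .150
       if (isInNet(myIpAddress(), "172.26.96.0", "255.255.240.0"))
           return "PROXY 172.26.11.50:3128; PROXY 172.26.11.150:3128";

       // clients in 172.26.112.0/20 prefer .150 and fail over to .50
       if (isInNet(myIpAddress(), "172.26.112.0", "255.255.240.0"))
           return "PROXY 172.26.11.150:3128; PROXY 172.26.11.50:3128";

       // everyone else
       return "PROXY 172.26.11.50:3128; PROXY 172.26.11.150:3128";
   }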

Thanks.


[squid-users] proxy.pac file

2006-08-29 Thread Raj

Hi all,

I am running squid version 2.5.stable10. All the users use the
following proxy.pac file (browser pointing to the following proxy.pac
file).

p3 = "PROXY proxy03.domain.com:3128";
p4 = "PROXY proxy04.domain.com:3128";

p34 = p3 + "; " + p4;
p43 = p4 + "; " + p3;

function FindProxyForURL(url, host)
{
  // All unqualified host names are to go via the GAN (no proxy)
  if (isPlainHostName(host)) return "DIRECT";

  // Any direct LAN IP connections are allowed
  if (shExpMatch(url, "*://172.*")     ||
      shExpMatch(url, "*://10.*")      ||
      shExpMatch(url, "*://192.168.*") ||
      shExpMatch(url, "*://127.0.0.1*")) return "DIRECT";

  // Assign proxy based on IP address of client
  // VLANs 96 -- 111
  if (isInNet(myIpAddress(), "172.26.96.0", "255.255.240.0")) return p34;

  // VLANs 112 -- 128
  if (isInNet(myIpAddress(), "172.26.112.0", "255.255.240.0")) return p43;

  else
    return p34;
}


All the users from the 172.26.96.0 - 172.26.111.0 subnets go to
proxy03.domain.com first. If proxy03 is down, the client should
automatically try proxy04.domain.com. But that's not happening: if
proxy03 is down, the clients are not failing over to proxy04. Is there
any syntax error in p34?

Should I have something like this for it to work?

if (isInNet(myIpAddress(), "172.26.96.0", "255.255.240.0"))
    return "PROXY proxy03.domain.com:3128; PROXY proxy04.domain.com:3128";

Or can I add the following 'A' records to my DNS server

proxy    IN    A    172.16.0.1    ; IP address of proxy03
         IN    A    172.16.0.2    ; IP address of proxy04

and, in the proxy.pac file:

if (isInNet(myIpAddress(), "172.26.96.0", "255.255.240.0"))
    return "PROXY proxy.domain.com:3128";

Any suggestions would be really appreciated.

Thanks


[squid-users] Strange Problem

2006-07-18 Thread Raj

Hi,

I am running Version 2.5.STABLE10. I have a strange problem with one
of the web sites. If I access the web site https://66.227.81.53/, it
doesn't work. But if I access the same web site with http instead of
https, http://66.227.81.53:443/ it works fine.

Below are the access logs:

1153199709.750  2 172.26.101.76 TCP_DENIED/407 1683 CONNECT
66.227.81.53:443 - NONE/- text/html
1153199709.756  1 172.26.101.76 TCP_DENIED/407 1753 CONNECT
66.227.81.53:443 - NONE/- text/html
1153199716.230   6473 172.26.101.76 TCP_MISS/000 420 CONNECT
66.227.81.53:443 auchoa FIRST_UP_PARENT/172.26.1.67 -
1153199716.249  0 172.26.101.76 TCP_DENIED/407 1683 CONNECT
66.227.81.53:443 - NONE/- text/html
1153199716.252  1 172.26.101.76 TCP_DENIED/407 1753 CONNECT
66.227.81.53:443 - NONE/- text/html
1153199716.714    461 172.26.101.76 TCP_MISS/000 387 CONNECT
66.227.81.53:443 auchoa FIRST_UP_PARENT/172.26.1.67 -
1153199741.052  0 172.26.101.76 TCP_DENIED/407 1707 GET
http://66.227.81.53:443/ - NONE/- text/html
1153199741.056  1 172.26.101.76 TCP_DENIED/407 1777 GET
http://66.227.81.53:443/ - NONE/- text/html
1153199741.865    808 172.26.101.76 TCP_MISS/600 9 GET
http://66.227.81.53:443/ auchoa FIRST_UP_PARENT/172.26.1.67 -


I have the following ACL for port 443

acl SSL_ports port 443
http_access deny CONNECT !SSL_ports

I am not sure why it works if I use http but not https.

Thanks


[squid-users] shockwave problem

2006-07-17 Thread Raj

Hello all,

I'm using Squid Version 2.5.STABLE10 for Internet access, which means each
Internet request goes through 2 proxies, one being the child and the
other the parent:
PC -> Squid proxy1 -> Squid proxy2 -> Internet
Here I'm encountering a problem. When a user tries to access
http://www.adobe.com/shockwave/welcome, it should install a Shockwave
player from the Internet (www.macromedia.com). But it fails to
install Shockwave. It displays the message "When you see the animation
playing below the labeled box, then your installation was successful."
I am not sure why it couldn't install Shockwave. I am also having
problems streaming Windows Media Player.

I would appreciate it if someone could help me fix this issue.

Thanks.


[squid-users] java script web page problems

2006-07-16 Thread Raj

Hi All,

We are running Squid Cache: Version 2.5.STABLE10. We are having issues
accessing web sites if there are any Java scripts on the web page.
Do I need to add any ACLs to allow Java? Any help would be really appreciated.

We have the following ACL's:

acl java_jvm browser Java/1.4
http_access allow java_jvm

Regards,
Raj.


[squid-users] One web site doesn't work if I use proxy.pac file

2006-04-10 Thread Raj
Hi All,

I am running Squid Cache: Version 2.5.STABLE10. I am having a problem
with one web site. If I use the proxy.pac file, it says "page cannot be
displayed". If I use manual proxy server settings, it works fine. I don't
have any rules in the proxy.pac file for this web site, and I couldn't
figure out why it doesn't work when I use the proxy.pac file.

When I access http://abnamro.compliance.bdw.com/ it redirects to
http://abnamro.compliance.bdw.com/Login.aspx?ReturnUrl=/Default.aspx.
I am not sure why it doesn't work when I use the proxy.pac file. It works
fine even if I use the IP address instead of the domain name. Any help
would be really appreciated.

Thanks.


[squid-users] proxy.pac help

2006-03-18 Thread Raj
Hi All,

I am running Squid  2.5.STABLE10. All the clients in our company use
proxy.pac file in the browser settings. I need some help with the
proxy.pac file. At the moment I have the following configuration:

// Assign Proxy based on IP Address of Client
  if (isInNet(myIpAddress(), "172.16.96.0", "255.255.240.0"))
      return "PROXY proxy03.au.ap.abnamro.com:3128; PROXY proxy04.au.ap.abnamro.com:3128";

If the source IP address is from that IP range, it should go to
proxy03 first, and if proxy03 is down it should go to proxy04. But that
is not happening: if proxy03 is down, it is not going to proxy04. Is
there any syntax error in the above config?

What is the correct syntax in proxy.pac file so that if proxy03 is
down it will go to proxy04?

Thanks.


Re: [squid-users] cachemgr.cgi - working yahoooooooo

2006-03-06 Thread Raj
Thank you for the help. I had the ACL below before the cachemgr ACLs.
After I moved these ACLs below the cachemgr ACLs, it's working fine.


acl deny_web_group external wbinfo_group_helper restricted1
http_access deny deny_web_group
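
So the relevant part of squid.conf now reads roughly like this (the manager
and CMGR ACLs are as posted in my other messages; the ordering is what
matters):

   acl manager proto cache_object
   http_access allow manager localhost
   http_access allow manager CMGR
   http_access deny manager

   # the group deny, which invokes the external helper (and hence
   # authentication), now comes after the cachemgr rules
   acl deny_web_group external wbinfo_group_helper restricted1
   http_access deny deny_web_group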


On 3/6/06, Mark Elsen [EMAIL PROTECTED] wrote:
  Hi All,
 
  I have been struggling to configure cachemgr.cgi on my squid
  (2.5STABLE 10) server. It works fine if I disable NTLM authentication.
  If I enable NTLM authentication I am not able to access the
  cachemgr.cgi web page. It says access denied. Then I did a diff on
  squid.auth.conf file (NTLM enabled config) and squid.noauth.conf (NTLM
  disabled config). I am not able to figure which ACL is denying me
  access when I try to access the cachemgr.cgi web page:
 

  - Make sure cachemgr is allowed before any NTLM auth stuff.
  - http://www.squid-cache.org/Doc/FAQ/FAQ-9.html

  M.



[squid-users] cachemgr.cgi problem

2006-03-05 Thread Raj
Hi All,

I have been struggling to configure cachemgr.cgi on my squid
(2.5STABLE 10) server. It works fine if I disable NTLM authentication.
If I enable NTLM authentication I am not able to access the
cachemgr.cgi web page. It says access denied. Then I did a diff on
squid.auth.conf file (NTLM enabled config) and squid.noauth.conf (NTLM
disabled config). I am not able to figure out which ACL is denying me
access when I try to access the cachemgr.cgi web page:

diff squid.auth.conf squid.noauth.conf

 #cache_peer 172.161.195 parent 3128 0 weight=15 no-digest proxy-only
 #cache_peer 172.161.67 parent 3128 0 weight=10 no-digest proxy-only
446d443
 no_cache deny ifrmarkets
448a446
 no_cache deny ifrmarkets
485a484
 #cache_mem 256 MB
682a682
 #cache_dir ufs /var/squid/cache 5120 16 256
1191c1191
 auth_param basic children 50
---
 auth_param basic children 50
1296,1297c1296
 #external_acl_type wbinfo_group_helper ttl=900 children=125 %LOGIN
/usr/local/squid/libexec/wbinfo_group.pl
 external_acl_type wbinfo_group_helper ttl=900 children=125 %LOGIN
/opt/squid/libexec/wbinfo_group.pl
---
 external_acl_type wbinfo_group_helper ttl=900 children=125 %LOGIN 
 /usr/local/squid/libexec/wbinfo_group.pl
1711,1713d1709
 acl deny_web_group external wbinfo_group_helper restricted1
 http_access deny deny_web_group

1730d1725
 acl Safe_ports port 20001 # TBGW
1732a1728
 acl Safe_ports port 20001 # TBGW server
1743,1744c1739
 # NOT
 # on default values:
---
 # NOTE on default values:
1764,1765c1759,1760
 acl ausv-2 src 172.16.11.150/255.255.255.255
 acl CMGR src 172.16.0.0/255.255.0.0
---
 acl ausv-2 src 172.16.11.150/32
 acl CMGR src 172.16.0.0/16
1767d1761
 http_access deny manager !localhost !ausv-2 !CMGR !ausv-1
1768a1763,1767
 http_access allow manager localhost
 http_access allow manager ausv-1
 http_access allow manager ausv-2
 http_access allow manager CMGR
 http_access deny manager
1769a1769

1774,1780d1773


 http_access allow localhost
 http_access allow ausv-1
 http_access allow ausv-2
 http_access allow CMGR

1792c1785
 acl NOAUTH src 172.16.69.14/32 172.16.70.204/32 172.16.78.20/32
172.16.78.37/32 172.16.78.39/32 172.16.100.68/32 172.16.70.64/32
172.16.117.192/32 172.16.11.150/32 10.185.234.13/32
---
 acl NOAUTH src 172.16.69.14/32 172.16.70.204/32 172.16.78.20/32 
 172.16.78.37/32 172.16.78.39/32 172.16.100.68/32
1835d1827
 acl jesse src 172.16.117.192/32
1880a1873,1875
 acl NetOMS-ip5 dst 66.227.81.53/32
 acl NetOMS-ip6 dst 66.227.81.51/32
 acl NetOMS-ip7 dst 66.227.81.52/32
1888d1882
 #acl CAAML dst 62.17.163.240/32
 http_access allow ECI
 http_access allow APH
1904,1906d1892
 #http_access allow CAAML
 http_access allow jesse
 http_access allow TBGW
1919a1906
 ### JVM NTLM ISSUE RECTIFICATION
1920a1908,1909
 acl java_jvm browser Java/1.4
 http_access allow java_jvm
1936a1926,1928
 http_access allow NetOMS-ip5
 http_access allow NetOMS-ip6
 http_access allow NetOMS-ip7
1943a1936
 http_access allow AME-3
1954c1947,1948

---
 http_access allow TBGW

1959a1954

1968d1962
 http_access allow Internet
1974,1975c1968,1969
 acl msnoverhttp url_regex -i /opt/squid/etc/msnoverhttp.txt
 http_access deny mimeblockq
---
 acl msnoverhttp url_regex -i /opt/squid/etc/msnoverhttp.txt
 http_access deny mimeblockq
1995,1997c1989,1991
 http_access allow Allowed-ABC-AU
 http_access allow Allowed-ABC-NZ
 http_access allow au-company AuthorisedUsers
---
 #http_access allow Allowed-ABC-AU
 #http_access allow Allowed-ABC-NZ
 #http_access allow au-company AuthorisedUsers
1999c1993
 #http_access allow au-company
---
 http_access allow au-company
 #http_access allow localhost
---
 http_access allow localhost
2016c2010
 http_access deny all
---
 http_access deny All
2050,2052c2044
 icp_access allow CMGR
 icp_access deny all
 #icp_access deny all
---
 # icp_access deny all
2206d2197
 cache_mgr [EMAIL PROTECTED]

Any help would be really appreciated.


[squid-users] cachemgr problem - authentication problem

2006-03-04 Thread Raj
Hi Henrik,

As you mentioned in your previous email to me (please refer to below
email), cachemgr works fine if I disable authentication (NTLM
authentication). Below is my authentication acl

acl AUTEAM src 172.26.0.0/16
acl AuthorisedUsers proxy_auth REQUIRED
http_access allow AUTEAM AuthorisedUsers

Even if I add the below ACL on top of the above three ACL's it is
still not working:

acl AUSV 172.26.x.x/255.255.255.255
http_access allow

Only with the below configuration I am able to logon to cachemgr page
(Disable NTLM authentication)


acl AUTEAM src 172.26.0.0/16
http_access allow AUTEAM

Is there a way I can configure squid to allow one IP address,
172.26.0.1, to access the cachemgr page without using NTLM authentication?

I just want to access cachemgr page from 172.26.0.1.
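
In other words, would something like this be the right idea - a complete
version of the fragment above, placed before any proxy_auth rules?

   acl manager proto cache_object
   acl AUSV src 172.26.0.1/32
   http_access allow manager AUSV
   http_access allow manager localhost
   http_access deny manager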

Thanks.


 While trying to retrieve the URL: cache_object://172.26.11.150/
 The following error was encountered:
 Cache Access Denied.
 Sorry, you are not currently allowed to request:
 cache_object://172.26.11.150/ from this cache until you have
 authenticated yourself.
 You need to use Netscape version 2.0 or greater, or Microsoft Internet
 Explorer 3.0, or an HTTP/1.1 compliant browser for this to work.
 Please contact the cache administrator if you have difficulties
 authenticating yourself or change your default password.


  Looks like your request was denied due to proxy authentication required.

  Make sure your cachemgr rules are first, before any other http_access
  rules requiring authentication.
 
 Regards
  Henrik


[squid-users] cachemgr problem

2006-03-02 Thread Raj
Hi All,

I am having problems logging on to cachemgr.cgi web page. I am running
squid-2.5.STABLE10 and apache 2.0.55 on an OpenBSD server.

My squid.conf file:

acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl CMGR src 172.26.0.0/255.255.0.0


http_access allow manager localhost
http_access allow manager CMGR
http_access deny manager
cachemgr_passwd password all

I am getting the following error:

While trying to retrieve the URL: cache_object://172.26.11.150/

The following error was encountered:

Cache Access Denied.

Sorry, you are not currently allowed to request:

cache_object://172.26.11.150/ from this cache until you have
authenticated yourself.

You need to use Netscape version 2.0 or greater, or Microsoft Internet
Explorer 3.0, or an HTTP/1.1 compliant browser for this to work.
Please contact the cache administrator if you have difficulties
authenticating yourself or change your default password.

Thanks for your help.


Re: [squid-users] cachemgr problem

2006-03-02 Thread Raj
Hi Henrik,

Thanks a lot for the reply. I don't have any more ACL's before
cachemgr rules. I just have this acl on top of cachemgr rules.

external_acl_type wbinfo_group_helper ttl=900 children=125 %LOGIN
/opt/squid/libexec/wbinfo_group.pl

I have gone through all FAQ's and mailing lists. But I couldn't find
any solution for this.

What else can I check?

Thanks


On 3/3/06, Henrik Nordstrom [EMAIL PROTECTED] wrote:
 fre 2006-03-03 klockan 10:19 +1100 skrev Raj:

  While trying to retrieve the URL: cache_object://172.26.11.150/
  The following error was encountered:
  Cache Access Denied.
  Sorry, you are not currently allowed to request:
  cache_object://172.26.11.150/ from this cache until you have
  authenticated yourself.
  You need to use Netscape version 2.0 or greater, or Microsoft Internet
  Explorer 3.0, or an HTTP/1.1 compliant browser for this to work.
  Please contact the cache administrator if you have difficulties
  authenticating yourself or change your default password.


 Looks like your request was denied due to proxy authentication required.

 Make sure your cachemgr rules are first, before any other http_access
 rules requiring authentication.

 Regards
 Henrik


 -BEGIN PGP SIGNATURE-
 Version: GnuPG v1.4.2.1 (GNU/Linux)

 iD8DBQBEB4kL516QwDnMM9sRAmgDAJ4wS50ry71izx+J4M9onmnwo9owcgCgiBJs
 gS27Lwg7n9KnefUc+NlraXE=
 =yzrX
 -END PGP SIGNATURE-





[squid-users] strange problem with squid 2.5 STABLE10

2006-02-28 Thread Raj
Hi All,

I have a squid proxy (Squid Cache: Version 2.5.STABLE10) running on a
Open BSD server. I am having a strange problem with one particular web site
http://62.17.163.240/. It takes about 6 to 8 secs to access each link
on this web site. But the strange problem is if I change the time on
client's PC (forward the time by 1 minute) then the performance is
pretty normal. I mean it takes only a second or so to access the web
site (http://62.17.163.240/). It doesn't make any sense to me. But
squid experts can tell me if there is any reason behind this.

Also, I tried to access this particular web site using Microsoft Proxy,
which is on the same LAN as the squid proxy. If I use Microsoft Proxy I
have no issues, even if I don't change the time on the client's PC. It is
not possible to change the time on all clients' PCs because all the
PCs get their time from an NTP server. Is there a way to fix this problem?

Any help would be really appreciated.

Thanks


[squid-users] squid slow for one particular web site

2006-02-27 Thread Raj
Hi All,

I have a squid proxy (Squid Cache: Version 2.5.STABLE10) running on an
OpenBSD server. I am having a problem with one particular web site,
http://62.17.163.240/. When users try to access this web site it
takes 6 to 8 seconds to download each page, whereas if we try to
access any other web site using the proxy it takes only a second to
download the page. If we bypass the proxy and access the web site
http://62.17.163.240/ it's pretty quick. I am not able to figure out
why it is only this particular web site. Squid should cache the site, and
when you access this web site a 2nd or 3rd time it should come
from the cache instead of going to the origin server. What can I do to
fix the problem?

Below are the logs:

1141028847.893772 172.26.11.50 TCP_MISS/304 287 GET
http://62.17.163.240/lo_0668/jscripts/common.js - DIRECT/62.17.163.240
-
1141028848.412   3382 172.26.11.50 TCP_REFRESH_HIT/304 286 GET
http://62.17.163.240/lo_0668/interface_images/generic/arrowr_r.gif -
DIRECT/62.17.163.240 -
1141028851.208   3315 172.26.11.50 TCP_MISS/304 287 GET
http://62.17.163.240/lo_0668/jscripts/browserfuncs.js -
DIRECT/62.17.163.240 -
1141028852.022814 172.26.11.50 TCP_MISS/304 287 GET
http://62.17.163.240/lo_0668/jscripts/page.js - DIRECT/62.17.163.240 -
1141028852.848821 172.26.11.50 TCP_MISS/304 287 GET
http://62.17.163.240/lo_0668/jscripts/spi.js - DIRECT/62.17.163.240 -
1141028853.690842 172.26.11.50 TCP_MISS/304 287 GET
http://62.17.163.240/lo_0668/jscripts/generic_oce.js -
DIRECT/62.17.163.240 -
1141028854.513805 172.26.11.50 TCP_MISS/304 287 GET
http://62.17.163.240/lo_0668/jscripts/communicate.js -
DIRECT/62.17.163.240 -
1141028855.486972 172.26.11.50 TCP_MISS/304 286 GET
http://62.17.163.240/lo_0668/jscripts/scofind.js -
DIRECT/62.17.163.240 -
1141028856.273788 172.26.11.50 TCP_REFRESH_HIT/304 286 GET
http://62.17.163.240/lo_0668/interface_images/generic/arrowl.gif -
DIRECT/62.17.163.240 -
1141028856.458946 172.26.11.50 TCP_REFRESH_HIT/304 286 GET
http://62.17.163.240/lo_0668/interface_images/generic/arrowr.gif -
DIRECT/62.17.163.240 -
1141028857.266793 172.26.11.50 TCP_REFRESH_HIT/304 286 GET
http://62.17.163.240/lo_0668/interface_images/generic/pix_transp.gif -
DIRECT/62.17.163.240 -
1141028857.268791 172.26.11.50 TCP_REFRESH_HIT/304 286 GET
http://62.17.163.240/lo_0668/interface_images/generic/42.gif -
DIRECT/62.17.163.240 -
1141028857.450992 172.26.11.50 TCP_REFRESH_HIT/304 286 GET
http://62.17.163.240/lo_0668/interface_images/generic/pix_transp.gif -
DIRECT/62.17.163.240 -
1141028858.074767 172.26.11.50 TCP_REFRESH_HIT/304 286 GET
http://62.17.163.240/lo_0668/interface_images/generic/arrowl_r.gif -
DIRECT/62.17.163.240 -
1141028858.243975 172.26.11.50 TCP_REFRESH_HIT/304 286 GET
http://62.17.163.240/lo_0668/interface_images/generic/43.gif -
DIRECT/62.17.163.240 -
1141028860.174   3701 172.26.11.50 TCP_REFRESH_HIT/304 286 GET
http://62.17.163.240/lo_0668/interface_images/generic/pix_transp.gif -
DIRECT/62.17.163.240 -
1141028860.475 35 172.26.11.50 TCP_IMS_HIT/304 216 GET
http://62.17.163.240/lo_0668/interface_images/generic/arrowr_r.gif -
NONE/- image/gif
1141028861.448767 172.26.11.50 TCP_REFRESH_HIT/304 286 GET
http://62.17.163.240/lo_0668/lo_0668/lo_0668page018.html -
DIRECT/62.17.163.240 -
1141028861.713   3926 172.26.11.50 TCP_MISS/404 1843 GET
http://62.17.163.240/favicon.ico - DIRECT/62.17.163.240 text/html
1141028862.121752 172.26.11.50 TCP_MISS/404 1843 GET
http://62.17.163.240/favicon.ico - DIRECT/62.17.163.240 text/html
1141028862.390942 172.26.11.50 TCP_MISS/304 287 GET
http://62.17.163.240/lo_0668/jscripts/common.js - DIRECT/62.17.163.240
-
1141028863.315925 172.26.11.50 TCP_MISS/304 287 GET
http://62.17.163.240/lo_0668/jscripts/browserfuncs.js -
DIRECT/62.17.163.240 -
1141028864.245929 172.26.11.50 TCP_MISS/304 287 GET
http://62.17.163.240/lo_0668/jscripts/page.js - DIRECT/62.17.163.240 -
1141028865.181936 172.26.11.50 TCP_MISS/304 287 GET
http://62.17.163.240/lo_0668/jscripts/spi.js - DIRECT/62.17.163.240 -
1141028866.105924 172.26.11.50 TCP_MISS/304 287 GET
http://62.17.163.240/lo_0668/jscripts/generic_oce.js -
DIRECT/62.17.163.240 -
1141028866.909803 172.26.11.50 TCP_MISS/304 287 GET
http://62.17.163.240/lo_0668/jscripts/communicate.js -
DIRECT/62.17.163.240 -
1141028867.724802 172.26.11.50 TCP_MISS/304 286 GET
http://62.17.163.240/lo_0668/jscripts/scofind.js -
DIRECT/62.17.163.240 -

Thanks


Re: [squid-users] squid slow for one particular web site

2006-02-27 Thread Raj
The first page is a bit slow. But other links are okay if I bypass
proxy. When you first access that web site using proxy it should go to
the origin server. But for subsequent requests it should get it from
the cache. But that is not happening for this site.


Thanks

On 2/28/06, Mark Elsen [EMAIL PROTECTED] wrote:
 On 2/27/06, Raj [EMAIL PROTECTED] wrote:
  Hi All,
 
  I have a squid proxy (Squid Cache: Version 2.5.STABLE10) running on a
  Open BSD server. I am having a problem with one particular web site
  http://62.17.163.240/. When users try to access this weeb site it
  takes 6 to 8 seconds to download each page. Where as if we try to
  access any other web site using the proxy it takes only a second to
  download the web page. If we by pass the proxy and access the web site
  http://62.17.163.240/ it's pretty quick. I am not able to figure out
  why only this particular web site. It should cache the web site and
  when you try to access this weeb site 2nd or 3rd time it should get it
  from the cache instead of going to the origin server. What can I do to
  fix the problem?
 
 

 Hmm, as far as I could experience now at home, on direct-DSL
 the site is pretty slow too.

 It contains at the beginning a line as in ;

 <META HTTP-EQUIV="refresh" CONTENT="0;URL=/portal/welcome.asp">

 I am under the vast impression, that the underlying portal is rather
 very slow.

 M.



[squid-users] how do I check if squid is caching

2006-02-26 Thread Raj
Hi All,

How do I check if squid is caching? If I look at my access logs it
says it is going DIRECT for all the requests.

1140999540.416216 172.26.11.50 TCP_MISS/200 3651 GET
http://images.google.com.au/images? - DIRECT/216.239.53.104 image/jpeg
1140999540.418245 172.26.11.50 TCP_MISS/200 2261 GET
http://images.google.com.au/images? - DIRECT/216.239.53.104 image/jpeg
1140999540.421   2134 172.26.11.50 TCP_MISS/200 25451 GET
http://adsfac.net/ag.asp? - DIRECT/203.110.143.97
application/x-shockwave-flash
1140999540.435214 172.26.11.50 TCP_MISS/200 3210 GET
http://images.google.com.au/images? - DIRECT/216.239.53.99 image/jpeg
1140999540.517 73 172.26.11.50 TCP_MISS/200 871 GET
http://mercury.tiser.com.au/jserver/acc_random=82578961/SITE=NEWS/AREA=NEWS.ROS/AAMSZ=120x20/pageid=11383744
- DIRECT/202.58.36.22 text/html
1140999540.587 63 172.26.11.50 TCP_MISS/200 330 GET
http://mercury.tiser.com.au//IMPCNT/ccid=3300/acc_random=82578961/SITE=NEWS/AREA=NEWS.ROS/AAMSZ=120x20/pageid=11383744
- DIRECT/202.58.36.22 image/gif
1140999540.603214 172.26.11.50 TCP_MISS/200 3420 GET
http://images.google.com.au/images? - DIRECT/216.239.53.104 image/jpeg
1140999540.636207 172.26.11.50 TCP_MISS/200 2706 GET
http://images.google.com.au/images? - DIRECT/216.239.53.104 image/jpeg
1140999540.646224 172.26.11.50 TCP_MISS/200 3908 GET
http://images.google.com.au/images? - DIRECT/216.239.53.104 image/jpeg
1140999540.691104 172.26.11.50 TCP_MISS/200 4546 GET
http://mercury.tiser.com.au/jserver/acc_random=19238704/SITE=NEWS/AREA=NEWS.HOME/AAMSZ=634X45/pageid=11383744
- DIRECT/202.58.36.22 text/html


Does it mean that it is going to the origin server for all the
requests? Is there a way I can check if the squid is caching?

Thanks


Re: [squid-users] Need help to improve squid performance

2006-02-22 Thread Raj
After I upgrade the memory to 2 GB, can I increase the cache_mem value
to 256 MB? At the moment it is 64 MB.

Thanks

On 2/22/06, Kevin [EMAIL PROTECTED] wrote:
  We are running OpenBSD version 3.6

 I'd recommend going to 3.8.


   Can you define performance issues?
 
  If I access a website it takes 6 to 8 seconds to download the page. We
  have a 10MB internet link and the link utilisation is only 50% on
  average.

 That seems very high.   Something is broken somewhere.

 My home squid is on a minimal OpenBSD machine, about the same as the
 hardware you specify, but on a slow cablemodem.  In this environment,
 it takes about 8 seconds for CNN to fully load, but barely a half
 second for Google, maybe a second for www.undeadly.org

 Of course this is without the two-layer model and without NTLM.

 Kevin



[squid-users] Need help to improve squid performance

2006-02-21 Thread Raj
Hi All,

I need your suggestions to improve the performance of our proxy
servers setup. we have 4 squid proxy servers running on Open BSD (HP
Vectra 600, PIII 700). Below is squid -v output

/opt/squid/sbin/squid -v
Squid Cache: Version 2.5.STABLE10
configure options:  --prefix=/opt/squid --enable-auth=ntlm,basic
--enable-external-acl-helpers=wbinfo_group --localstatedir=/var/squid
--enable-snmp

We have two 1st tier proxies located on LAN and two 2nd tier proxies
located in the DMZ. 1st Tier uses NTLM via the the Samba WINBIND
process and 2nd tier with no authentication required.

Server A & Server C - 1st tier proxies
Server B & Server D - 2nd tier proxies

We have been having performance issues for a long time. I have
recommended them to buy Sun Fire V240 servers to replace these BSD
boxes. But they want to increase memory (from 256 MB to 2GB) on the
existing servers and see if that improves the performance.

Below is the current cache size on all 4 servers:

cache_dir ufs /var/squid/cache 400 16 256

On Server A & Server C we have the following config option on the
cache_peer line:

cache_peer ServerB parent 3128 3130 weight=15 no-query no-digest

cache_peer ServerD parent 3128 3130 weight=10 no-query no-digest

After I upgrade the memory I want to increase the cache_dir size to
2 GB. Could someone advise whether I need to change the cache_peer
options to improve the performance? I also have the cache_mem value set to
64 MB at the moment. Do I need to increase that after I upgrade the
memory? Any other help to improve the performance would be really
appreciated.
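
Concretely, I am thinking of something like the following on each server
after the memory upgrade (sizes are only what I have in mind, not tested):

   cache_dir ufs /var/squid/cache 2000 16 256
   cache_mem 256 MB

Does that look sensible for 2 GB of RAM?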

Thanks.


Re: [squid-users] Need help to improve squid performance

2006-02-21 Thread Raj
On 2/22/06, Kevin [EMAIL PROTECTED] wrote:
 On 2/21/06, Raj [EMAIL PROTECTED] wrote:
  I need your suggestions to improve the performance of our proxy
  servers setup. we have 4 squid proxy servers running on Open BSD (HP
  Vectra 600, PIII 700). Below is squid -v output

 What version of OpenBSD?

We are running OpenBSD version 3.6


 That's some pretty darn old hardware, with a slow CPU, slow IDE hard
 drive, slow RAM, and only expandable to 384MB.


  /opt/squid/sbin/squid -v
  Squid Cache: Version 2.5.STABLE10
  configure options:  --prefix=/opt/squid --enable-auth=ntlm,basic
  --enable-external-acl-helpers=wbinfo_group --localstatedir=/var/squid
  --enable-snmp
 
  We have two 1st tier proxies located on LAN and two 2nd tier proxies
  located in the DMZ. 1st Tier uses NTLM via the the Samba WINBIND
  process and 2nd tier with no authentication required.
 
  Server A  Server C - 1st tier proxies
  Server B  Server D - 2nd tier proxies
 
  We have been having performance issues for a long time. I have

 Can you define performance issues?

If I access a website it takes 6 to 8 seconds to download the page. We
have a 10MB internet link and the link utilisation is only 50% on
average.


 One thing I found helped significantly in resolving performance
 complaints was to implement a monitoring system to track network and
 Internet utilization, Squid's own internal SNMP statistics, OS
 statistics, and also single URL request processing time through each
 proxy and also directly from a DMZ host that doesn't go through any
 proxy.

 I've fixed a lot of the proxy is slow problems by reconfiguring DNS
 servers, upgrading the firmware on switches and routers, converting
 from a DS3 to an OC3, fixing speed and duplex mismatches -- almost
 never has the fix for a the proxy is slow complaint involved
 actually touching OpenBSD or Squid or the proxy server hardware.

 IOW, use instrumentation to isolate each performance issue and
 address each issue individually.


  recommended them to buy Sun Fire V240 servers to replace these BSD
  boxes.

 Actually, I'm not all that impressed with the price/performance of the
 V240 series, IMHO the v20z gives more bang for the buck (and can run
 OpenBSD with dual procs).


  But they want to increase memory (from 256 MB to 2GB) on the
  existing servers and see if that improves the performance.

 Not a bad idea.

  Below is the current cache size on all 4 servers:
 
  cache_dir ufs /var/squid/cache 400 16 256
 
  On Server A  Server C we have the following config option on the
  cache_peer line:
 
  cache_peer ServerB parent 3128 3130 weight=15 no-query no-digest
  cache_peer ServerD parent 3128 3130 weight=10 no-query no-digest

 What sort of firewall is between AC (inside) and BD (outside)?  what
 is the link throughput and latency?

We have a Cisco Firewall between the 1st tier and 2nd tier proxies. I
am not sure about the model. But according to the network guys the
Internet link is 10mb and the utilisation is only 50%.






  After I upgrade the memory I want to increase the cache_dir size to
  2GB.

 Squid will complain if you don't at least add sufficient cache_dir space
 to match the new cache_mem size.

If I increase the cache_mem size to 256MB what should be the cache_dir size?



 I'd suggest adding a new fast disk to each cache server, dedicated to 
 cache_dir.


  Could someone suggest me if I need to change the cache_peer
  options to improve the performance.

 It depends on the nature of the performance issues.
 Enabling digests will give a better hit ratio on content which can be
 served out of cache, without adding any real delay and only minimal
 memory overhead.

Enabling digests means removing the no-digest option on the cache_peer line, right?


 Enabling ICP would give even higher cache hit ratios, but the extra
 delay in processing requests (waiting for ICP responses) may be
 unacceptable.

Again, enabling ICP means removing the no-query option, right?
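
If I understand correctly, the cache_peer lines would then simply become
something like this (same servers and weights as in my original post, with
digests and ICP queries both enabled):

   cache_peer ServerB parent 3128 3130 weight=15
   cache_peer ServerD parent 3128 3130 weight=10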



   I also have cache_mem value set to
  64mb at the moment. Do I need to increase that after I upgrade the
  memory.

 If you bring each server up to 2GB of RAM, you'll want to increase
 cache_mem significantly, and remember to adjust /etc/login.conf to
 permit the squid user to allocate enough memory and file descriptors.
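
Just to check I understand the /etc/login.conf part: is this the kind of
change you mean (a sketch with arbitrary values; I would then put the
squid user in this login class)?

   squid:\
           :datasize-max=1024M:\
           :datasize-cur=1024M:\
           :openfiles-max=8192:\
           :openfiles-cur=8192:\
           :tc=daemon: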

 OpenBSD isn't as aggressive as other OSes about using free memory to
 cache frequently accessed disk pages, so I set cache_mem very high on
 OpenBSD.


  Any other help to improve the performance would be really
  appreciated.

 Faster/more memory/CPU/disk or even more servers in parallel could help.
 Check that the first tier is configured for never_direct, neither
 clients nor the internal proxy firewalls should be doing DNS lookups
 or trying direct connections to Internet destinations.
 You may be able to turn off request logging on the 2nd tier?

 Kevin



Re: [squid-users] parent cache information

2006-02-19 Thread Raj
Thanks a lot for that. I can only specify the proxy-only option on server
B, right? Because I am not using the cache_peer option on server A, which
is facing the internet.
If I use the proxy-only option on Server B, then Server B just acts as a
proxy and it will cache only non-duplicate content. Are there any
benefits in using 1st tier and 2nd tier proxies? Please reply.


 
  Thanks a lot for your help once again. If I add proxy-only option on
  the peer_cache line Server B wont cache anything right? Because Server
  A is facing the internet and it will cache everything. Lets say I
  access the website google.com, Server A will cache google.com. Since
  Server A has google.com in the cache Server B wont cache that web
  site. Then why should I enable cache_dir on Server B. I am a bit
  confused here about how the caching works. Please reply.
 

 That's quite right.  Running a cache_dir on Server B would be senseless.  
 Guess that's what I get for posting without thinking...

 Either use a cache_dir on both servers without the proxy-only option (with 
 the hope that the two will cache SOME non-duplicate content), or use the 
 proxy-only option and let the cache_dir on Server B sit unused (compile in 
 the null storeio in the future).

 Chris



Re: [squid-users] parent cache information

2006-02-19 Thread Raj
Chris,

Once again thanks heaps. You were absolutely spot on. We have 4
proxies in total (2 child & 2 parent proxies).

Server A & Server C - parent proxies (2nd tier)
Server B & Server D - child proxies (1st tier)

1st tier uses NTLM authentication via the Samba WINBIND process.
2nd tier is located in the DMZ with no authentication required.

This is the main reason we are using 1st tier and 2nd tier proxies.
For this type of setup, could you please recommend whether to configure
both proxies to cache, or just the 2nd tier proxies as caches and the 1st
tier as proxy only? Basically I want to achieve better performance than
what we have now. At the moment, as explained before, both the 1st
tier and 2nd tier are caching.

Once again thanks a million.

Regards.

On 2/18/06, Chris Robertson [EMAIL PROTECTED] wrote:
  -Original Message-
  From: Raj [mailto:[EMAIL PROTECTED]
  Sent: Thursday, February 16, 2006 6:08 PM
  To: Chris Robertson
  Cc: squid-users@squid-cache.org
  Subject: Re: [squid-users] parent cache information
 
 
  Thanks a lot for that. I can only specify proxy-only option on server
  B right? Because I am not using cache_peer option on server A which is
  facing the internet.

 Well, there is a no-cache directive that works independently of cache_peer 
 lines...

  If I use proxy-only option on Server B, then Server B just acts as
  proxy and it will cache only non-duplicate content. Are there any
  benifits in using 1st tier and 2nd tier proxys. Please reply.
 

 Actually, I think that using the proxy-only option will prevent Server B from 
 caching ANY content it retrieves from Server A (which in your case would mean 
 ALL content not cached on Server B, a catch 22).  Cache hierarchies are 
 usually used when there are many disparate child proxies (branch offices 
 proxy through the main hub) or there is a bottle neck at each point (small 
 pipe between child and parent proxy, medium pipe between parent and 
 internet).  Other times, an other type of proxy is used as a parent 
 (DansGuardian, virus scanner, etc.).  I'm not sure of the reason for the 
 set-up you describe.  Perhaps access to the proxy in the DMZ is limited to 
 one specific IP address (the child proxy) by the firewall.  Perhaps the child 
 proxy was at some point going to perform authentication from a source not 
 available from the DMZ.  Perhaps the DMZ proxy was going to be acting as an 
 accelerator, and the only way to allow access to the accelerated website from 
 within the LAN was to pass all traffic through the parent.

 Chris



Re: [squid-users] parent cache information

2006-02-19 Thread Raj
Once again thank you very much. Just curious: if your child proxy is
not caching, why would you have a child-parent hierarchy? Anyway, much
appreciated for your help and your valuable time.

On 2/18/06, Chris Robertson [EMAIL PROTECTED] wrote:
  -Original Message-
  From: Raj [mailto:[EMAIL PROTECTED]
  Sent: Friday, February 17, 2006 2:46 PM
  To: Chris Robertson
  Cc: squid-users@squid-cache.org
  Subject: Re: [squid-users] parent cache information
 
 
  Chris,
 
  Once again thanks heaps. You were absolutely spot on. We have total 4
  proxies (2 child  2 parent proxies).
 
  Server A  Server C - parent proxies (2nd tier)
  Server B  Server D - Child proxies (1st tier)
 
  1st Tier uses NTLM authentication via the the Samba WINBIND process.
  2nd Tier is located in the DMZ with no authentication required.
 
  This is the main reason we are using 1st tier and 2nd tier proxies.
  For this type of setup could you please recommend whether to configure
  both proxy's to cache or just 2nd tier proxies as cache and 1st tiers
  as proxy only. Basically I want to achieve better performance than
  what we have now. At the moment as explained to you before both 1st
  tier and 2nd tier are caching.
 
  Once again thanks a million.
 
  Regards.
 

 Well, I think you will see your best improvement by recompiling and including 
 aufs.

 In any case, for a different reason I have a child-parent hierarchy on a 
 single LAN segment.  My cache_peer line on one child (proxy1) is:

 cache_peer proxy2 sibling  8080  3130  proxy-only no-digest
 cache_peer proxy3 parent  8080  3130 proxy-only round-robin no-digest
 cache_peer proxy3 parent  8081  3131 proxy-only round-robin no-digest

 Both my request hit and byte hit ratio on the child proxy are low (but 
 non-zero) numbers.  Perhaps that indicates that only cached requests fetched 
 from the parent proxy are not cached on the child, vs. all requests.  Then 
 again, due to other quirks with my setup that metric may not be indicative of 
 anything.  As for myself, I can perceive no difference between surfing with 
 or without the proxy.  Anecdotal evidence at best.

 Chris



Re: [squid-users] caching windows update

2005-08-25 Thread Raj Kumar Gurung
Yes, my access.log also shows the same... though I had configured the
refresh_pattern option for windows update.
What might be the reason?

Thanks,
uglyjoe

[EMAIL PROTECTED] wrote:

Hi

Below is my current access.log. My users are accessing windowsupdate but
I don't see any TCP_HIT. Am I giving correct regexp??

Thanks

###

1124941122.867   1467 10.18.24.227 TCP_MISS/206 6604 GET http://au.download.windowsupdate.com/msdownload/update/v5/psf/windowsxp-sp2-x86fre-usa-2180_056b2b38baf5620be85ddd58141b073bc0b06a1d.psf - DIRECT/212.73.245.94 application/octet-stream
1124941122.928   2155 10.18.3.120 TCP_MISS/206 10789 GET http://au.download.windowsupdate.com/msdownload/update/v5/psf/windowsxp-sp2-x86fre-usa-2180_056b2b38baf5620be85ddd58141b073bc0b06a1d.psf - DIRECT/206.24.172.62 application/octet-stream
1124941123.294   1974 10.18.10.174 TCP_MISS/206 5474 GET http://au.download.windowsupdate.com/msdownload/update/v5/psf/windowsxp-kb885835-x86-enu_7dc658ed670f17f0b3d24c9d152998153534d79a.psf - DIRECT/208.175.188.29 application/octet-stream
1124941123.328    243 10.18.26.166 TCP_MISS/200 441 HEAD http://download.windowsupdate.com/v6/microsoftupdate/b/selfupdate/AU/x86/XP/en/musetup.cab? - DIRECT/212.73.245.94 application/octet-stream
1124941124.052   1293 10.18.19.229 TCP_MISS/206 4516 GET http://au.download.windowsupdate.com/msdownload/update/v5/psf/windowsxp-kb896727-x86-enu_2473de55b2d8b55a2ad8ebe40a4908cc126c39f1.psf - DIRECT/212.73.245.94 application/octet-stream


-Original Message-
From: Kashif Ali Bukhari [mailto:[EMAIL PROTECTED] 
Sent: Thursday, August 25, 2005 8:18 AM
To: Lokesh Khanna
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] caching windows update

nothing wrong, maybe your clients are not using windows update ;-)


On 8/25/05, [EMAIL PROTECTED]
[EMAIL PROTECTED] wrote:
  

Hi

I have recently started using refresh pattern in squid configuration.
My objective is to cache windows update for long time.

Below is my configuration.


refresh_pattern http://.*\.windowsupdate\.microsoft\.com/  4320 80% 43200
refresh_pattern http://office\.microsoft\.com/             4320 80% 43200
refresh_pattern http://windowsupdate\.microsoft\.com/      4320 80% 43200
refresh_pattern http://w?xpsp[0-9]\.microsoft\.com/        4320 80% 43200
refresh_pattern http://w2ksp[0-9]\.microsoft\.com/         4320 80% 43200
refresh_pattern http://download\.microsoft\.com/           4320 80% 43200
refresh_pattern http://download\.macromedia\.com/          4320 80% 43200
refresh_pattern ftp://ftp\.nai\.com/                       4320 80% 43200

But when I check the access.log file, I don't see any TCP_HIT for
windowsupdate. Is there anything wrong in this configuration?

Thanks - LK



Re: [squid-users] caching windows update

2005-08-25 Thread Raj Kumar Gurung
I configured it as per your configuration... still only TCP_MISSes in access.log.
Are you getting any HITs for the windowsupdate site with that?

Thanks,
uglyjoe79

[EMAIL PROTECTED] wrote:

I think the regexp is not correct.

We need to put

refresh_pattern http://au\.download\.windowsupdate\.com/ 4320 80% 43200

As per my understanding this should give us a TCP_HIT for
http://au.download.windowsupdate.com/*

Thanks 

-Original Message-
From: Raj Kumar Gurung [mailto:[EMAIL PROTECTED] 
Sent: Thursday, August 25, 2005 9:32 AM
To: Lokesh Khanna
Cc: [EMAIL PROTECTED]; squid-users@squid-cache.org
Subject: Re: [squid-users] caching windows update

Yes mine access.log also shows same ...though i had configured
refresh_pattern option for windows update..
What might be the reason ?

Thanks,
uglyjoe

[EMAIL PROTECTED] wrote:

  

Hi

Below is my current access.log. My users are accessing windowsupdate


but
  

I don't see any TCP_HIT. Am I giving correct regexp??

Thanks

###

1124941122.867   1467 10.18.24.227 TCP_MISS/206 6604 GET http://au.download.windowsupdate.com/msdownload/update/v5/psf/windowsxp-sp2-x86fre-usa-2180_056b2b38baf5620be85ddd58141b073bc0b06a1d.psf - DIRECT/212.73.245.94 application/octet-stream
1124941122.928   2155 10.18.3.120 TCP_MISS/206 10789 GET http://au.download.windowsupdate.com/msdownload/update/v5/psf/windowsxp-sp2-x86fre-usa-2180_056b2b38baf5620be85ddd58141b073bc0b06a1d.psf - DIRECT/206.24.172.62 application/octet-stream
1124941123.294   1974 10.18.10.174 TCP_MISS/206 5474 GET http://au.download.windowsupdate.com/msdownload/update/v5/psf/windowsxp-kb885835-x86-enu_7dc658ed670f17f0b3d24c9d152998153534d79a.psf - DIRECT/208.175.188.29 application/octet-stream
1124941123.328    243 10.18.26.166 TCP_MISS/200 441 HEAD http://download.windowsupdate.com/v6/microsoftupdate/b/selfupdate/AU/x86/XP/en/musetup.cab? - DIRECT/212.73.245.94 application/octet-stream
1124941124.052   1293 10.18.19.229 TCP_MISS/206 4516 GET http://au.download.windowsupdate.com/msdownload/update/v5/psf/windowsxp-kb896727-x86-enu_2473de55b2d8b55a2ad8ebe40a4908cc126c39f1.psf - DIRECT/212.73.245.94 application/octet-stream

###

-Original Message-
From: Kashif Ali Bukhari [mailto:[EMAIL PROTECTED] 
Sent: Thursday, August 25, 2005 8:18 AM
To: Lokesh Khanna
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] caching windows update

Nothing wrong; maybe your clients are not using Windows Update ;-)


On 8/25/05, [EMAIL PROTECTED]
[EMAIL PROTECTED] wrote:
 



Hi

I have recently started using refresh_pattern in my squid configuration.
My objective is to cache Windows Update content for a long time.

Below is my configuration.


refresh_pattern http://.*\.windowsupdate\.microsoft\.com/ 4320 80% 43200
refresh_pattern http://office\.microsoft\.com/            4320 80% 43200
refresh_pattern http://windowsupdate\.microsoft\.com/     4320 80% 43200
refresh_pattern http://w?xpsp[0-9]\.microsoft\.com/       4320 80% 43200
refresh_pattern http://w2ksp[0-9]\.microsoft\.com/        4320 80% 43200
refresh_pattern http://download\.microsoft\.com/          4320 80% 43200
refresh_pattern http://download\.macromedia\.com/         4320 80% 43200
refresh_pattern ftp://ftp\.nai\.com/                      4320 80% 43200

But when I check the access.log file, I don't see any TCP_HIT for
windowsupdate. Is there anything wrong with this configuration?

Thanks - LK

[squid-users] cache peering question

2005-08-25 Thread Raj Kumar Gurung
Hi list

I have configured two squid servers running Redhat 9. My current
configuration for cache_peer is as follows:

CACHE 1:
cache_peer cache2.xxx.com sibling 3128 3130 proxy-only
icp_port 3130

CACHE 2:
cache_peer cache1.xxx.com sibling 3128 3130 proxy-only
icp_port 3130

Both have GRE tunnels with the cisco router, and we can see that the cisco
is behaving as it should, with a hash allotment of 50% share to each proxy
server.
But when I checked the cachemgr.cgi menu, I could see a large difference
in the number of clients accessing each cache. Is this normal,
or do I need to tweak some configuration?
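
One way to see whether the skew comes from the WCCP hash or from the caches
themselves is to compare the cache manager pages on both boxes. A rough
sketch, assuming squidclient is available and the stock client_list /
server_list manager pages (verify the page names on your version):

squidclient -h cache1.xxx.com mgr:client_list | grep -c 'Address:'
squidclient -h cache2.xxx.com mgr:client_list | grep -c 'Address:'
# ICP/peer statistics, to confirm the siblings are actually querying each other
squidclient -h cache1.xxx.com mgr:server_list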

Thanks in advance.






[squid-users] gre and wccp with Fedora 4

2005-08-18 Thread Raj Kumar Gurung
I compiled squid and am now doing transparent proxying. I compiled the
ip_wccp (1.7) module for that, and it is working. But I would also like to
create a GRE tunnel between the squid box and the WCCP router. With the
current version of the ip_wccp module (1.7), I found that the gre and wccp
modules are mutually exclusive and we need to recompile ip_gre with a
patched version. I searched the Internet for the patch but didn't
find the correct patch for ip_gre, so I would like to have
suggestions/information through this squid mailing list.
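
In case it helps while you look for the patch, the GRE side is usually set up
along these lines. This is only a sketch -- the addresses are placeholders,
and on a stock kernel the unpatched ip_gre module will not decapsulate the
WCCP-encapsulated redirects, which is exactly why the patch (or the ip_wccp
module) is needed:

modprobe ip_gre
iptunnel add gre0 mode gre remote <router-ip> local <squid-ip> dev eth0
ifconfig gre0 127.0.0.2 netmask 255.255.255.255 up
echo 0 > /proc/sys/net/ipv4/conf/gre0/rp_filter
# redirect intercepted port-80 traffic arriving on gre0 to squid
iptables -t nat -A PREROUTING -i gre0 -p tcp --dport 80 -j REDIRECT --to-port 3128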

Thanks in advance.

uglyjoe79



[squid-users] cache peering

2005-08-18 Thread Raj Kumar Gurung
We have two squid caches and have configured general cache peering.
The squid settings look like this:
1) icp_port configured to 3130 for both squid processes.
2) Cache peer configuration:
On cache 1
cache_peer cache2.xxx.com sibling 3128 3130 proxy-only

On cache 2
cache_peer cache1.xxx.com sibling 3128 3130 proxy-only

We can see the ICP queries to and from both squid boxes, but in the
squid MRTG graph the HTTP all-service times for cache 1
and cache 2 are different. We have also configured both cache boxes to
proxy transparently with the cisco router; cache 1 is configured
with the gre and wccp modules and cache 2 with the wccp module
only. Does it make a difference?

suggestions please !!

uglyjoe79




[squid-users] Squid 2.57 core dumped for invalid cache_effective_user and cache_effective_group on HP-UX

2004-10-16 Thread Durai raj
Hello All,

   I got a core dump when I tried an invalid cache_effective_user or
cache_effective_group. Here are the details:

For invalid cache_effective_user,

# squid -z
FATAL: getpwnam failed to find userid for effective user 'squid'
Squid Cache (Version 2.5.STABLE7): Terminated abnormally.
CPU Usage: 0.020 seconds = 0.010 user + 0.010 sys
Maximum Resident Size: 0 KB
Page faults with physical i/o: 48
Oct 16 14:11:32 vaigi syslog: getpwnam failed to find userid for effective user 'squid'
Abort(coredump)

For invalid cache_effective_group,

# squid -z
FATAL: getgrnam failed to find groupid for effective group 'squid'
Squid Cache (Version 2.5.STABLE7): Terminated abnormally.
CPU Usage: 0.020 seconds = 0.010 user + 0.010 sys
Maximum Resident Size: 0 KB
Page faults with physical i/o: 0
Oct 16 15:02:22 vaigi syslog: getgrnam failed to find groupid for effective group 'squid'
Abort(coredump)

This is because of accessing an invalid pointer in the function
configDoConfigure() at cache_cf.c line 284.

   399      if (NULL == pwd)
   400          /*
   401           * Andres Kroonmaa [EMAIL PROTECTED]:
   402           * Some getpwnam() implementations (Solaris?) require
   403           * an available FD < 256 for opening a FILE* to the
   404           * passwd file.
   405           * DW:
   406           * This should be safe at startup, but might still
   407           * fail during reconfigure.
   408           */
   409          fatalf("getpwnam failed to find userid for effective user '%s'",
   410              Config.effectiveUser);
   411      Config2.effectiveUserID = pwd->pw_uid;
   412      Config2.effectiveGroupID = pwd->pw_gid;
   413      }


   420      if (NULL == grp)
   421          fatalf("getgrnam failed to find groupid for effective group '%s'",
   422              Config.effectiveGroup);
   423      Config2.effectiveGroupID = grp->gr_gid;
   424      }

Is this a bug or a platform-specific problem?

Regards,
Durai.








[squid-users] Reg: Squid clientAbortBody() Denial of Service Vulnerability

2004-09-19 Thread Durai raj
Hello All,

  For the bug called "Squid clientAbortBody() Denial of Service
Vulnerability", I saw the following patch for this bug.

  --- squid-2.5.STABLE5/src/client_side.c.orig   Mon May 10 11:14:33 2004
  +++ squid-2.5.STABLE5/src/client_side.c        Mon May 10 11:14:50 2004
  @@ -3282,7 +3282,7 @@
       CBCB *callback;
       void *cbdata;
       int valid;
  -    if (!conn->body.callback || conn->body.request != request)
  +    if (conn == NULL || !conn->body.callback || conn->body.request != request)
           return;
       buf = conn->body.buf;
       callback = conn->body.callback;

However, I saw the solution for this bug described as follows.

SOLUTION:
A patch has been applied to version
2.5.STABLE5 and 2.5.STABLE6.
However, it may reportedly only address the issue
partially.

Is this a partial fix, or is the bug only partially reported?
What is the correct solution for this bug?

I am using Squid version 2.5.STABLE6.

Thanks,
Durai.






[squid-users] Error messages!

2003-06-20 Thread Raj
Hi All!

I am trying to find out what the following error messages indicate, as I
keep getting them intermittently. I could not find anything about them on the
site. Can a squid guru let me know what they mean?

---
commConnectDnsHandle: Bad dns_error_message

clientReadRequest: FD 236 Invalid Request

WARNING: Closing client a.b.c.d connection due to lifetime timeout

Squid Parent: child process 538 exited due to signal 6
--

Btw, where can I find info on all the types of error messages that squid
may emit in the log files? It would be very helpful to me.
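
For digging into individual messages like these, one approach is to toggle
full debugging at runtime and watch cache.log around the event. A sketch,
assuming the standard squid -k switches and the default log location:

squid -k debug                      # toggle full debug output to cache.log
tail -f /var/log/squid/cache.log
squid -k debug                      # toggle back to the normal level

A permanent, finer-grained setting can also go in squid.conf via
debug_options, if your version supports the section,level syntax.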

Thanks,

Raj



[squid-users] SECURE sites connection problem!

2003-06-15 Thread Raj
Hi, it seems that some of our clients cannot connect to some secure
servers to download files if the remote server somehow finds out that
there is a proxy on the path. Is there any way I can bypass such
secure connections dynamically so that these connections are
seamless?
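
If the problem is the remote servers objecting to proxy-identifying request
headers (rather than HTTPS itself, which a port-80-only intercept never
touches), one angle is to stop Squid from advertising itself. A sketch with
directive names from the Squid 2.5 documentation -- please confirm they exist
in your build before relying on them:

# hide the usual proxy fingerprints on forwarded requests
forwarded_for off
header_access Via deny all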
Thanks.

Raj



[squid-users] CPU usage question

2003-06-04 Thread Raj
Hello to the Squid community!

I have been using a Squid + WCCPv1 system (2.5.STABLE3 / RH8 / P4 / 1 GB
RAM / 30 GB cache_dir) without problems for a week or so now. I am gradually
adding more and more traffic to it and watching the system load. Right now I
have half the traffic (~3 Mbps). I am particularly observing the CPU
usage, as it seems to be increasing dramatically with the added load.
Actually, I am a little confused about the actual CPU usage by squid versus
the total CPU usage of all processes on the system.

My cache manager shows:

Resource usage for squid:
 UP Time: 505940.273 seconds
 CPU Time: 188719.130 seconds
 CPU Usage: 37.30%
 CPU Usage, 5 minute avg: 79.35%
 CPU Usage, 60 minute avg: 76.10%
--
My top shows:

12:38pm  up 5 days, 20:35,  1 user,  load average: 2.48, 2.62, 2.68
38 processes: 35 sleeping, 3 running, 0 zombie, 0 stopped
CPU states: 16.7% user, 65.4% system,  0.0% nice, 17.7% idle
Mem:   904112K av,  899604K used,    4508K free,       0K shrd,  287492K buff
Swap: 1020116K av,   47776K used,  972340K free                  197028K cached

  PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME COMMAND
  580 squid     19   0  297M 267M 15896 R    78.4 30.3  3146m squid
 2124 root      10   0   512  496   456 D     0.7  0.0  45:48 syslogd
  133 root       9   0     0    0     0 DW    0.3  0.0  44:37 kjournald
  586 squid      9   0   632  600   556 S     0.1  0.0   9:59 diskd
 2128 root       9   0   416  400   364 S     0.1  0.0  13:38 klogd


My procinfo shows:

user  :  14:20:47.75  10.2%  page in :  12803861  disk 1:  114289r 63514927w
nice  :   0:00:16.10   0.0%  page out: 405032647  disk 2:  833539r   672513w
system:   1d 21:08:57.71  32.1%  swap in :    19958  disk 3:  822433r   671563w
idle  :   3d  9:06:09.78  57.7%  swap out:    14854
---

I know that this may be a basic question, but I am confused about how to
find the actual CPU usage by squid versus the total CPU usage of all
processes on the system. This info is vital for me in deciding whether to add
more Squid boxes to be able to handle all the traffic on my network.
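
A way to separate the two figures: per-process numbers from the kernel for
squid itself, squid's own counters from the cache manager, and vmstat for the
whole box. A sketch, assuming squidclient is installed, the manager is
reachable from localhost, and that the 5min page exposes a cpu_usage counter:

ps -C squid -o pid,pcpu,pmem,etime,args   # squid's own CPU/memory as the kernel sees it
squidclient mgr:5min | grep cpu_usage     # squid's 5-minute CPU average from the cache manager
vmstat 5                                  # overall user/system/idle across all processes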

Thanks in advance.

Raj