[squid-users] calamaris configuration

2011-08-09 Thread benjamin fernandis
Hi,

I am trying to configure calamaris on CentOS 6. It works fine with
HTML and the other output formats, but when I try to use the graph
format I get the errors below:

 cat /var/log/squid/access.log | /usr/local/calamaris/calamaris -a
--output-file abc.txt -F html,graph   --output-path
/var/www/html/stats/

Use of uninitialized value in concatenation (.) or string at
/usr/local/calamaris/calamaris line 4083,  line 9494.
Use of uninitialized value in concatenation (.) or string at
/usr/local/calamaris/calamaris line 4115,  line 9494.
Use of uninitialized value in concatenation (.) or string at
/usr/local/calamaris/calamaris line 4115,  line 9494.
Can't call method png on an undefined value at
/usr/local/calamaris/calamaris line 4128,  line 9494.
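
A quick way to check whether perl-GD itself can produce PNG data (a
hypothetical one-liner, not part of calamaris; the failing "png" call
suggests the GD image object was never created):

  perl -MGD -e 'my $im = GD::Image->new(10,10) or die "GD broken\n"; print length($im->png), " bytes of PNG\n"'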


Please guide me in solving this error. I would also appreciate any good
document on configuring calamaris with different options, or any examples.

SQUID VERSION:3.1.4
CALAMARIS VERSION:  2.99.4.0
PERL VERSION: 5.10.1
PERL-GD:  2.44-3.el6


Thanks,
Benjamin


[squid-users] sending file

2011-08-09 Thread Mohsen Pahlevanzadeh
Dear all,

Suppose I save a web page via the browser, such as google.com, yahoo.com
and so on. I want to send that web page to squid and have squid cache
it, but I don't know the sequence of files and HTTP headers. What do I
append to the HTTP request for squid to accept it?

--mohsen




[squid-users] Using Squid as an apt cache

2011-08-09 Thread Emmanuel Seyman

Hello, all.

I've set up a squid instance on site so that I can cache .deb files and
speed up updates on our systems (we run a mostly-Debian shop).

I've configured Squid according to the instructions found here:
http://itkia.com/using-squid-to-cache-apt-updates-for-debian-and-ubuntu/

and configured apt on all our Debian systems to use squid as an HTTP proxy.
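
For reference, the apt side of that setup is a one-line proxy file; a
minimal sketch of /etc/apt/apt.conf.d/proxy (the proxy hostname here is a
made-up example):

  Acquire::http::Proxy "http://squid.example.com:3128/";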

Whenever I run apt-get update, I get the following errors:

W: GPG error: http://ftp.debian.org squeeze-updates Release: The following 
signatures were invalid: BADSIG AED4B06F473041FA Debian Archive Automatic 
Signing Key (6.0/squeeze) ftpmas...@debian.org
W: GPG error: http://security.debian.org squeeze/updates Release: The following 
signatures were invalid: BADSIG AED4B06F473041FA Debian Archive Automatic 
Signing Key (6.0/squeeze) ftpmas...@debian.org
W: GPG error: http://backports.debian.org squeeze-backports Release: The 
following signatures were invalid: BADSIG AED4B06F473041FA Debian Archive 
Automatic Signing Key (6.0/squeeze) ftpmas...@debian.org

These errors disappear when I remove /etc/apt/apt.conf.d/proxy and force
apt to work around the proxy. Can anybody tell me what's wrong with the
configuration at the website above?

Emmanuel


Re: [squid-users] Using Squid as an apt cache

2011-08-09 Thread Tarek Kilani
Hi,
As far as I can see from here, you need to change your
sources.list file to point at YOUR deb package mirror.


On 09/08/2011 10:20, Emmanuel Seyman wrote:
 Hello, all.

 I've set up a squid instance on site so that I can cache .deb files and
 speed up updates on our systems (we run a mostly-Debian shop).

 I've configured Squid according to the instructions found here:
 http://itkia.com/using-squid-to-cache-apt-updates-for-debian-and-ubuntu/

 and configured apt on all our Debian systems to use squid as an HTTP proxy.

 Whenever I run apt-get update, I get the following errors:

 W: GPG error: http://ftp.debian.org squeeze-updates Release: The following 
 signatures were invalid: BADSIG AED4B06F473041FA Debian Archive Automatic 
 Signing Key (6.0/squeeze) ftpmas...@debian.org
 W: GPG error: http://security.debian.org squeeze/updates Release: The 
 following signatures were invalid: BADSIG AED4B06F473041FA Debian Archive 
 Automatic Signing Key (6.0/squeeze) ftpmas...@debian.org
 W: GPG error: http://backports.debian.org squeeze-backports Release: The 
 following signatures were invalid: BADSIG AED4B06F473041FA Debian Archive 
 Automatic Signing Key (6.0/squeeze) ftpmas...@debian.org

 These errors disappear when I remove /etc/apt/apt.conf.d/proxy and force
 apt to work around the proxy. Can anybody tell me what's wrong with the
 configuration at the website above?

 Emmanuel



Re: [squid-users] sending file

2011-08-09 Thread Amos Jeffries

On 09/08/11 18:54, Mohsen Pahlevanzadeh wrote:

Dear all,

Suppose I save a web page via the browser, such as google.com, yahoo.com
and so on. I want to send that web page to squid and have squid cache
it, but I don't know the sequence of files and HTTP headers. What do I
append to the HTTP request for squid to accept it?

--mohsen


http://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol#Example_session


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.14
  Beta testers wanted for 3.2.0.10


[squid-users] How to set multiple WAN IP with squid ?

2011-08-09 Thread J. Bakshi
Hello,

How can I set multiple WAN IPs on a squid server for different domains?

Say mydomain.de connects through the squid server with WAN IP 203.XXX.XX.XX
while mydomain.au connects through the same server with IP 205.XXX.XX.XX

TIA


Re: [squid-users] download caching

2011-08-09 Thread Amos Jeffries

On 09/08/11 19:57, Tarek Kilani wrote:

On 09/08/2011 07:46, Amos Jeffries wrote:

On 08/08/11 21:03, Tarek Kilani wrote:

Hi,
I wanted to know if it is possible to cache downloads when several users
in my network want to download the same software from the internet, to
avoid clogging up the bandwidth.
Does squid somehow cache such downloads, or continue downloads that have
been cancelled?



Maybe, maybe, and maybe.

The answer to each of your questions above is determined by what
caching is permitted for the object, by the client request, by the
server sending it, and finally by your configuration settings.

We need details about your actual problem to provide help.

Amos

Hi,
What I'm actually looking for is to avoid downloading the Firefox
browser (for Windows), for example, over the internet over and over again.
Or if I manually download any service packs for my Windows clients, they
should be truly downloaded from the internet only once. If such a
download happens again, the client should get the cached download.


Thank you in advance.


If you have squid 2.6 or 2.7, enable the collapsed_forwarding directive.

Nobody has shown any interest in helping to port it to the 3.x series yet,
so it's stuck on my TODO list behind hundreds of bug fixes.
 For now 3.x will download many copies in parallel until one complete
download finishes, then cache and use that for any future clients.



If you have an older squid config you also likely have the default
patterns from the 90's, when caching dynamic content was dangerous. Squid
versions 2.6 and later may need some updates:

  http://wiki.squid-cache.org/ConfigExamples/DynamicContent
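
A minimal squid.conf sketch of the 2.6/2.7 approach (the object-size
value below is an assumption; tune it to the installers you expect):

  # share one upstream fetch among concurrent requests for the same URL
  collapsed_forwarding on
  # allow large installers to be cached at all (the default cap is small)
  maximum_object_size 512 MB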


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.14
  Beta testers wanted for 3.2.0.10


Re: [squid-users] How to set multiple WAN IP with squid ?

2011-08-09 Thread Amos Jeffries

On 09/08/11 20:59, J. Bakshi wrote:

Hello,

How can I set multiple WAN IPs on a squid server for different domains?

Say mydomain.de connects through the squid server with WAN IP 203.XXX.XX.XX
while mydomain.au connects through the same server with IP 205.XXX.XX.XX

TIA


http://www.squid-cache.org/Doc/config/tcp_outgoing_address/

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.14
  Beta testers wanted for 3.2.0.10


Re: [squid-users] download caching

2011-08-09 Thread Tarek Kilani

Best regards,

Tarek Kilani


On 09/08/2011 11:10, Amos Jeffries wrote:
 On 09/08/11 19:57, Tarek Kilani wrote:
 On 09/08/2011 07:46, Amos Jeffries wrote:
 On 08/08/11 21:03, Tarek Kilani wrote:
 Hi,
 I wanted to know if it is possible to cache downloads when several users
 in my network want to download the same software from the internet, to
 avoid clogging up the bandwidth.
 Does squid somehow cache such downloads, or continue downloads that have
 been cancelled?


 Maybe, maybe, and maybe.

 The answer to each of your questions above is determined by what
 caching is permitted for the object, by the client request, by the
 server sending it, and finally by your configuration settings.

 We need details about your actual problem to provide help.

 Amos
 Hi,
 What I'm actually looking for is to avoid downloading the Firefox
 browser (for Windows), for example, over the internet over and over again.
 Or if I manually download any service packs for my Windows clients, they
 should be truly downloaded from the internet only once. If such a
 download happens again, the client should get the cached
 download.


 Thank you in advance.

 If you have squid 2.6 or 2.7, enable the collapsed_forwarding directive.

 Nobody has shown any interest in helping to port it to the 3.x series yet,
 so it's stuck on my TODO list behind hundreds of bug fixes.
  For now 3.x will download many copies in parallel until one complete
 download finishes, then cache and use that for any future clients.


 If you have an older squid config you also likely have the default
 patterns from the 90's, when caching dynamic content was dangerous.
 Squid versions 2.6 and later may need some updates:
   http://wiki.squid-cache.org/ConfigExamples/DynamicContent


 Amos
Hi Amos,
So far I have only done a lot of reading and information gathering about
squid, so I'll be setting up a squid from scratch.

Thx in advance


Re: [squid-users] How to set multiple WAN IP with squid ?

2011-08-09 Thread J. Bakshi
On Tue, 09 Aug 2011 21:12:58 +1200
Amos Jeffries squ...@treenet.co.nz wrote:

 On 09/08/11 20:59, J. Bakshi wrote:
  Hello,
 
  How can I set multiple WAN IPs on a squid server for different domains?
 
  Say mydomain.de connects through the squid server with WAN IP 203.XXX.XX.XX
  while mydomain.au connects through the same server with IP 205.XXX.XX.XX
 
  TIA
 
 http://www.squid-cache.org/Doc/config/tcp_outgoing_address/
 
 Amos

Thanks, I have flipped through it but haven't found how I can add a domain
name, so that the outgoing IP differs based on the domain.

TIA


Re: [squid-users] How to set multiple WAN IP with squid ?

2011-08-09 Thread J. Bakshi
On Tue, 9 Aug 2011 14:57:58 +0530
J. Bakshi joyd...@infoservices.in wrote:

 On Tue, 09 Aug 2011 21:12:58 +1200
 Amos Jeffries squ...@treenet.co.nz wrote:
 
  On 09/08/11 20:59, J. Bakshi wrote:
   Hello,
  
   How can I set multiple WAN IPs on a squid server for different domains?
  
   Say mydomain.de connects through the squid server with WAN IP 203.XXX.XX.XX
   while mydomain.au connects through the same server with IP 205.XXX.XX.XX
  
   TIA
  
  http://www.squid-cache.org/Doc/config/tcp_outgoing_address/
  
  Amos
 
 Thanks, I have flipped through it but haven't found how I can add a
 domain name, so that the outgoing IP differs based on the domain.
 
 TIA

Answering myself: I think pointing the domain at the IP does that
domain-related thing.


Re: [squid-users] How to set multiple WAN IP with squid ?

2011-08-09 Thread Amos Jeffries

On 09/08/11 21:51, J. Bakshi wrote:

On Tue, 9 Aug 2011 14:57:58 +0530
J. Bakshi joyd...@infoservices.in wrote:


On Tue, 09 Aug 2011 21:12:58 +1200
Amos Jeffries squ...@treenet.co.nz wrote:


On 09/08/11 20:59, J. Bakshi wrote:

Hello,

How can I set multiple WAN IPs on a squid server for different domains?

Say mydomain.de connects through the squid server with WAN IP 203.XXX.XX.XX
while mydomain.au connects through the same server with IP 205.XXX.XX.XX

TIA


http://www.squid-cache.org/Doc/config/tcp_outgoing_address/

Amos


Thanks, I have flipped through it but haven't found how I can add a domain
name, so that the outgoing IP differs based on the domain.

TIA


Answering myself: I think pointing the domain at the IP does that
domain-related thing.


I'm not quite sure what you are talking about.
 It seems likely you are looking for the dstdomain ACL type, which
checks the domain the browser is loading in the address bar.
 But some of what you say seems to be talking about srcdomain, which is
the user's ISP network domain name.


There is a long list of ACL types which can be used here:
  http://www.squid-cache.org/Doc/config/acl/

and the FAQ how-to about how ACLs and access controls work in Squid is 
here:   http://wiki.squid-cache.org/SquidFaq/SquidAcl
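
A minimal sketch of how dstdomain ACLs combine with tcp_outgoing_address
for the original question (the ACL names and addresses below are
hypothetical placeholders, since the real ones were masked):

  acl de_sites dstdomain .mydomain.de
  acl au_sites dstdomain .mydomain.au
  tcp_outgoing_address 203.0.113.10 de_sites
  tcp_outgoing_address 203.0.113.20 au_sites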


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.14
  Beta testers wanted for 3.2.0.10


Re: [squid-users] Questions Cache-digest

2011-08-09 Thread igor rocha
OK Amos. Thanks!

2011/8/8 Amos Jeffries squ...@treenet.co.nz:
 On Mon, 8 Aug 2011 19:53:16 -0300, igor rocha wrote:

 ?

 No ideas. It looks like it should work to me.

 Amos


 2011/8/4 igor rocha:

 Amos,
 I think I've done the changes according to your observations. For my
 studies I no longer have anything to add, yet the results this file with
 digests has shown me make me believe something is wrong. Could you
 analyze the current scenario and point out errors or faults again?!

 # RAM
 #cache_mem 16 MB
 cache_mem 128 MB
 maximum_object_size_in_memory 64 KB
 # HD
 #cache_dir aufs /var/spool/squid 512 16 256
 cache_dir aufs /var/spool/squid 512 16 256
 #maximum_object_size 128 MB
 maximum_object_size 16 MB
 minimum_object_size 0 KB

 # Cache Threshold limits
 cache_swap_low 90
 cache_swap_high 100
 #log_icp_queries on

 #Scenario Cache-digest
 #cache_peer 192.168.15.200 parent 3128 3130
 cache_peer 192.168.15.201 parent 3128 3130
 cache_peer 192.168.15.202 parent 3128 3130
 cache_peer 192.168.15.203 parent 3128 3130

 #Tags cache-digest

 digest_generation on

 digest_bits_per_entry 5

 digest_rebuild_period 3600 s
 digest_rewrite_period 3600 s

 digest_rebuild_chunk_percentage 10
 digest_swapout_chunk_size 4096  bytes


 #icp_port 3130
 #prefer_direct off

 # Cache Uploads
 refresh_pattern ^ftp: 1440 20% 10080
 refresh_pattern ^gopher: 1440 0% 1440
 refresh_pattern . 0 20% 4320

 # Basic configuration
 http_port 3128 transparent
 visible_hostname mc2-node01

 acl cloud src 192.168.15.0/24
 #icp_access allow cloud
 acl all src 0.0.0.0/0.0.0.0
 #icp_access deny all
 http_access allow all

 Grateful!
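
 One thing worth double-checking in a scenario like this (my assumption,
 not something confirmed in the thread): cache digests only work when
 every squid involved was built with the --enable-cache-digests configure
 option; squid -v lists the options a binary was built with.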




[squid-users] invalid url in curl

2011-08-09 Thread Mohsen Pahlevanzadeh
Dear all, 

I recall the error from when I used squidclient with a URL that was not
valid. Now I tested with telnet to google, saved the HTML file to
google.html and changed its headers a bit to make it cacheable:
--
curl -H "HTTP/1.1 200 OK" -H "Date: Tue, 09 Aug 2011 12:12:54 GMT" \
     -H "Expires: Thu, 08 Sep 2011 12:12:54 GMT" \
     -H "Cache-Control: public, max-age=29000" \
     -H "Location: http://www.google.com/" \
     -H "Content-Type: text/html; charset=ISO-8859-1" \
     -H "Server: gws" -H "X-XSS-Protection: 1; mode=block" \
     -H "X-Cache: MISS from debian" -H "Transfer-Encoding: chunked" \
     -d @./files/Google.com/Google.html localhost:3128
---
But I receive the bad URL error. I don't know where to put the URL. Is
the URL initialized from the Location header? If so, why does squid give
an error?

I'm reading the RFC again, but I think this relates to what squid will
accept. How do I hand a file off to squid with curl?

Yours,
Mohsen







[squid-users] Dropping client connections

2011-08-09 Thread a bv
Hi,

On a proxy serving many clients, how can we drop a client's
connection, especially to a URL? I'm not talking about generally
applying a URL filter, but about a current, live connection.


Regards


Re: [squid-users] invalid url in curl

2011-08-09 Thread Amos Jeffries

On 10/08/11 00:31, Mohsen Pahlevanzadeh wrote:

Dear all,

I recall the error from when I used squidclient with a URL that was not
valid. Now I tested with telnet to google, saved the HTML file to
google.html and changed its headers a bit to make it cacheable:
--
curl -H "HTTP/1.1 200 OK" -H "Date: Tue, 09 Aug 2011 12:12:54 GMT" \
     -H "Expires: Thu, 08 Sep 2011 12:12:54 GMT" \
     -H "Cache-Control: public, max-age=29000" \
     -H "Location: http://www.google.com/" \
     -H "Content-Type: text/html; charset=ISO-8859-1" \
     -H "Server: gws" -H "X-XSS-Protection: 1; mode=block" \
     -H "X-Cache: MISS from debian" -H "Transfer-Encoding: chunked" \
     -d @./files/Google.com/Google.html localhost:3128
---
But I receive the bad URL error. I don't know where to put the URL. Is
the URL initialized from the Location header? If so, why does squid give
an error?

I'm reading the RFC again, but I think this relates to what squid will
accept. How do I hand a file off to squid with curl?


curl is client software, just like a browser. It _receives_ files from
squid. It does not send; only web servers and proxies send page objects
in HTTP.


 curl --proxy 127.0.0.1:3128 http://www.google.com/

  Request sent to squid:
---
GET http://www.google.com/ HTTP/1.0
Host: www.google.com
User-Agent: curl
Accept: */*
Proxy-Authorization: Basic ***==
Connection: close

---

   squid at 127.0.0.1 contacts www.google.com,
   www.google.com sends the Reply to squid.
   squid sends it to curl

  Reply that comes back to curl:
---
HTTP/1.1 302 Moved Temporarily
Location: http://www.google.co.nz/
Cache-Control: private
Content-Type: text/html; charset=UTF-8
Set-Cookie: PREF=ID=***:FF=0:TM=1312894424:LM=1312894424:S=***; 
expires=Thu, 08-Aug-2013 12:53:44 GMT; path=/; domain=.google.com

Date: Tue, 09 Aug 2011 12:53:44 GMT
Server: gws
Content-Length: 221
X-XSS-Protection: 1; mode=block
X-Cache: MISS from treenet.co.nz
X-Cache-Lookup: MISS from treenet.co.nz:8080
Via: 1.1 treenet.co.nz (squid/3.3.0.0)
Connection: close

---

   A normal web browser would follow that 302 redirect instruction
   and try again with a second Request to squid ...

  Request sent to squid:
---
GET http://www.google.co.nz/ HTTP/1.0
Host: www.google.co.nz
User-Agent: curl
Accept: */*
Proxy-Authorization: Basic ***==
Connection: close

---

   squid at 127.0.0.1 contacts www.google.co.nz,
   www.google.co.nz sends the Reply to squid.
   squid sends it to curl:

  Reply that comes back to curl:
---
HTTP/1.1 200 OK
Date: Tue, 09 Aug 2011 13:01:27 GMT
Expires: -1
Cache-Control: private, max-age=0
Content-Type: text/html; charset=ISO-8859-1
Set-Cookie: PREF=ID=***:FF=0:TM=1312894887:LM=1312894887:S=***; 
expires=Thu, 08-Aug-2013 13:01:27 GMT; path=/; domain=.google.co.nz
Set-Cookie: NID=49=***; expires=Wed, 08-Feb-2012 13:01:27 GMT; path=/; 
domain=.google.co.nz; HttpOnly

Server: gws
X-XSS-Protection: 1; mode=block
X-Cache: MISS from treenet.co.nz
X-Cache-Lookup: MISS from treenet.co.nz:8080
Via: 1.1 treenet.co.nz (squid/3.3.0.0)
Connection: close

<!doctype html><html><head><meta http-equiv="content-type" 
content="text/html; charset=ISO-8859-1"><title>Google</title><script>window.google={

---

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.14
  Beta testers wanted for 3.2.0.10


Re: [squid-users] Dropping client connections

2011-08-09 Thread Amos Jeffries

On 10/08/11 01:04, a bv wrote:

Hi,

On a proxy serving many clients, how can we drop a client's
connection, especially to a URL? I'm not talking about generally
applying a URL filter, but about a current, live connection.


Regards


Can't be done inside Squid (yet). You need to do it at the TCP level.

Sending a RST packet to the Squid TCP socket which the client is using 
for that connection is probably the most effective way. Depending on the 
squid version that port number might be visible in the active_clients 
cache manager report page, as the "peer:" details next to the URL.

  squidclient -h $squid mgr:active_requests
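
For the RST itself, one option (an example, not the only tool) is
tcpkill from the dsniff package, run on the proxy host against the
client's connection; the interface and address here are hypothetical:

  tcpkill -i eth0 host 192.0.2.55 and port 3128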


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.14
  Beta testers wanted for 3.2.0.10


Re: [squid-users] invalid url in curl

2011-08-09 Thread Mohsen Pahlevanzadeh

On Wed, 2011-08-10 at 01:20 +1200, Amos Jeffries wrote:
 On 10/08/11 00:31, Mohsen Pahlevanzadeh wrote:
  Dear all,
 
  I recall the error from when I used squidclient with a URL that was not
  valid. Now I tested with telnet to google, saved the HTML file to
  google.html and changed its headers a bit to make it cacheable:
  --
  curl -H "HTTP/1.1 200 OK" -H "Date: Tue, 09 Aug 2011 12:12:54 GMT" \
       -H "Expires: Thu, 08 Sep 2011 12:12:54 GMT" \
       -H "Cache-Control: public, max-age=29000" \
       -H "Location: http://www.google.com/" \
       -H "Content-Type: text/html; charset=ISO-8859-1" \
       -H "Server: gws" -H "X-XSS-Protection: 1; mode=block" \
       -H "X-Cache: MISS from debian" -H "Transfer-Encoding: chunked" \
       -d @./files/Google.com/Google.html localhost:3128
  ---
  But I receive the bad URL error. I don't know where to put the URL. Is
  the URL initialized from the Location header? If so, why does squid give
  an error?

  I'm reading the RFC again, but I think this relates to what squid will
  accept. How do I hand a file off to squid with curl?
 
 curl is client software, just like a browser. It _receives_ files from
 squid. It does not send; only web servers and proxies send page objects
 in HTTP.
 
   curl --proxy 127.0.0.1:3128 http://www.google.com/
 
Request sent to squid:
 ---
 GET http://www.google.com/ HTTP/1.0
 Host: www.google.com
 User-Agent: curl
 Accept: */*
 Proxy-Authorization: Basic ***==
 Connection: close
 
 ---
 
 squid at 127.0.0.1 contacts www.google.com,
 www.google.com sends the Reply to squid.
 squid sends it to curl
 
Reply that comes back to curl:
 ---
 HTTP/1.1 302 Moved Temporarily
 Location: http://www.google.co.nz/
 Cache-Control: private
 Content-Type: text/html; charset=UTF-8
 Set-Cookie: PREF=ID=***:FF=0:TM=1312894424:LM=1312894424:S=***; 
 expires=Thu, 08-Aug-2013 12:53:44 GMT; path=/; domain=.google.com
 Date: Tue, 09 Aug 2011 12:53:44 GMT
 Server: gws
 Content-Length: 221
 X-XSS-Protection: 1; mode=block
 X-Cache: MISS from treenet.co.nz
 X-Cache-Lookup: MISS from treenet.co.nz:8080
 Via: 1.1 treenet.co.nz (squid/3.3.0.0)
 Connection: close
 
 ---
 
 A normal web browser would follow that 302 redirect instruction
 and try again with a second Request to squid ...
 
Request sent to squid:
 ---
 GET http://www.google.co.nz/ HTTP/1.0
 Host: www.google.co.nz
 User-Agent: curl
 Accept: */*
 Proxy-Authorization: Basic ***==
 Connection: close
 
 ---
 
 squid at 127.0.0.1 contacts www.google.co.nz,
 www.google.co.nz sends the Reply to squid.
 squid sends it to curl:
 
Reply that comes back to curl:
 ---
 HTTP/1.1 200 OK
 Date: Tue, 09 Aug 2011 13:01:27 GMT
 Expires: -1
 Cache-Control: private, max-age=0
 Content-Type: text/html; charset=ISO-8859-1
 Set-Cookie: PREF=ID=***:FF=0:TM=1312894887:LM=1312894887:S=***; 
 expires=Thu, 08-Aug-2013 13:01:27 GMT; path=/; domain=.google.co.nz
 Set-Cookie: NID=49=***; expires=Wed, 08-Feb-2012 13:01:27 GMT; path=/; 
 domain=.google.co.nz; HttpOnly
 Server: gws
 X-XSS-Protection: 1; mode=block
 X-Cache: MISS from treenet.co.nz
 X-Cache-Lookup: MISS from treenet.co.nz:8080
 Via: 1.1 treenet.co.nz (squid/3.3.0.0)
 Connection: close
 
 <!doctype html><html><head><meta http-equiv="content-type" 
 content="text/html; charset=ISO-8859-1"><title>Google</title><script>window.google={
 ---
 
 Amos
If I send the object files in the directory, can I get the same result?
How can I simulate it?
--mohsen




[squid-users] SQUID Multiple Instances not working on windows!

2011-08-09 Thread Ghassan Gharabli
Hello,

I have looked at and read this URL:
http://wiki.squid-cache.org/MultipleInstances but it is still not working.
Only the first instance is working well; the second isn't working
yet! The problem is I can't see any logs.

Usually I install the first instance as:

squid.exe -i -n squid


but for the second instance I tried this one:

squid.exe -i -n SquidSurf -f C:/squid2/etc/squid2.conf

I also tried changing the log directory...

Any help ?


[squid-users] quick question about squid proxy

2011-08-09 Thread Nathan Rice
Hello all,

I apologize if I missed this when I was perusing the squid
documentation.  I am looking for a caching proxy with the ability to
transparently authenticate at a remote site on behalf of users.  For
example, a user requests page X, which requires a password; the squid
server fetches this page on behalf of the user, providing canned
credentials when required; squid then serves this page to the user
without requiring any password.

Is this possible with squid?  If so, could someone kindly point me to
the relevant section of the documentation?

Thank you,

Nathan Rice


Re: [squid-users] quick question about squid proxy

2011-08-09 Thread Amos Jeffries

On Tue, 9 Aug 2011 17:45:10 -0400, Nathan Rice wrote:

Hello all,

I apologize if I missed this when I was perusing the squid
documentation.  I am looking for a caching proxy with the ability to
transparently authenticate at a remote site on behalf of users.  For
example, a user requests page X, which requires a password; the squid
server fetches this page on behalf of the user, providing canned
credentials when required; squid then serves this page to the user
without requiring any password.

Is this possible with squid?  If so, could someone kindly point me to
the relevant section of the documentation?

Thank you,

Nathan Rice


Site credentials are normally restricted very strictly to 
browser-website communication and the proxy does not take part.


That said, for specific site(s) you can configure an explicit 
originserver cache_peer link to the web server, using the login= option 
to send credentials for all requests down that link.

 http://www.squid-cache.org/Doc/config/cache_peer

These are restricted to insecure Basic auth credentials in all squid 
versions. The latest releases extend this to include Negotiate/Kerberos 
auth as mentioned in that doc.
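
A minimal squid.conf sketch of that setup (the hostname, credentials and 
names below are hypothetical placeholders):

  # send canned Basic credentials for one origin server
  cache_peer www.example.com parent 80 0 no-query originserver login=user:password name=canned
  acl canned_site dstdomain www.example.com
  cache_peer_access canned allow canned_site
  cache_peer_access canned deny all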


NOTE that in any event the user is never actually authenticated. What 
goes down the link may in fact be multiple interleaved users on the 
receiving side of Squid. The only thing that type of auth validates is 
that the request came through your Squid. Be careful.


Amos



Re: [squid-users] SQUID Multiple Instances not working on windows!

2011-08-09 Thread Amos Jeffries

On Wed, 10 Aug 2011 00:10:51 +0300, Ghassan Gharabli wrote:

Hello,

I have looked at and read this URL:
http://wiki.squid-cache.org/MultipleInstances but it is still not working.
Only the first instance is working well; the second isn't working
yet! The problem is I can't see any logs.

Usually I install the first instance as:

squid.exe -i -n squid


but for the second instance I tried this one:

squid.exe -i -n SquidSurf -f C:/squid2/etc/squid2.conf

I also tried changing the log directory...

Any help ?



Start the second instance with the -X option on the command line and it 
will dump a whole lot of early startup debug information. The answer may 
be buried in there somewhere.
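
Beyond that, a general checklist (the paths below are hypothetical): the 
second instance must not share any resources with the first, so 
squid2.conf needs at least its own port, PID file, logs and cache 
directory, e.g.:

  http_port 3129
  pid_filename c:/squid2/var/run/squid2.pid
  access_log c:/squid2/var/logs/access.log squid
  cache_log c:/squid2/var/logs/cache.log
  cache_dir ufs c:/squid2/var/cache 512 16 256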


Amos


[squid-users] TCP_MISS/200

2011-08-09 Thread alexus
How can I improve these, in particular TCP_MISS/200? It seems like I
have mostly these and almost no TCP_HIT/200:
-bash-3.2# cat access.log | awk '{print $4}' | sort | uniq -c | sort -rn
115522 TCP_MISS/200
87750 TCP_DENIED/407
8933 TCP_MISS/304
7646 TCP_MISS/302
6456 TCP_MEM_HIT/200
4183 TCP_MISS/204
2412 TCP_REFRESH_UNMODIFIED/304
2137 TCP_MISS/404
1524 NONE/400
1109 TCP_IMS_HIT/304
 959 TCP_MISS/000
 636 TCP_CLIENT_REFRESH_MISS/200
 481 TCP_REFRESH_UNMODIFIED/200
 344 TCP_MISS/301
 310 TCP_CLIENT_REFRESH_MISS/304
 179 TCP_DENIED/403
 167 TCP_REFRESH_MODIFIED/200
 111 TCP_MISS/206
  82 TCP_MISS/503
  76 TCP_MISS/500
  63 TCP_MISS/403
  62 TCP_HIT/200
  46 TCP_MISS/400
  40 TCP_MEM_HIT/302
  27 TCP_MEM_HIT/301
  25 TCP_MISS/303
  20 TCP_IMS_HIT/200
  16 TCP_MISS/502
  13 TCP_MISS/410
  13 NONE/417
  12 TCP_HIT/000
  11 TCP_MISS/401
  10 TCP_REFRESH_MODIFIED/302
   7 TCP_MISS/504
   7 TCP_MISS/307
   2 TCP_MEM_HIT/206
   1 lama
   1 TCP_MISS/202
   1 TCP_CLIENT_REFRESH_MISS/000
-bash-3.2#
--
http://alexus.org/





[squid-users] RE: Too Many Open File Descriptors

2011-08-09 Thread Justin Lawler
Hi,

We have two instances of squid (3.0.15) running on a Solaris box. Every so 
often (maybe once a month) we get a load of the errors below:

2011/08/09 19:22:10| comm_open: socket failure: (24) Too many open files

Sometimes it goes away on its own; sometimes squid crashes and restarts.

When it happens, it generally happens on both instances of squid on the same box.

We have the number of open file descriptors set to 2048 - output from squidclient mgr:info:

root@squid01# squidclient mgr:info | grep file
    Maximum number of file descriptors:   2048
    Largest file desc currently in use:   2041
    Number of file desc currently in use: 1903
    Available number of file descriptors:  138
    Reserved number of file descriptors:   100
    Store Disk files open:  68

We're using squid as an ICAP client. The two squid instances point at two 
different ICAP servers, so it's unlikely to be a problem with the ICAP server.

Is this a known issue? As it goes on for a long time (over 40 minutes 
continuously), it doesn't seem like the traffic is just spiking for a long 
period. Also, we're not seeing it on other boxes, which are load balanced.

Any pointers much appreciated.

Regards,
Justin



Re: [squid-users] RE: Too Many Open File Descriptors

2011-08-09 Thread Wilson Hernandez
That used to happen to us, and we had to write a script to start squid like 
this:


#!/bin/sh -e
#

echo Starting squid...

ulimit -HSn 65536
sleep 1
/usr/local/squid/sbin/squid

echo Done..
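
Note that the ulimit only raises the ceiling the OS grants the launched 
process; squid also has a compile-time descriptor maximum detected at 
build time (overridable with --with-maxfd on squid2 or 
--with-filedescriptors on squid3, if I remember the option names 
correctly), so check squid -v if raising ulimit alone changes nothing.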




On 8/9/2011 10:47 PM, Justin Lawler wrote:

Hi,

We have two instances of squid (3.0.15) running on a Solaris box. Every so 
often (maybe once a month) we get a load of the errors below:

2011/08/09 19:22:10| comm_open: socket failure: (24) Too many open files

Sometimes it goes away on its own; sometimes squid crashes and restarts.

When it happens, it generally happens on both instances of squid on the same box.

We have the number of open file descriptors set to 2048 - output from squidclient mgr:info:

root@squid01# squidclient mgr:info | grep file
 Maximum number of file descriptors:   2048
 Largest file desc currently in use:   2041
 Number of file desc currently in use: 1903
 Available number of file descriptors:  138
 Reserved number of file descriptors:   100
 Store Disk files open:  68

We're using squid as an ICAP client. The two squid instances point at two 
different ICAP servers, so it's unlikely to be a problem with the ICAP server.

Is this a known issue? As it goes on for a long time (over 40 minutes 
continuously), it doesn't seem like the traffic is just spiking for a long 
period. Also, we're not seeing it on other boxes, which are load balanced.

Any pointers much appreciated.

Regards,
Justin





Re: [squid-users] RE: Too Many Open File Descriptors

2011-08-09 Thread Amos Jeffries

On Tue, 09 Aug 2011 23:07:05 -0400, Wilson Hernandez wrote:
That used to happen to us, and we had to write a script to start squid 
like this:


#!/bin/sh -e
#

echo Starting squid...

ulimit -HSn 65536
sleep 1
/usr/local/squid/sbin/squid

echo Done..




Pretty much the only solution.

ICAP raises the potential worst-case socket consumption per client 
request from 3 FD to 7. REQMOD also doubles the minimum resource 
consumption from 1 FD to 2.
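
To put rough numbers on that worst case: at 7 FDs per request, a 
2048-descriptor limit covers only about 2048 / 7, roughly 290 concurrent 
client requests, which a busy proxy can exceed easily.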


Amos



On 8/9/2011 10:47 PM, Justin Lawler wrote:

Hi,

We have two instances of squid (3.0.15) running on a Solaris box. 
Every so often (maybe once a month) we get a load of the errors 
below:


2011/08/09 19:22:10| comm_open: socket failure: (24) Too many open 
files


Sometimes it goes away on its own; sometimes squid crashes and 
restarts.


When it happens, it generally happens on both instances of squid on the 
same box.


We have the number of open file descriptors set to 2048 - output from 
squidclient mgr:info:


root@squid01# squidclient mgr:info | grep file
 Maximum number of file descriptors:   2048
 Largest file desc currently in use:   2041
 Number of file desc currently in use: 1903
 Available number of file descriptors:  138
 Reserved number of file descriptors:   100
 Store Disk files open:  68

We're using squid as an ICAP client. The two squid instances point at 
two different ICAP servers, so it's unlikely to be a problem with the 
ICAP server.


Is this a known issue? As it goes on for a long time (over 40 
minutes continuously), it doesn't seem like the traffic is just 
spiking for a long period. Also, we're not seeing it on other boxes, 
which are load balanced.


Any pointers much appreciated.

Regards,
Justin





Re: [squid-users] TCP_MISS/200

2011-08-09 Thread Amos Jeffries

On Tue, 9 Aug 2011 21:48:15 -0400, alexus wrote:
How can I improve these, in particular TCP_MISS/200? It seems like I
have mostly these and almost no TCP_HIT/200:

-bash-3.2# cat access.log | awk '{print $4}' | sort | uniq -c | sort -rn

115522 TCP_MISS/200
87750 TCP_DENIED/407
8933 TCP_MISS/304
7646 TCP_MISS/302
6456 TCP_MEM_HIT/200


Find out which URLs:

grep TCP_MISS/200 access.log | awk '{print $7}' | sort | uniq -c | sort -rn | head -20


and paste some worst offenders into redbot.org to find out more about 
them.


Amos


Re: [squid-users] invalid url in curl

2011-08-09 Thread Amos Jeffries

On Tue, 09 Aug 2011 19:57:47 +0430, Mohsen Pahlevanzadeh wrote:

On Wed, 2011-08-10 at 01:20 +1200, Amos Jeffries wrote:

On 10/08/11 00:31, Mohsen Pahlevanzadeh wrote:
 Dear all,

 I recall the error from when I used squidclient with a URL that was not
 valid. Now I tested with telnet to google, saved the HTML file to
 google.html and changed its headers a bit to make it cacheable:
 --
 curl -H "HTTP/1.1 200 OK" -H "Date: Tue, 09 Aug 2011 12:12:54 GMT" \
      -H "Expires: Thu, 08 Sep 2011 12:12:54 GMT" \
      -H "Cache-Control: public, max-age=29000" \
      -H "Location: http://www.google.com/" \
      -H "Content-Type: text/html; charset=ISO-8859-1" \
      -H "Server: gws" -H "X-XSS-Protection: 1; mode=block" \
      -H "X-Cache: MISS from debian" -H "Transfer-Encoding: chunked" \
      -d @./files/Google.com/Google.html localhost:3128
 ---
 But I receive the bad URL error. I don't know where to put the URL. Is
 the URL initialized from the Location header? If so, why does squid give
 an error?

 I'm reading the RFC again, but I think this relates to what squid will
 accept. How do I hand a file off to squid with curl?

curl is client software, just like a browser. It _receives_ files from
squid. It does not send; only web servers and proxies send page objects
in HTTP.

  curl --proxy 127.0.0.1:3128 http://www.google.com/

   Request sent to squid:
---
GET http://www.google.com/ HTTP/1.0
Host: www.google.com
User-Agent: curl
Accept: */*
Proxy-Authorization: Basic ***==
Connection: close

---

squid at 127.0.0.1 contacts www.google.com,
www.google.com sends the Reply to squid.
squid sends it to curl

   Reply that comes back to curl:
---
HTTP/1.1 302 Moved Temporarily
Location: http://www.google.co.nz/
Cache-Control: private
Content-Type: text/html; charset=UTF-8
Set-Cookie: PREF=ID=***:FF=0:TM=1312894424:LM=1312894424:S=***;
expires=Thu, 08-Aug-2013 12:53:44 GMT; path=/; domain=.google.com
Date: Tue, 09 Aug 2011 12:53:44 GMT
Server: gws
Content-Length: 221
X-XSS-Protection: 1; mode=block
X-Cache: MISS from treenet.co.nz
X-Cache-Lookup: MISS from treenet.co.nz:8080
Via: 1.1 treenet.co.nz (squid/3.3.0.0)
Connection: close

---

A normal web browser would follow that 302 redirect instruction
and try again with a second Request to squid ...

   Request sent to squid:
---
GET http://www.google.co.nz/ HTTP/1.0
Host: www.google.co.nz
User-Agent: curl
Accept: */*
Proxy-Authorization: Basic ***==
Connection: close

---

squid at 127.0.0.1 contacts www.google.co.nz,
www.google.co.nz sends the Reply to squid.
squid sends it to curl:

   Reply that comes back to curl:
---
HTTP/1.1 200 OK
Date: Tue, 09 Aug 2011 13:01:27 GMT
Expires: -1
Cache-Control: private, max-age=0
Content-Type: text/html; charset=ISO-8859-1
Set-Cookie: PREF=ID=***:FF=0:TM=1312894887:LM=1312894887:S=***;
expires=Thu, 08-Aug-2013 13:01:27 GMT; path=/; domain=.google.co.nz
Set-Cookie: NID=49=***; expires=Wed, 08-Feb-2012 13:01:27 GMT; path=/;
domain=.google.co.nz; HttpOnly
Server: gws
X-XSS-Protection: 1; mode=block
X-Cache: MISS from treenet.co.nz
X-Cache-Lookup: MISS from treenet.co.nz:8080
Via: 1.1 treenet.co.nz (squid/3.3.0.0)
Connection: close

<!doctype html><html><head><meta http-equiv="content-type"
content="text/html; charset=ISO-8859-1"><title>Google</title><script>window.google={
---

Amos

If I send the object files in the directory, can I get the same result?
How can I simulate it?
--mohsen


The above was a simulation I created by actually doing the command:
   curl --proxy 127.0.0.1:3128 http://www.google.com/


The object files are in a directory '/' on the machine www.google.co.nz, 
accessed through an http:// service.


If you have a file transfer service (FTP) on the local machine, a file 
called "name" might be available to Squid as the URL 
ftp://localhost/name


Amos



RE: [squid-users] RE: Too Many Open File Descriptors

2011-08-09 Thread Justin Lawler
Hi,

Thanks for this. Is this a known issue? Are there any bugs/articles on it? 
We would need something more concrete to go to the customer with on this 
issue - more background would be very helpful.

Are 2048 FDs enough? Are there any connection leaks? Does squid ignore this 
2048 value?

The OS has the FD limits below - so we would have thought the current 
configuration should be OK?
set rlim_fd_max=65536
set rlim_fd_cur=8192


Thanks,
Justin


-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Wednesday, August 10, 2011 11:47 AM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] RE: Too Many Open File Descriptors

 On Tue, 09 Aug 2011 23:07:05 -0400, Wilson Hernandez wrote:
 That used to happen to us, and we had to write a script to start squid 
 like this:

 #!/bin/sh -e
 #

 echo Starting squid...

 ulimit -HSn 65536
 sleep 1
 /usr/local/squid/sbin/squid

 echo Done..



 Pretty much the only solution.

 ICAP raises the potential worst-case socket consumption per client 
 request from 3 FD to 7. REQMOD also doubles the minimum resource 
 consumption from 1 FD to 2.

 Amos


 On 8/9/2011 10:47 PM, Justin Lawler wrote:
 Hi,

 We have two instances of squid (3.0.15) running on a Solaris box. 
 Every so often (maybe once a month) we get a load of the errors 
 below:

 2011/08/09 19:22:10| comm_open: socket failure: (24) Too many open 
 files

 Sometimes it goes away on its own; sometimes squid crashes and 
 restarts.

 When it happens, it generally happens on both instances of squid on the 
 same box.

 We have the number of open file descriptors set to 2048 - output from 
 squidclient mgr:info:

 root@squid01# squidclient mgr:info | grep file
  Maximum number of file descriptors:   2048
  Largest file desc currently in use:   2041
  Number of file desc currently in use: 1903
  Available number of file descriptors:  138
  Reserved number of file descriptors:   100
  Store Disk files open:  68

 We're using squid as an ICAP client. The two squid instances point at 
 two different ICAP servers, so it's unlikely to be a problem with the 
 ICAP server.

 Is this a known issue? As it goes on for a long time (over 40 
 minutes continuously), it doesn't seem like the traffic is just 
 spiking for a long period. Also, we're not seeing it on other boxes, 
 which are load balanced.

 Any pointers much appreciated.

 Regards,
 Justin


