Re: [squid-users] Clarification on delay_pool bandwidth restricting with external acls

2012-06-07 Thread Carlos Manuel Trepeu Pupo
I have a similar configuration: the user's download stays at no more than
128 bytes/s, but Squid itself consumes all the bandwidth.

squid 3.0 STABLE1

On Thu, Jun 7, 2012 at 6:43 AM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 1/06/2012 2:20 p.m., Cameron Charles wrote:

 Hi all, I'm just after some clarification on using delay pools with
 external ACLs as triggers. I have a good understanding of the
 components of delay pools and how they operate, but most documentation
 only mentions users (i.e. IP addresses) as the method of triggering the
 restriction. I would like to use an external ACL to decide whether a
 request should be limited or not, regardless of any other factors, so
 that any and all traffic coming through Squid that matches this ACL is
 restricted to, say, 128 bps. Is this possible, and is the following the
 correct way to achieve it?

 acl bandwidth_UNIQUENAME_acl external bandwidth_check_ext_acl_type
 UNIQUENAME
 http_reply_access allow bandwidth_UNIQUENAME_acl !all
 delay_class 1 1
 delay_parameters 1 128.0/128.0
 delay_access 1 allow bandwidth_128.0_acl
 delay_initial_bucket_level 1 100

 Additionally, I'm inexperienced when it comes to actually testing
 bandwidth limits. Is it possible to simply download a file that is
 known to match the external ACL and observe that it doesn't
 download at more than the bandwidth restriction, or is testing this
 more complicated?


 Yes that is pretty much it when testing. Max download should be no more than
 128 bytes per second according to that config.

 If that shows a problem the other thing is to set debug_options ALL,5  (or
 the specific delay pools, comm, external ACL and access control levels
 specifically) and watch for external ACL results and delay pool operations
 to see if the issue shows up.
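
 A quick way to watch the effective rate from a client, assuming the proxy
 listens on 127.0.0.1:3128 and that http://example.com/file.bin is a URL the
 external ACL matches (both values are placeholders):

 export http_proxy=http://127.0.0.1:3128
 wget -O /dev/null http://example.com/file.bin
 # wget prints the average transfer rate; with the 128-byte/s pool above it
 # should settle at roughly 128 B/s once the small initial bucket empties.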

 Amos


[squid-users] delay_pools fail

2012-06-01 Thread Carlos Manuel Trepeu Pupo
I'm using squid 3.0 STABLE1 on ubuntu 8.04, I have this conf:


delay_pools 1

delay_class 1 1
delay_parameters 1 15000/15000
delay_access 1 allow all

To limit all traffic to 15 KB/s, but the traffic reaches 45, 60, 25 KB/s ...
What is happening here, and how can I limit it?
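
For reference, delay_parameters values are bytes per second and a class 1 pool
is a single aggregate bucket, so 15000/15000 caps all matched traffic at
roughly 15 KB/s combined. If delay pool support is compiled in, the bucket
levels can be watched while a download runs with:

squidclient -h 127.0.0.1 mgr:delay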


Re: [squid-users] limiting connections

2012-05-29 Thread Carlos Manuel Trepeu Pupo
I'm bringing this post back to life because I made a few changes. Here you
have it, in case anyone needs it:

#!/bin/bash
while read line; do

    # Keep only the file name, so mirrors of the same file count as one download
    shortLine=`echo "$line" | awk -F / '{print $NF}'`
    #echo "$shortLine" >> /home/carlos/guarda    # - This is for debugging
    result=`squidclient -h 127.0.0.1 mgr:active_requests | grep -c "$shortLine"`

    if [ "$result" == 1 ]
    then
        echo 'OK'
        #echo 'OK' >> /home/carlos/guarda    # - This is for debugging
    else
        echo 'ERR'
        #echo 'ERR' >> /home/carlos/guarda   # - This is for debugging
    fi
done


The main change is to compare the name of the file being downloaded rather
than the full URL, to stop mirrors being used to increase the number of
simultaneous connections.
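
For completeness, the helper only behaves as intended if Squid asks it about
every request instead of caching earlier answers, so the wiring in squid.conf
needs the cache-disabling options mentioned elsewhere in this thread; a sketch
using the names already used here (the helper path is wherever the script
above is saved):

external_acl_type one_conn ttl=0 negative_ttl=0 grace=0 %URI /home/carlos/contain
acl limit external one_conn
acl extensions url_regex /etc/squid3/extensions
http_access deny extensions !limit
deny_info ERR_LIMIT limit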


On Tue, May 29, 2012 at 9:46 AM, Carlos Manuel Trepeu Pupo
charlie@gmail.com wrote:
 Here I make this post alive because a make a few changes. Here you
 have, if anyone need it:

 #!/bin/bash
 while read line; do

        shortLine=`echo $line | awk -F / '{print $NF}'`
        #echo $shortLine  /home/carlos/guarda   - This is for debugging
        result=`squidclient -h 127.0.0.1 mgr:active_requests | grep
 -c $shortLine`

  if [ $result == 1 ]
        then
        echo 'OK'
        #echo 'OK'/home/carlos/guarda   - This is for debugging
  else
        echo 'ERR'
        #echo 'ERR'/home/carlos/guarda   - This is for debugging
  fi
 done


 The main change is to compare the file to download and not the URL, to
 avoid the use of mirrors to increase the simultaneous connections.


 On Thu, Apr 5, 2012 at 12:52 PM, H h...@hm.net.br wrote:
 Carlos Manuel Trepeu Pupo wrote:
 On Thu, Apr 5, 2012 at 10:32 AM, H h...@hm.net.br wrote:
 Carlos Manuel Trepeu Pupo wrote:
 what is your purpose? solve bandwidth problems? Connection rate?
 Congestion? I believe that limiting to *one* download is not your real
 intention, because the browser could still open hundreds of regular
 pages and your download limit is nuked and was for nothing ...

 what is your operating system?

 I pretend solve bandwidth problems. For the persons who uses download
 manager or accelerators, just limit them to 1 connection. Otherwise I
 tried to solve with delay_pool, the packet that I delivery to the
 client was just like I configured, but with accelerators the upload
 saturate the channel.



 since you did not say what OS youŕe running I can give you only some
 direction, any or most Unix firewall can solve this easy, if you use
 Linux you may like pf with FBSD you should go with ipfw, the latter
 probably is easier to understand but for both you will find zillions of
 examples on the net, look for short setups

 Sorry, I forgot !! Squid is in Debian 6.0 32 bits. My firewall is
 Kerio but in Windows, and i'm not so glad to use it !!!


 first you divide your bandwidth between your users

 First I search about the dynamic bandwidth with Squid, but squid do
 not do this, and them after many search I just find ISA Server with a
 third-party plugin, but I prefer linux.


 if you use TPROXy you can devide/limit the bandwidth on the outside
 interface in order to limit only access to the link but if squid has the
 object in cache it might go out as fast as it can

 you still can manage the bandwidth pool with delay parameters if you wish

 I tried with delay_pool, but the delay_pool just manage the download
 average, and not the upload, I need the both. The last time I tried
 with delay_pool the download accelerator download at the speed that
 I specify, but the proxy consume all channel with the download,
 something that I never understand.



 I guess you meant downlaod accelerator, not manager, you can then limit
 the connection rate within the bandwidth for each user and each
 protocol, for DL-accelerator you should pay attention to udp packages as
 well, you did not say how much user and bandwdith you have but limit the
 tcp connection to 25 and udp to 40 to begin with, then test it until
 coming to something what suites your wish

 I have 128 kbps, and I have no idea about the UDP packages !!! That's
 new for me !! Any documentation that I can read ???



 none of what we are talking about has anything to do with squid

 bandwidth control, connection limiting etc. you should handle with the firewall

 let squid do what it does well: cache and proxy

 you could consider a different setup: a Unix box with a firewall on your
 internet connection acting as your gateway, and squid as TPROXY or transparent
 proxy if you need NAT, all on the same box

 if you use Linux you should look at the pf firewall; if you use FreeBSD you
 should use the ipfw firewall and read the specific documentation. If this
 is all new for you, you might find it easier to use FreeBSD since all the
 setups are straightforward; Linux, and also pf, is a little bit more
 complicated.
 As an example, setting up NAT with ipfw can be done with three lines of code; I
 believe pf needs at least 6 to work

 but before you dig deeper you might think about a new design of your
 concept of Internet access



 you still could check which DLaccel your

Re: [squid-users] Program for realtime monitoring

2012-04-17 Thread Carlos Manuel Trepeu Pupo
On Tue, Apr 17, 2012 at 4:24 AM, Maqsood Ahmad maqsood...@hotmail.com wrote:

 Well, the idea behind this requirement was initially a precaution in case
 something goes wrong with Squid.
 But I have tested it and it works great: a separate web server machine running
 sqstat, pointed at the Squid server.


You mean on a separate server? I understood many servers pointing to one Squid, sorry.




 Date: Fri, 13 Apr 2012 08:17:37 -0400
 From: charlie@gmail.com
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] Program for realtime monitoring

 I don't understand!! Why would you need to configure it on different machines?

 On Fri, Apr 13, 2012 at 8:11 AM, Carlos Manuel Trepeu Pupo
 charlie@gmail.com wrote:
  I don't understand!! Why would you need to configure it on different
  machines?
 
  On Fri, Apr 13, 2012 at 12:24 AM, Maqsood Ahmad maqsood...@hotmail.com 
  wrote:
 
  Hi,
 
 
  Is it possible that we can configure sqstat on separate machine
 
 
 
  Maqsood Ahmad
 
 
 
 
 
 
  Date: Thu, 12 Apr 2012 09:49:43 -0400
  From: charlie@gmail.com
  To: squid-users@squid-cache.org
  Subject: Re: [squid-users] Program for realtime monitoring
 
  Give 777 permissions  to the folder sqstat !!!
 
  2012/4/12 CyberSoul cybers...@gmx.com:
  The first time I make the test, I just install apache without any
  particular configuration.
  
  I just make a folder sqstat inside WWW, and copy all the content
  there. To call the script you need to type
  http://ip-address-server/sqstat/sqstat.php
  
  For the Squid you just need to permit the access from IP where sqstat
  is, to the cache. For sqstat you need to make this config:
  $squidhost[0]=ip_proxy;
  $squidhost[1]=ip_proxy1;
   as many proxies server you have
  $squidport[0]=3128;
  $squidport[1]=3128;
  . one for each proxy server
  $cachemgr_passwd[0]=;
  $cachemgr_passwd[1]=;
  . one for each proxy server and between  puts the password if
  you have one.
  resolveip[0]=false;
  resolveip[1]=false;
  . one for each proxy server
  $group_by[0]=username;
  $group_by[1]=host;
  ... depend if you want to group by ip or username.
  
  That's all you need, anything else you need don't be afraid to ask
  
   Well, try to call script type by
   http://ip-address-server/sqstat/sqstat.php
   and in browser I see just text of sqstat.php file or again
   SqStat Error
   Error (13) Permission Denied
  
   I think trouble in httpd.conf, can you send me your httpd.conf file or 
   httpd.conf.default file?
  
  
  
 



Re: [squid-users] Program for realtime monitoring

2012-04-13 Thread Carlos Manuel Trepeu Pupo
I don't understand!! Why would you need to configure it on different machines?

On Fri, Apr 13, 2012 at 8:11 AM, Carlos Manuel Trepeu Pupo
charlie@gmail.com wrote:
 I don't understand!! Why would you need to configure it on different machines?

 On Fri, Apr 13, 2012 at 12:24 AM, Maqsood Ahmad maqsood...@hotmail.com 
 wrote:

 Hi,


 Is it possible that we can configure sqstat on separate machine



 Maqsood Ahmad






 Date: Thu, 12 Apr 2012 09:49:43 -0400
 From: charlie@gmail.com
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] Program for realtime monitoring

 Give 777 permissions  to the folder sqstat !!!

 2012/4/12 CyberSoul cybers...@gmx.com:
 The first time I make the test, I just install apache without any
 particular configuration.
 
 I just make a folder sqstat inside WWW, and copy all the content
 there. To call the script you need to type
 http://ip-address-server/sqstat/sqstat.php
 
 For the Squid you just need to permit the access from IP where sqstat
 is, to the cache. For sqstat you need to make this config:
 $squidhost[0]=ip_proxy;
 $squidhost[1]=ip_proxy1;
  as many proxies server you have
 $squidport[0]=3128;
 $squidport[1]=3128;
 . one for each proxy server
 $cachemgr_passwd[0]=;
 $cachemgr_passwd[1]=;
 . one for each proxy server and between  puts the password if
 you have one.
 resolveip[0]=false;
 resolveip[1]=false;
 . one for each proxy server
 $group_by[0]=username;
 $group_by[1]=host;
 ... depend if you want to group by ip or username.
 
 That's all you need, anything else you need don't be afraid to ask
 
  Well, try to call script type by
  http://ip-address-server/sqstat/sqstat.php
  and in browser I see just text of sqstat.php file or again
  SqStat Error
  Error (13) Permission Denied
 
  I think trouble in httpd.conf, can you send me your httpd.conf file or 
  httpd.conf.default file?
 
 
 



Re: [squid-users] Program for realtime monitoring

2012-04-12 Thread Carlos Manuel Trepeu Pupo
Give 777 permissions to the sqstat folder!!

2012/4/12 CyberSoul cybers...@gmx.com:
The first time I made the test, I just installed Apache without any
particular configuration.

I just made a folder sqstat inside the web root (WWW) and copied all the
content there. To call the script you need to go to
http://ip-address-server/sqstat/sqstat.php

For Squid you just need to permit cache manager access from the IP where sqstat
runs. For sqstat you need to make this config:
$squidhost[0]="ip_proxy";
$squidhost[1]="ip_proxy1";
# ... as many entries as proxy servers you have
$squidport[0]=3128;
$squidport[1]=3128;
# ... one for each proxy server
$cachemgr_passwd[0]="";
$cachemgr_passwd[1]="";
# ... one for each proxy server; put the password between the quotes if
you have one.
$resolveip[0]=false;
$resolveip[1]=false;
# ... one for each proxy server
$group_by[0]="username";
$group_by[1]="host";
# ... depending on whether you want to group by IP or username.

That's all you need. If you need anything else, don't be afraid to ask.
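
On the Squid side, permitting access from the sqstat machine means letting that
host query the cache manager; a sketch for a 3.0/3.1-era squid.conf, with the
web server's address as a placeholder:

acl manager proto cache_object
acl sqstat_host src 192.168.1.10
http_access allow manager sqstat_host
http_access deny manager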

 Well, I try to call the script by typing
 http://ip-address-server/sqstat/sqstat.php
 and in the browser I just see the text of the sqstat.php file, or again:
 SqStat Error
 Error (13) Permission Denied

 I think the trouble is in httpd.conf. Can you send me your httpd.conf file or
 httpd.conf.default file?





Re: [squid-users] Program for realtime monitoring

2012-04-11 Thread Carlos Manuel Trepeu Pupo
I'm using SqStat 1.20 and it works great for me!!

2012/4/11 Alex Crow a...@nanogherkin.com:
 On 11/04/12 06:08, CyberSoul wrote:

 Hi all, could anyone suggest a utility or script for realtime
 monitoring of Squid which can meet the following requirements:

 1) work through web-inteface
 2) show current connection speed and username ( or ip)
 3) show the full path of the file that is currently downloaded or browsing

 For example, I open browser and go to the address
 http://ip-address-squid/program-for-realtime-monitoring

 and I can see about the following columns (for example)

 No.Username (or IP)   Current connection speedURL

 1user1 (192.168.1.17)  145 KB/s
  
 http://uk.download.nvidia.com/XFree86/Linux-x86/295.33/NVIDIA-Linux-x86-295.33.run
 2user2 (192.168.1.53)  89 KB/s
 http://www.centos.org/modules/tinycontent/index.php?id=2

 Any ideas?


 Ntop (http://www.ntop.org/products/ntop/) is pretty nice, but as Amos said, to
 get things like what file they are downloading you'll probably have to
 write something to parse the cachemgr data from Squid.

 Alex




Re: [squid-users] limiting connections

2012-04-05 Thread Carlos Manuel Trepeu Pupo
On Thu, Apr 5, 2012 at 7:01 AM, H h...@hm.net.br wrote:
 Carlos Manuel Trepeu Pupo wrote:
 On Tue, Apr 3, 2012 at 6:35 PM, H h...@hm.net.br wrote:
 Eliezer Croitoru wrote:
 On 03/04/2012 18:30, Carlos Manuel Trepeu Pupo wrote:
 On Mon, Apr 2, 2012 at 6:43 PM, Amos Jeffriessqu...@treenet.co.nz
 wrote:
 On 03.04.2012 02:21, Carlos Manuel Trepeu Pupo wrote:

 Thanks a looot !! That's what I'm missing, everything work
 fine now. So this script can use it cause it's already works.

 Now, I need to know if there is any way to consult the active request
 in squid that work faster that squidclient 


 ACL types are pretty easy to add to the Squid code. I'm happy to
 throw an
 ACL patch your way for a few $$.

 Which comes back to me earlier still unanswered question about why
 you want
 to do this very, very strange thing?

 Amos



 OK !! Here the complicate and strange explanation:

 Where I work we have 128 Kbps for the use of almost 80 PCs, a few of
 them use download accelerators and saturate the channel. I began to
 use the ACL maxconn but I have still a few problems. 60 of the clients
 are under an ISA server that I don't administrate, so I can't limit
 the maxconn to them like the others. Now with this ACL, everyone can
 download but with only one connection. that's the strange main idea.
 what do you mean by only one connection?
 if it's under one isa server then all of them share the same external IP.


 Hi

 I am following this thread with mixed feelings of weirdness and
 admiration ...

 there are always two ways to reach a far point, it's left around or
 right around the world, depending on your position one of the ways is
 always the longer one. I can understand that some without hurry and
 money issues chose the longer one, perhaps also because of more chance
 for adventurous happenings, unknown and the unexpected

 so know I explained in a similar long way what I do not understand, why
 would you make such a complicated out of scope code, slow, certainly
 dangerous ... if at least it would be perl, but bash calling external
 prog and grepping, whow ... when you can solve it with a line of code ?

 this task would fit pf or ipfw much better, would be more elegant and
 zillions times faster and secure, not speaking about time investment,
 how much time you need to write 5/6 keywords of code?

 or is it for demonstration purpose, showing it as an alternative
 possibility?


 It's great read this. I just know BASH SHELL, but if you tell me that
 I can make this safer and faster... Previously post I talk about
 this!! That someone tell me if there is a better way of do that, I'm
 newer !! Please, if you can guide me



 who knows ...

 what is your purpose? solve bandwidth problems? Connection rate?
 Congestion? I believe that limiting to *one* download is not your real
 intention, because the browser could still open hundreds of regular
 pages and your download limit is nuked and was for nothing ...

 what is your operating system?


I intend to solve bandwidth problems. For the people who use download
managers or accelerators, just limit them to 1 connection. Otherwise, I
tried to solve it with delay_pools: the traffic that I deliver to the
client was just as I configured, but with accelerators the upload
still saturated the channel.



 --
 H
 +55 11 4249.



Re: [squid-users] limiting connections

2012-04-05 Thread Carlos Manuel Trepeu Pupo
On Thu, Apr 5, 2012 at 10:32 AM, H h...@hm.net.br wrote:
 Carlos Manuel Trepeu Pupo wrote:
  what is your purpose? solve bandwidth problems? Connection rate?
  Congestion? I believe that limiting to *one* download is not your real
  intention, because the browser could still open hundreds of regular
  pages and your download limit is nuked and was for nothing ...
 
  what is your operating system?
 
 I pretend solve bandwidth problems. For the persons who uses download
 manager or accelerators, just limit them to 1 connection. Otherwise I
 tried to solve with delay_pool, the packet that I delivery to the
 client was just like I configured, but with accelerators the upload
 saturate the channel.



 since you did not say what OS you're running I can give you only some
 direction. Any or most Unix firewalls can solve this easily: if you use
 Linux you may like pf, with FreeBSD you should go with ipfw. The latter
 is probably easier to understand, but for both you will find zillions of
 examples on the net; look for short setups.

Sorry, I forgot!! Squid is on Debian 6.0, 32-bit. My firewall is
Kerio, but on Windows, and I'm not so glad to be using it!!


 first you divide your bandwidth between your users

First I searched for dynamic bandwidth management with Squid, but Squid does
not do this, and after much searching I only found ISA Server with a
third-party plugin, but I prefer Linux.


 if you use TPROXY you can divide/limit the bandwidth on the outside
 interface in order to limit only access to the link, but if squid has the
 object in cache it might go out as fast as it can

 you still can manage the bandwidth pool with delay parameters if you wish

I tried with delay_pools, but delay_pools only manage the download
rate, not the upload, and I need both. The last time I tried
delay_pools the download accelerator downloaded at the speed that
I specified, but the proxy consumed the whole channel with the download,
something that I never understood.



 I guess you meant download accelerator, not manager. You can then limit
 the connection rate within the bandwidth for each user and each
 protocol. For DL accelerators you should pay attention to UDP packets as
 well. You did not say how many users and how much bandwidth you have, but
 limit the TCP connections to 25 and UDP to 40 to begin with, then test it
 until you come to something that suits your needs.

I have 128 kbps, and I have no idea about UDP packets!! That's
new for me!! Any documentation that I can read?


 you still could check which DL accelerators your people are using and then
 limit or block only those P2P ports, which used to be very effective

Even if I do not permit CONNECT, can the users still use P2P ports?

Thanks for this, it clears up many questions I had about Squid!!
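
For reference, a stock squid.conf already restricts CONNECT tunnelling to the
ports listed in SSL_ports with rules along these lines, so plain P2P ports are
normally refused at the proxy (direct connections that bypass the proxy are a
separate matter, which is what the firewall advice above is about):

acl SSL_ports port 443
acl CONNECT method CONNECT
http_access deny CONNECT !SSL_ports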





 --
 H
 +55 11 4249.



Re: [squid-users] limiting connections

2012-04-04 Thread Carlos Manuel Trepeu Pupo
On Tue, Apr 3, 2012 at 6:35 PM, H h...@hm.net.br wrote:
 Eliezer Croitoru wrote:
 On 03/04/2012 18:30, Carlos Manuel Trepeu Pupo wrote:
 On Mon, Apr 2, 2012 at 6:43 PM, Amos Jeffriessqu...@treenet.co.nz
 wrote:
 On 03.04.2012 02:21, Carlos Manuel Trepeu Pupo wrote:

 Thanks a looot !! That's what I'm missing, everything work
 fine now. So this script can use it cause it's already works.

 Now, I need to know if there is any way to consult the active request
 in squid that work faster that squidclient 


 ACL types are pretty easy to add to the Squid code. I'm happy to
 throw an
 ACL patch your way for a few $$.

 Which comes back to me earlier still unanswered question about why
 you want
 to do this very, very strange thing?

 Amos



 OK !! Here the complicate and strange explanation:

 Where I work we have 128 Kbps for the use of almost 80 PCs, a few of
 them use download accelerators and saturate the channel. I began to
 use the ACL maxconn but I have still a few problems. 60 of the clients
 are under an ISA server that I don't administrate, so I can't limit
 the maxconn to them like the others. Now with this ACL, everyone can
 download but with only one connection. that's the strange main idea.
 what do you mean by only one connection?
 if it's under one isa server then all of them share the same external IP.


 Hi

 I am following this thread with mixed feelings of weirdness and
 admiration ...

 there are always two ways to reach a far point, it's left around or
 right around the world, depending on your position one of the ways is
 always the longer one. I can understand that some without hurry and
 money issues chose the longer one, perhaps also because of more chance
 for adventurous happenings, unknown and the unexpected

 so now I have explained, in a similarly long way, what I do not understand: why
 would you write such complicated, out-of-scope code, slow and certainly
 dangerous ... if at least it were perl, but bash calling an external
 prog and grepping, wow ... when you can solve it with a line of code?

 this task would fit pf or ipfw much better; it would be more elegant and
 zillions of times faster and more secure, not to speak of the time investment;
 how much time do you need to write 5/6 keywords of code?

 or is it for demonstration purpose, showing it as an alternative
 possibility?


It's great to read this. I only know bash shell scripting, but if you tell me
that I can make this safer and faster... In a previous post I talked about
this, asking whether someone could tell me a better way to do it; I'm
new at this!! Please guide me if you can.


 --
 H
 +55 11 4249.



Re: [squid-users] limiting connections

2012-04-03 Thread Carlos Manuel Trepeu Pupo
On Mon, Apr 2, 2012 at 6:43 PM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 03.04.2012 02:21, Carlos Manuel Trepeu Pupo wrote:

 Thanks a looot !! That's what I'm missing, everything work
 fine now. So this script can use it cause it's already works.

 Now, I need to know if there is any way to consult the active request
 in squid that work faster that squidclient 


 ACL types are pretty easy to add to the Squid code. I'm happy to throw an
 ACL patch your way for a few $$.

 Which comes back to my earlier, still unanswered, question about why you want
 to do this very, very strange thing?

 Amos



OK!! Here is the complicated and strange explanation:

Where I work we have 128 Kbps for the use of almost 80 PCs, and a few of
them use download accelerators and saturate the channel. I began to
use the maxconn ACL but I still have a few problems: 60 of the clients
are behind an ISA server that I don't administer, so I can't limit
maxconn for them like I do for the others. Now with this ACL, everyone can
download, but with only one connection. That's the strange main idea.
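
For context, the maxconn ACL mentioned above matches once a single client IP
has more than the given number of connections, so a per-IP cap of one
connection looks roughly like this (the source range is a placeholder):

acl localnet src 192.168.0.0/24
acl too_many maxconn 1
http_access deny localnet too_many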


Re: [squid-users] limiting connections

2012-04-03 Thread Carlos Manuel Trepeu Pupo
On Tue, Apr 3, 2012 at 4:36 PM, Eliezer Croitoru elie...@ngtech.co.il wrote:
 On 03/04/2012 18:30, Carlos Manuel Trepeu Pupo wrote:

 On Mon, Apr 2, 2012 at 6:43 PM, Amos Jeffriessqu...@treenet.co.nz
  wrote:

 On 03.04.2012 02:21, Carlos Manuel Trepeu Pupo wrote:


 Thanks a looot !! That's what I'm missing, everything work
 fine now. So this script can use it cause it's already works.

 Now, I need to know if there is any way to consult the active request
 in squid that work faster that squidclient 


 ACL types are pretty easy to add to the Squid code. I'm happy to throw an
 ACL patch your way for a few $$.

 Which comes back to me earlier still unanswered question about why you
 want
 to do this very, very strange thing?

 Amos



 OK !! Here the complicate and strange explanation:

 Where I work we have 128 Kbps for the use of almost 80 PCs, a few of
 them use download accelerators and saturate the channel. I began to
 use the ACL maxconn but I have still a few problems. 60 of the clients
 are under an ISA server that I don't administrate, so I can't limit
 the maxconn to them like the others. Now with this ACL, everyone can
 download but with only one connection. that's the strange main idea.

 what do you mean by only one connection?
 if it's under one isa server then all of them share the same external IP.


Yes, all the users behind the ISA server together can only download a given
file with one connection, no more, because as you say they share the same IP.


 --
 Eliezer Croitoru
 https://www1.ngtech.co.il
 IT consulting for Nonprofit organizations
 eliezer at ngtech.co.il


Re: [squid-users] limiting connections

2012-04-02 Thread Carlos Manuel Trepeu Pupo
Thanks a lot!! That's what I was missing; everything works
fine now. So this script can be used, since it already works.

Now I need to know if there is any way to query the active requests
in Squid that works faster than squidclient ...

On Sat, Mar 31, 2012 at 9:58 PM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 1/04/2012 7:58 a.m., Carlos Manuel Trepeu Pupo wrote:

 On Sat, Mar 31, 2012 at 4:18 AM, Amos Jeffriessqu...@treenet.co.nz
  wrote:

 On 31/03/2012 3:07 a.m., Carlos Manuel Trepeu Pupo wrote:


 Now I have the following question:
 The possible error to return are 'OK' or 'ERR', if I assume like
 Boolean answer, OK-TRUE    ERR-FALSE. Is this right ?


 Equivalent, yes. Specifically it means success / failure or match /
 non-match on the ACL.


 So, if I deny my acl:
 http_access deny external_helper_acl

 work like this (with the http_access below):
 If return OK -    I denied
 If return ERR -    I do not denied

 It's right this ??? Tanks again for the help !!!


 Correct.

 OK, following the idea of this thread that's what I have:

 #!/bin/bash
 while read line; do
         # -  This it for debug (Testing i saw that not always save to
 file, maybe not always pass from this ACL)
         echo $line  /home/carlos/guarda

         result=`squidclient -h 10.11.10.18 mgr:active_requests | grep
 -c $line`

   if [ $result == 1 ]
         then
         echo 'OK'
         echo 'OK'/home/carlos/guarda
   else
         echo 'ERR'
         echo 'ERR'/home/carlos/guarda
   fi
 done

 In the squid.conf this is the configuration:

 acl test src 10.11.10.12/32
 acl test src 10.11.10.11/32

 acl extensions url_regex /etc/squid3/extensions
 # extensions contains:

 \.(iso|avi|wav|mp3|mp4|mpeg|swf|flv|mpg|wma|ogg|wmv|asx|asf|deb|rpm|exe|zip|tar|tgz|rar|ppt|doc|tiff|pdf)$
 external_acl_type one_conn %URI /home/carlos/contain
 acl limit external one_conn

 http_access allow localhost
 http_access deny extensions !limit
 deny_info ERR_LIMIT limit
 http_access allow test


 I start to download from:
 10.11.10.12 -
  http://ch.releases.ubuntu.com//oneiric/ubuntu-11.10-desktop-i386.iso
 then start from:
 10.11.10.11 -
  http://ch.releases.ubuntu.com//oneiric/ubuntu-11.10-desktop-i386.iso

 And let me download. What I'm missing ???


 You must set ttl=0 negative_ttl=0 grace=0 as options on your
 external_acl_type directive, to disable caching optimizations on the helper
 results.
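
 Applied to the directive already used in this thread, that is something like:

 external_acl_type one_conn ttl=0 negative_ttl=0 grace=0 %URI /home/carlos/contain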

 Amos


Re: [squid-users] limiting connections

2012-03-31 Thread Carlos Manuel Trepeu Pupo
On Sat, Mar 31, 2012 at 4:18 AM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 31/03/2012 3:07 a.m., Carlos Manuel Trepeu Pupo wrote:


 Now I have the following question:
 The possible answers to return are 'OK' or 'ERR'; if I take them as a
 Boolean answer, OK -> TRUE, ERR -> FALSE. Is this right?


 Equivalent, yes. Specifically it means success / failure or match /
 non-match on the ACL.


 So, if I deny my acl:
 http_access deny external_helper_acl

 it works like this (with the http_access below):
 If it returns OK -> it is denied
 If it returns ERR -> it is not denied

 Is this right? Thanks again for the help!!


 Correct.

OK, following the idea of this thread, here's what I have:

#!/bin/bash
while read line; do
    # - This is for debugging (testing, I saw that it does not always save to
    # the file; maybe not every request passes through this ACL)
    echo "$line" >> /home/carlos/guarda

    result=`squidclient -h 10.11.10.18 mgr:active_requests | grep -c "$line"`

    if [ "$result" == 1 ]
    then
        echo 'OK'
        echo 'OK' >> /home/carlos/guarda
    else
        echo 'ERR'
        echo 'ERR' >> /home/carlos/guarda
    fi
done

In the squid.conf this is the configuration:

acl test src 10.11.10.12/32
acl test src 10.11.10.11/32

acl extensions url_regex /etc/squid3/extensions
# extensions contains:
\.(iso|avi|wav|mp3|mp4|mpeg|swf|flv|mpg|wma|ogg|wmv|asx|asf|deb|rpm|exe|zip|tar|tgz|rar|ppt|doc|tiff|pdf)$
external_acl_type one_conn %URI /home/carlos/contain
acl limit external one_conn

http_access allow localhost
http_access deny extensions !limit
deny_info ERR_LIMIT limit
http_access allow test


I start a download from:
10.11.10.12 ->
http://ch.releases.ubuntu.com//oneiric/ubuntu-11.10-desktop-i386.iso
then start from:
10.11.10.11 ->
http://ch.releases.ubuntu.com//oneiric/ubuntu-11.10-desktop-i386.iso

And it lets me download. What am I missing?


# -

http_access deny all




 Amos



Re: [squid-users] limiting connections

2012-03-30 Thread Carlos Manuel Trepeu Pupo
On Thu, Mar 29, 2012 at 4:03 PM, Eliezer Croitoru elie...@ngtech.co.il wrote:
 On 29/03/2012 21:05, Carlos Manuel Trepeu Pupo wrote:

 On Tue, Mar 27, 2012 at 1:23 PM, Eliezer Croitoruelie...@ngtech.co.il
  wrote:

 On 27/03/2012 17:27, Carlos Manuel Trepeu Pupo wrote:


 On Mon, Mar 26, 2012 at 5:45 PM, Amos Jeffriessqu...@treenet.co.nz
  wrote:


 On 27.03.2012 10:13, Carlos Manuel Trepeu Pupo wrote:



 On Sat, Mar 24, 2012 at 6:31 PM, Amos Jeffriessqu...@treenet.co.nz
 wrote:



 On 25/03/2012 7:23 a.m., Carlos Manuel Trepeu Pupo wrote:

 On Thu, Mar 22, 2012 at 10:00 PM, Amos Jeffries wrote:




 On 23/03/2012 5:42 a.m., Carlos Manuel Trepeu Pupo wrote:




 I need to block each user to make just one connection to download
 specific extension files, but I dont know how to tell that can
 make
 just one connection to each file and not just one connection to
 every
 file with this extension.

 i.e:
 www.google.com #All connection that required
 www.any.domain.com/my_file.rar #just one connection to that file
 www.other.domain.net/other_file.iso #just connection to this file
 www.other_domain1.com/other_file1.rar #just one connection to that
 file

 I hope you understand me and can help me, I have my boss hurrying
 me
 !!!





 There is no easy way to test this in Squid.

 You need an external_acl_type helper which gets given the URI and
 decides
 whether it is permitted or not. That decision can be made by
 querying
 Squid
 cache manager for the list of active_requests and seeing if the URL
 appears
 more than once.




 Hello Amos, following your instructions I make this
 external_acl_type
 helper:

 #!/bin/bash
 result=`squidclient -h 192.168.19.19 mgr:active_requests | grep -c
 $1`
 if [ $result -eq 0 ]
 then
 echo 'OK'
 else
 echo 'ERR'
 fi

 # If I have the same URI then I denied. I make a few test and it
 work
 for me. The problem is when I add the rule to the squid. I make
 this:

 acl extensions url_regex /etc/squid3/extensions
 external_acl_type one_conn %URI /home/carlos/script
 acl limit external one_conn

 # where extensions have:





 \.(iso|avi|wav|mp3|mp4|mpeg|swf|flv|mpg|wma|ogg|wmv|asx|asf|deb|rpm|exe|zip|tar|tgz|rar|ppt|doc|tiff|pdf)$

 http_access deny extensions limit


 So when I make squid3 -k reconfigure the squid stop working

 What can be happening ???





 * The helper needs to be running in a constant loop.
 You can find an example




 http://bazaar.launchpad.net/~squid/squid/3.2/view/head:/helpers/url_rewrite/fake/url_fake_rewrite.sh
 although that is re-writer and you do need to keep the OK/ERR for
 external
 ACL.




 Sorry, this is my first helper, I do not understand the meaning of
 running in a constant loop, in the example I see something like I do.
 Making some test I found that without this line :
 result=`squidclient -h 192.168.19.19 mgr:active_requests | grep -c
 $1`
 the helper not crash, dont work event too, but do not crash, so i
 consider this is in some way the problem.





 Squid starts helpers then uses the STDIN channel to pass it a series of
 requests, reading STDOUt channel for the results. The helper once
 started
 is
 expected to continue until a EOL/close/terminate signal is received on
 its
 STDIN.

 Your helper is exiting without being asked to be Squid after only one
 request. That is logged by Squid as a crash.




 * eq 0 - there should always be 1 request matching the URL. Which
 is
 the
 request you are testing to see if its1 or not. You are wanting to
 deny
 for
 the case where there are *2* requests in existence.




 This is true, but the way I saw was: If the URL do not exist, so
 can't be duplicate, I think isn't wrong !!




 It can't not exist. Squid is already servicing the request you are
 testing
 about.

 Like this:

  receive HTTP request -    (count=1)
  - test ACL (count=1 -    OK)
  - done (count=0)

  receive a HTTP request (count-=1)
  - test ACL (count=1 -    OK)
  receive b HTTP request (count=2)
  - test ACL (count=2 -    ERR)
  - reject b (count=1)
  done a (count=0)



 With your explanation and code from Eliezer Croitoru I made this:

 #!/bin/bash

 while read line; do
        result=`squidclient -h 192.168.19.19 mgr:active_requests | grep
 -c $line`

        echo $line    /home/carlos/guarda       # -    Add this line
 to
 see in a file the $URI I passed to the helper

        if [ $result -eq 1 ]                                   # -
 With your great explain you made me, I change to 1
        then
        echo 'OK'
        else
        echo 'ERR'
        fi
 done

 It's look like it's gonna work, but, here another miss.
 1- The echo $line    /home/carlos/guarda do not save anything to
 the
 file.
 2- When I return 'OK' then in my .conf I can't make a rule like I
 wrote before, I have to make something like this: http_access deny
 extensions !limit, in the many helps you bring me guys, I learn that
 the name limit here its not functional. The deny of limit its
 because when there are just one connection I cant block

Fwd: [squid-users] limiting connections

2012-03-29 Thread Carlos Manuel Trepeu Pupo
On Tue, Mar 27, 2012 at 1:23 PM, Eliezer Croitoru elie...@ngtech.co.il wrote:
 On 27/03/2012 17:27, Carlos Manuel Trepeu Pupo wrote:

 On Mon, Mar 26, 2012 at 5:45 PM, Amos Jeffriessqu...@treenet.co.nz
  wrote:

 On 27.03.2012 10:13, Carlos Manuel Trepeu Pupo wrote:


 On Sat, Mar 24, 2012 at 6:31 PM, Amos Jeffriessqu...@treenet.co.nz
 wrote:


 On 25/03/2012 7:23 a.m., Carlos Manuel Trepeu Pupo wrote:

 On Thu, Mar 22, 2012 at 10:00 PM, Amos Jeffries wrote:



 On 23/03/2012 5:42 a.m., Carlos Manuel Trepeu Pupo wrote:



 I need to block each user to make just one connection to download
 specific extension files, but I dont know how to tell that can make
 just one connection to each file and not just one connection to
 every
 file with this extension.

 i.e:
 www.google.com #All connection that required
 www.any.domain.com/my_file.rar #just one connection to that file
 www.other.domain.net/other_file.iso #just connection to this file
 www.other_domain1.com/other_file1.rar #just one connection to that
 file

 I hope you understand me and can help me, I have my boss hurrying me
 !!!




 There is no easy way to test this in Squid.

 You need an external_acl_type helper which gets given the URI and
 decides
 whether it is permitted or not. That decision can be made by querying
 Squid
 cache manager for the list of active_requests and seeing if the URL
 appears
 more than once.



 Hello Amos, following your instructions I make this external_acl_type
 helper:

 #!/bin/bash
 result=`squidclient -h 192.168.19.19 mgr:active_requests | grep -c
 $1`
 if [ $result -eq 0 ]
 then
 echo 'OK'
 else
 echo 'ERR'
 fi

 # If I have the same URI then I denied. I make a few test and it work
 for me. The problem is when I add the rule to the squid. I make this:

 acl extensions url_regex /etc/squid3/extensions
 external_acl_type one_conn %URI /home/carlos/script
 acl limit external one_conn

 # where extensions have:




 \.(iso|avi|wav|mp3|mp4|mpeg|swf|flv|mpg|wma|ogg|wmv|asx|asf|deb|rpm|exe|zip|tar|tgz|rar|ppt|doc|tiff|pdf)$

 http_access deny extensions limit


 So when I make squid3 -k reconfigure the squid stop working

 What can be happening ???




 * The helper needs to be running in a constant loop.
 You can find an example



 http://bazaar.launchpad.net/~squid/squid/3.2/view/head:/helpers/url_rewrite/fake/url_fake_rewrite.sh
 although that is re-writer and you do need to keep the OK/ERR for
 external
 ACL.



 Sorry, this is my first helper, I do not understand the meaning of
 running in a constant loop, in the example I see something like I do.
 Making some test I found that without this line :
 result=`squidclient -h 192.168.19.19 mgr:active_requests | grep -c $1`
 the helper not crash, dont work event too, but do not crash, so i
 consider this is in some way the problem.




 Squid starts helpers then uses the STDIN channel to pass it a series of
 requests, reading STDOUt channel for the results. The helper once started
 is
 expected to continue until a EOL/close/terminate signal is received on
 its
 STDIN.

 Your helper is exiting without being asked to be Squid after only one
 request. That is logged by Squid as a crash.




 * eq 0 - there should always be 1 request matching the URL. Which is
 the
 request you are testing to see if its1 or not. You are wanting to deny
 for
 the case where there are *2* requests in existence.



 This is true, but the way I saw was: If the URL do not exist, so
 can't be duplicate, I think isn't wrong !!



 It can't not exist. Squid is already servicing the request you are
 testing
 about.

 Like this:

  receive HTTP request -  (count=1)
  - test ACL (count=1 -  OK)
  - done (count=0)

  receive a HTTP request (count-=1)
  - test ACL (count=1 -  OK)
  receive b HTTP request (count=2)
  - test ACL (count=2 -  ERR)
  - reject b (count=1)
  done a (count=0)


 With your explanation and code from Eliezer Croitoru I made this:

 #!/bin/bash

 while read line; do
        result=`squidclient -h 192.168.19.19 mgr:active_requests | grep
 -c $line`

        echo $line  /home/carlos/guarda       # -  Add this line to
 see in a file the $URI I passed to the helper

        if [ $result -eq 1 ]                                   # -
 With your great explain you made me, I change to 1
        then
        echo 'OK'
        else
        echo 'ERR'
        fi
 done

 It's look like it's gonna work, but, here another miss.
 1- The echo $line  /home/carlos/guarda do not save anything to the
 file.
 2- When I return 'OK' then in my .conf I can't make a rule like I
 wrote before, I have to make something like this: http_access deny
 extensions !limit, in the many helps you bring me guys, I learn that
 the name limit here its not functional. The deny of limit its
 because when there are just one connection I cant block the page.
 3- With the script just like Eliezer tape it the page with the URL to
 download stay loading infinitely.

 So, I have less work, can you help me ??


 1. the first

Re: [squid-users] limiting connections

2012-03-29 Thread Carlos Manuel Trepeu Pupo
On Thu, Mar 29, 2012 at 4:03 PM, Eliezer Croitoru elie...@ngtech.co.il wrote:
 On 29/03/2012 21:05, Carlos Manuel Trepeu Pupo wrote:

 On Tue, Mar 27, 2012 at 1:23 PM, Eliezer Croitoruelie...@ngtech.co.il
  wrote:

 On 27/03/2012 17:27, Carlos Manuel Trepeu Pupo wrote:


 On Mon, Mar 26, 2012 at 5:45 PM, Amos Jeffriessqu...@treenet.co.nz
  wrote:


 On 27.03.2012 10:13, Carlos Manuel Trepeu Pupo wrote:



 On Sat, Mar 24, 2012 at 6:31 PM, Amos Jeffriessqu...@treenet.co.nz
 wrote:



 On 25/03/2012 7:23 a.m., Carlos Manuel Trepeu Pupo wrote:

 On Thu, Mar 22, 2012 at 10:00 PM, Amos Jeffries wrote:




 On 23/03/2012 5:42 a.m., Carlos Manuel Trepeu Pupo wrote:




 I need to block each user to make just one connection to download
 specific extension files, but I dont know how to tell that can
 make
 just one connection to each file and not just one connection to
 every
 file with this extension.

 i.e:
 www.google.com #All connection that required
 www.any.domain.com/my_file.rar #just one connection to that file
 www.other.domain.net/other_file.iso #just connection to this file
 www.other_domain1.com/other_file1.rar #just one connection to that
 file

 I hope you understand me and can help me, I have my boss hurrying
 me
 !!!





 There is no easy way to test this in Squid.

 You need an external_acl_type helper which gets given the URI and
 decides
 whether it is permitted or not. That decision can be made by
 querying
 Squid
 cache manager for the list of active_requests and seeing if the URL
 appears
 more than once.




 Hello Amos, following your instructions I make this
 external_acl_type
 helper:

 #!/bin/bash
 result=`squidclient -h 192.168.19.19 mgr:active_requests | grep -c
 $1`
 if [ $result -eq 0 ]
 then
 echo 'OK'
 else
 echo 'ERR'
 fi

 # If I have the same URI then I denied. I make a few test and it
 work
 for me. The problem is when I add the rule to the squid. I make
 this:

 acl extensions url_regex /etc/squid3/extensions
 external_acl_type one_conn %URI /home/carlos/script
 acl limit external one_conn

 # where extensions have:





 \.(iso|avi|wav|mp3|mp4|mpeg|swf|flv|mpg|wma|ogg|wmv|asx|asf|deb|rpm|exe|zip|tar|tgz|rar|ppt|doc|tiff|pdf)$

 http_access deny extensions limit


 So when I make squid3 -k reconfigure the squid stop working

 What can be happening ???





 * The helper needs to be running in a constant loop.
 You can find an example




 http://bazaar.launchpad.net/~squid/squid/3.2/view/head:/helpers/url_rewrite/fake/url_fake_rewrite.sh
 although that is re-writer and you do need to keep the OK/ERR for
 external
 ACL.




 Sorry, this is my first helper, I do not understand the meaning of
 running in a constant loop, in the example I see something like I do.
 Making some test I found that without this line :
 result=`squidclient -h 192.168.19.19 mgr:active_requests | grep -c
 $1`
 the helper not crash, dont work event too, but do not crash, so i
 consider this is in some way the problem.





 Squid starts helpers then uses the STDIN channel to pass it a series of
 requests, reading STDOUt channel for the results. The helper once
 started
 is
 expected to continue until a EOL/close/terminate signal is received on
 its
 STDIN.

 Your helper is exiting without being asked to be Squid after only one
 request. That is logged by Squid as a crash.




 * eq 0 - there should always be 1 request matching the URL. Which
 is
 the
 request you are testing to see if its1 or not. You are wanting to
 deny
 for
 the case where there are *2* requests in existence.




 This is true, but the way I saw was: If the URL do not exist, so
 can't be duplicate, I think isn't wrong !!




 It can't not exist. Squid is already servicing the request you are
 testing
 about.

 Like this:

  receive HTTP request -    (count=1)
  - test ACL (count=1 -    OK)
  - done (count=0)

  receive a HTTP request (count-=1)
  - test ACL (count=1 -    OK)
  receive b HTTP request (count=2)
  - test ACL (count=2 -    ERR)
  - reject b (count=1)
  done a (count=0)



 With your explanation and code from Eliezer Croitoru I made this:

 #!/bin/bash

 while read line; do
        result=`squidclient -h 192.168.19.19 mgr:active_requests | grep
 -c $line`

        echo $line    /home/carlos/guarda       # -    Add this line
 to
 see in a file the $URI I passed to the helper

        if [ $result -eq 1 ]                                   # -
 With your great explain you made me, I change to 1
        then
        echo 'OK'
        else
        echo 'ERR'
        fi
 done

 It's look like it's gonna work, but, here another miss.
 1- The echo $line    /home/carlos/guarda do not save anything to
 the
 file.
 2- When I return 'OK' then in my .conf I can't make a rule like I
 wrote before, I have to make something like this: http_access deny
 extensions !limit, in the many helps you bring me guys, I learn that
 the name limit here its not functional. The deny of limit its
 because when there are just one connection I cant block


Re: [squid-users] limiting connections

2012-03-27 Thread Carlos Manuel Trepeu Pupo
On Mon, Mar 26, 2012 at 5:45 PM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 27.03.2012 10:13, Carlos Manuel Trepeu Pupo wrote:

 On Sat, Mar 24, 2012 at 6:31 PM, Amos Jeffries squ...@treenet.co.nz
 wrote:

 On 25/03/2012 7:23 a.m., Carlos Manuel Trepeu Pupo wrote:

 On Thu, Mar 22, 2012 at 10:00 PM, Amos Jeffries wrote:


 On 23/03/2012 5:42 a.m., Carlos Manuel Trepeu Pupo wrote:


 I need to block each user to make just one connection to download
 specific extension files, but I dont know how to tell that can make
 just one connection to each file and not just one connection to every
 file with this extension.

 i.e:
 www.google.com #All connection that required
 www.any.domain.com/my_file.rar #just one connection to that file
 www.other.domain.net/other_file.iso #just connection to this file
 www.other_domain1.com/other_file1.rar #just one connection to that
 file

 I hope you understand me and can help me, I have my boss hurrying me
 !!!



 There is no easy way to test this in Squid.

 You need an external_acl_type helper which gets given the URI and
 decides
 whether it is permitted or not. That decision can be made by querying
 Squid
 cache manager for the list of active_requests and seeing if the URL
 appears
 more than once.


 Hello Amos, following your instructions I make this external_acl_type
 helper:

 #!/bin/bash
 result=`squidclient -h 192.168.19.19 mgr:active_requests | grep -c $1`
 if [ $result -eq 0 ]
 then
 echo 'OK'
 else
 echo 'ERR'
 fi

 # If I have the same URI then I denied. I make a few test and it work
 for me. The problem is when I add the rule to the squid. I make this:

 acl extensions url_regex /etc/squid3/extensions
 external_acl_type one_conn %URI /home/carlos/script
 acl limit external one_conn

 # where extensions have:



 \.(iso|avi|wav|mp3|mp4|mpeg|swf|flv|mpg|wma|ogg|wmv|asx|asf|deb|rpm|exe|zip|tar|tgz|rar|ppt|doc|tiff|pdf)$

 http_access deny extensions limit


 So when I make squid3 -k reconfigure the squid stop working

 What can be happening ???



 * The helper needs to be running in a constant loop.
 You can find an example


 http://bazaar.launchpad.net/~squid/squid/3.2/view/head:/helpers/url_rewrite/fake/url_fake_rewrite.sh
 although that is re-writer and you do need to keep the OK/ERR for
 external
 ACL.


 Sorry, this is my first helper, I do not understand the meaning of
 running in a constant loop, in the example I see something like I do.
 Making some test I found that without this line :
 result=`squidclient -h 192.168.19.19 mgr:active_requests | grep -c $1`
 the helper not crash, dont work event too, but do not crash, so i
 consider this is in some way the problem.



 Squid starts helpers and then uses the STDIN channel to pass them a series of
 requests, reading the STDOUT channel for the results. The helper, once started,
 is expected to continue until an EOL/close/terminate signal is received on its
 STDIN.

 Your helper is exiting without being asked to by Squid after only one
 request. That is logged by Squid as a crash.




 * eq 0 -> there should always be 1 request matching the URL, which is the
 request you are testing to see if it's >1 or not. You are wanting to deny for
 the case where there are *2* requests in existence.


 This is true, but the way I saw was: If the URL do not exist, so
 can't be duplicate, I think isn't wrong !!


 It can't not exist. Squid is already servicing the request you are testing
 about.

 Like this:

  receive HTTP request -> (count=1)
  -> test ACL (count=1 -> OK)
  -> done (count=0)

  receive "a" HTTP request (count=1)
  -> test ACL (count=1 -> OK)
  receive "b" HTTP request (count=2)
  -> test ACL (count=2 -> ERR)
  -> reject "b" (count=1)
  done "a" (count=0)

With your explanation and the code from Eliezer Croitoru I made this:

#!/bin/bash

while read line; do
    result=`squidclient -h 192.168.19.19 mgr:active_requests | grep -c "$line"`

    # - Added this line to see in a file the URI passed to the helper
    echo "$line" >> /home/carlos/guarda

    # - With the great explanation you gave me, I changed the test to 1
    if [ "$result" -eq 1 ]
    then
        echo 'OK'
    else
        echo 'ERR'
    fi
done

It looks like it's going to work, but here is another miss:
1- The echo "$line" >> /home/carlos/guarda does not save anything to the file.
2- When I return 'OK', in my .conf I can't make a rule like the one I
wrote before; I have to make something like this: http_access deny
extensions !limit. From the many helps you guys gave me, I learned that
the name "limit" here is not functional. The deny is on !limit
because when there is just one connection I can't block the page.
3- With the script just as Eliezer typed it, the page with the URL to
download stays loading infinitely.

So, I have less work left; can you help me?







 * ensure you have manager requests from localhost not going through the ACL
 test.
 test.


 I was doing this wrong: localhost was going through the ACL, but
 I have just changed it!! The problem persists

Re: [squid-users] limiting connections

2012-03-26 Thread Carlos Manuel Trepeu Pupo
On Sat, Mar 24, 2012 at 6:31 PM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 25/03/2012 7:23 a.m., Carlos Manuel Trepeu Pupo wrote:

 On Thu, Mar 22, 2012 at 10:00 PM, Amos Jeffries wrote:

 On 23/03/2012 5:42 a.m., Carlos Manuel Trepeu Pupo wrote:

 I need to block each user to make just one connection to download
 specific extension files, but I dont know how to tell that can make
 just one connection to each file and not just one connection to every
 file with this extension.

 i.e:
 www.google.com #All connection that required
 www.any.domain.com/my_file.rar #just one connection to that file
 www.other.domain.net/other_file.iso #just connection to this file
 www.other_domain1.com/other_file1.rar #just one connection to that file

 I hope you understand me and can help me, I have my boss hurrying me !!!


 There is no easy way to test this in Squid.

 You need an external_acl_type helper which gets given the URI and decides
 whether it is permitted or not. That decision can be made by querying
 Squid
 cache manager for the list of active_requests and seeing if the URL
 appears
 more than once.

 Hello Amos, following your instructions I make this external_acl_type
 helper:

 #!/bin/bash
 result=`squidclient -h 192.168.19.19 mgr:active_requests | grep -c $1`
 if [ $result -eq 0 ]
 then
 echo 'OK'
 else
 echo 'ERR'
 fi

 # If I have the same URI then I denied. I make a few test and it work
 for me. The problem is when I add the rule to the squid. I make this:

 acl extensions url_regex /etc/squid3/extensions
 external_acl_type one_conn %URI /home/carlos/script
 acl limit external one_conn

 # where extensions have:

 \.(iso|avi|wav|mp3|mp4|mpeg|swf|flv|mpg|wma|ogg|wmv|asx|asf|deb|rpm|exe|zip|tar|tgz|rar|ppt|doc|tiff|pdf)$

 http_access deny extensions limit


 So when I make squid3 -k reconfigure the squid stop working

 What can be happening ???


 * The helper needs to be running in a constant loop.
 You can find an example
 http://bazaar.launchpad.net/~squid/squid/3.2/view/head:/helpers/url_rewrite/fake/url_fake_rewrite.sh
 although that is re-writer and you do need to keep the OK/ERR for external
 ACL.

Sorry, this is my first helper; I do not understand what running in a
constant loop means, in the example I see something similar to what I do.
Making some tests I found that without this line:
result=`squidclient -h 192.168.19.19 mgr:active_requests | grep -c $1`
the helper does not crash. It does not work either, but it does not
crash, so I consider this line is somehow the problem.


 * eq 0 - there should always be 1 request matching the URL, which is the
 request you are testing to see if it is 1 or not. You want to deny for
 the case where there are *2* requests in existence.

This is true, but the way I saw it was: if the URL does not exist, it
cannot be duplicated, so I thought that was not wrong !!


 * ensure you have manager requests from localhost not going through the
 ACL test.

I was doing this wrong, the localhost requests were going through the ACL,
but I just changed that !!! The problem persists. What can I do ???



 Amos



Re: [squid-users] limiting connections

2012-03-24 Thread Carlos Manuel Trepeu Pupo
On Thu, Mar 22, 2012 at 10:00 PM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 23/03/2012 5:42 a.m., Carlos Manuel Trepeu Pupo wrote:

 I need to block each user to make just one connection to download
 specific extension files, but I dont know how to tell that can make
 just one connection to each file and not just one connection to every
 file with this extension.

 i.e:
 www.google.com #All connection that required
 www.any.domain.com/my_file.rar #just one connection to that file
 www.other.domain.net/other_file.iso #just connection to this file
 www.other_domain1.com/other_file1.rar #just one connection to that file

 I hope you understand me and can help me, I have my boss hurrying me !!!


 There is no easy way to test this in Squid.

 You need an external_acl_type helper which gets given the URI and decides
 whether it is permitted or not. That decision can be made by querying Squid
 cache manager for the list of active_requests and seeing if the URL appears
 more than once.

Hello Amos, following your instructions I made this external_acl_type helper:

#!/bin/bash
result=`squidclient -h 192.168.19.19 mgr:active_requests | grep -c $1`
if [ $result -eq 0 ]
then
echo 'OK'
else
echo 'ERR'
fi

# If the same URI already exists then I deny it. I made a few tests and it
works for me. The problem is when I add the rule to squid. I do this:

acl extensions url_regex /etc/squid3/extensions
external_acl_type one_conn %URI /home/carlos/script
acl limit external one_conn

# where extensions have:
\.(iso|avi|wav|mp3|mp4|mpeg|swf|flv|mpg|wma|ogg|wmv|asx|asf|deb|rpm|exe|zip|tar|tgz|rar|ppt|doc|tiff|pdf)$

http_access deny extensions limit


So when I run squid3 -k reconfigure, squid stops working

What could be happening ???

This is my log of squid:
Mar 24 09:25:04 test squid[28075]: helperHandleRead: unexpected read
from one_conn #1, 3 bytes 'OK '
Mar 24 09:25:04 test squid[28075]: helperHandleRead: unexpected read
from one_conn #2, 3 bytes 'OK '
Mar 24 09:25:04 test squid[28075]: WARNING: one_conn #1 (FD 15) exited
Mar 24 09:25:04 test squid[28075]: WARNING: one_conn #2 (FD 16) exited
Mar 24 09:25:04 test squid[28075]: CACHEMGR: unknown@192.168.19.19
requesting 'active_requests'
Mar 24 09:25:04 test squid[28075]: helperHandleRead: unexpected read
from one_conn #3, 3 bytes 'OK '
Mar 24 09:25:04 test squid[28075]: WARNING: one_conn #3 (FD 24) exited
Mar 24 09:25:04 test squid[28075]: helperHandleRead: unexpected read
from one_conn #4, 4 bytes 'ERR '
Mar 24 09:25:04 test squid[28075]: WARNING: one_conn #4 (FD 27) exited
Mar 24 09:25:04 test squid[28075]: Too few one_conn processes are running
Mar 24 09:25:04 test squid[28075]: storeDirWriteCleanLogs: Starting...
Mar 24 09:25:04 test squid[28075]: WARNING: Closing open FD   12
Mar 24 09:25:04 test squid[28075]:   Finished.  Wrote 25613 entries.
Mar 24 09:25:04 test squid[28075]:   Took 0.00 seconds (7740404.96 entries/sec).
Mar 24 09:25:04 test squid[28075]: The one_conn helpers are crashing
too rapidly, need help!



 Amos



Re: [squid-users] Re: Need some help about delay_parameters directive

2012-03-22 Thread Carlos Manuel Trepeu Pupo
The delay_parameters values are in bytes, not bits. Try it and tell us if it works !!

On Wed, Mar 21, 2012 at 2:11 AM, Muhammad Yousuf Khan sir...@gmail.com wrote:
 please help me. delay_parameter 1 32000/1024: this means that once I
 complete 10MB, whatever the size, my bandwidth should be limited to 32KB;
 but in this case I am only able to download 5MB and then my bandwidth
 shrinks down to 32KB. Why? Please help me. I searched the squid website
 and it clearly states that delay_parameters accepts bytes as values.
 Please help.

 Thanks.

 On Tue, Mar 20, 2012 at 6:58 PM, Muhammad Yousuf Khan sir...@gmail.com 
 wrote:
 here is my acl and i want to limit download after every 10 MB of
 download. now i am a bit confuse now. why this value giving me
 expected result.
 my_ip src 10.51.100.240
 delay_pools 1
 delay_class 1 1
 delay_parameters 1 1/2000
 delay_access 1 allow my_ip

 according to my learning and understanding of the squid delay_parameters
 directive, it accepts bytes as values. please correct me if I am wrong,
 because I am a newbie.
 so according to my computation the calculation should be some thing
 like that. 10M should be

 10 x1024 = 10240KB
 10240x1024 = 10485760 Bytes
 10485760 x 8 = 83886080 bits

 now 2000 is giving me the desired result except 83886080. why?

 Please correct me and tell me what is wrong with my calculation or
 understanding.

 Thanks,

 MYK


[squid-users] limiting connections

2012-03-22 Thread Carlos Manuel Trepeu Pupo
I need to limit each user to just one connection when downloading files
with specific extensions, but I do not know how to say that the limit is
one connection per file, and not just one connection in total for every
file with this extension.

i.e:
www.google.com #All connection that required
www.any.domain.com/my_file.rar #just one connection to that file
www.other.domain.net/other_file.iso #just connection to this file
www.other_domain1.com/other_file1.rar #just one connection to that file

I hope you understand me and can help me, I have my boss hurrying me !!!


Re: [squid-users] Re: Need some help about delay_parameters directive

2012-03-22 Thread Carlos Manuel Trepeu Pupo
If you want to limit to 128 KB/s once the download reaches 10 MB, then it
must be something like this:

delay_parameters 1 131072/10485760

Remember that 1 MB = 1024*1024 = 1048576 bytes; that is why 10485760 bytes
represent 10 MB. I hope this can help you !!!
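
Spelled out, the arithmetic behind that line is plain byte math:

128 KB/s = 128 * 1024       = 131072 bytes/s   (the sustained rate)
10 MB    = 10 * 1024 * 1024 = 10485760 bytes   (the size threshold)

delay_parameters 1 131072/10485760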

On Thu, Mar 22, 2012 at 4:41 PM, Muhammad Yousuf Khan sir...@gmail.com wrote:
 bandwidth


[squid-users] squid with parent sock5 proxy

2012-03-01 Thread Carlos Manuel Trepeu Pupo
Hi, I need to install squid, but my parent proxy only speaks SOCKS5.
Can I use squid on Windows for this ???


[squid-users] ACL

2012-02-17 Thread Carlos Manuel Trepeu Pupo
Hi !

I want to block:
http://*.google.com.cu

but allow:
http://www.google.com.cu/custom*

I mean, deny all the subdomains of google.com.cu except the URLs that
contain the pattern below.

I have Ubuntu with Squid 3.0 STABLE1 with this conf:

acl deny_google dstdom_regex -i google.com

acl allow_google urlpath_regex -i www.google.com.cu/custom

http_access allow allow_google
http_access deny deny_google

With this conf I want to allow the custom search but deny the rest. The
problem is that this configuration does not work ... what is wrong ??


Thanksss !!
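
For anyone hitting the same problem: urlpath_regex only tests the path
part of the URL, so a pattern containing www.google.com.cu can never match
it. One way to express the exception (a sketch, not tested on that setup)
is to match the whole URL instead:

acl deny_google dstdom_regex -i google\.com\.cu
acl allow_google url_regex -i www\.google\.com\.cu/custom
http_access allow allow_google
http_access deny deny_google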


[squid-users] blocking page preview of google

2012-02-14 Thread Carlos Manuel Trepeu Pupo
I have squid 3.0 STABLE1 and I allow my users to use Google. Now I have
very low bandwidth, so I need to block the page preview and other Google
services. How can I do something like that with Squid ? What else can help
to do that ?


Re: [squid-users] limit maxconn

2012-01-27 Thread Carlos Manuel Trepeu Pupo
On 1/26/12, Amos Jeffries squ...@treenet.co.nz wrote:
 On 27/01/2012 2:46 p.m., Carlos Manuel Trepeu Pupo wrote:
 I have squid 3.0 STABLE1 giving service to 340 clients. I need to
 limit the maxconn to 20, but I need to know if I put 192.168.10.0/24
 will limit each IP to 20 or the entire /24 to 20. In case that the
 rule it's for the entire /24, so I need to create the rule for each IP
 ?

 Put 192.168.10.0/24 where exactly?

Sorry for the unclear explanation !!

In the maxconn ACL? Wont work, maxconn takes a single value.
In a separate unrelated src ACL? notice how src != maxconn. And its
 test result is equally independent when tested. src looks for an
 individual IP (the packet src IP) in a set.

 Amos


# I have this:
acl client src 10.10.10.0/24
acl client src 10.71.0.0/24
acl client src 10.1.0.0/24

acl max_conn maxconn 10

http_access deny client max_conn

# The idea of the above configuration is to allow a maximum of 10 HTTP
connections from each IP in the client networks to the proxy.

I need to know if this works, or if this configuration allows just 10 HTTP
connections shared between all of them !!!
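
For reference, the maxconn documentation keeps the count per client source
IP, so the intent can be annotated like this (my annotation, not a reply
from the list):

acl max_conn maxconn 10
# matched once a single client IP has more than 10 connections open, so
# each address in "client" gets its own limit; it is not a shared limit
# across the whole network
http_access deny client max_conn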


[squid-users] limit maxconn

2012-01-26 Thread Carlos Manuel Trepeu Pupo
I have squid 3.0 STABLE1 giving service to 340 clients. I need to
limit maxconn to 20, but I need to know whether putting 192.168.10.0/24
will limit each IP to 20 or the entire /24 to 20. If the rule applies to
the entire /24, do I need to create a rule for each IP ?

Thanks


Re: [squid-users] save last access

2012-01-23 Thread Carlos Manuel Trepeu Pupo
By user I mean just real people, and maybe their IP.
By surf last I mean only when a user is loading a page.
I need the report in real time, but a user may have surfed many days
ago and I still need to keep that record.

I use Squid 3.0 STABLE1. What daemon can I use to do this ?

Thanks a lot for your answer !!

On Sat, Jan 21, 2012 at 1:23 AM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 21/01/2012 5:06 a.m., Carlos Manuel Trepeu Pupo wrote:

 Hello ! I need to know when my users surf last time, so I need to know
 if there is any way to have this information and save to an sql
 database.


 The Squid log files are text data. So the answer is yes.

 Please explain user.  Only real people? or any machine which connects to
 Squid?

 Please explain surf last. Only when a user is loading the page? or even
 when their machine is doing something automatically by itself?

 Please explain under what conditions you are wanting the information back.
 monthly report? weekly? daily? hourly? real-time?


 Current Squid releases support logging daemons which can send log data
 anywhere and translate it to any form. Squid-3.2 bundles with a DB
 (database) daemon which is also available from SourceForge for squid-2.7

 Older Squid need log file reader daemons. Like squidtaild, and logger2sql.

 Amos



[squid-users] save last access

2012-01-20 Thread Carlos Manuel Trepeu Pupo
Hello ! I need to know when my users last surfed, so I need to know
if there is any way to get this information and save it to an SQL
database.


Re: [squid-users] Configuring Squid LDAP Authentication

2012-01-11 Thread Carlos Manuel Trepeu Pupo
With that tutorial from papercut I configured my LDAP auth and
everything works great; post your .conf and the version of squid.

On Wed, Jan 11, 2012 at 1:30 PM, berry guru berryg...@gmail.com wrote:
 first s


[squid-users] something in the logs

2011-12-14 Thread Carlos Manuel Trepeu Pupo
In the log I see this:
1323897149.888  9 10.10.10.3 TCP_MEM_HIT/200 454242 GET
http://proxy.mydomain:3128/squid-internal-periodic/store_digest -
NONE/- application/cache-digest

What is the meaning of this ?


Re: AW: [squid-users] block TOR

2011-12-05 Thread Carlos Manuel Trepeu Pupo
I want to block the Tor traffic because my clients use it to get around my
rules about blocked sites. In my firewall it is a little more difficult to
refresh the nodes that I want to block.

Jenny said that he/she cannot establish a connection to the Tor net
through squid, but I do not see the problem: using CONNECT and port 443 is
all the client needs !!!

I'm waiting for you guys !!!

On Sun, Dec 4, 2011 at 1:50 AM, Jenny Lee bodycar...@live.com wrote:

 Judging from dst acl, ultrasurf traffic and all in this thread, this is 
 talking about outgoing traffic to Tor via squid.

 Why would anyone want to block Tor traffic to his/her webserver (if this is
 not an ecommerce site)? If it was an ecommerce site, they would know what to
 do already and not ask this question here. Tor exit lists are made available
 daily and the firewall is the place to drop them.

 I still want to hear what OP would say.

 Jenny




 From: amuel...@gmx.de
 To: squid-users@squid-cache.org
 Date: Sun, 4 Dec 2011 00:39:01 +0100
 Subject: AW: [squid-users] block TOR

 The question is which Tor traffic should be blocked: outgoing client
 traffic to the Tor network, or incoming HTTP requests from Tor exit nodes ?

 Andreas

 -Original Message-
 From: Jenny Lee [mailto:bodycar...@live.com]
 Sent: Sunday, 4 December 2011 00:09
 To: charlie@gmail.com; leolis...@solutti.com.br
 Cc: squid-users@squid-cache.org
 Subject: RE: [squid-users] block TOR


 I dont understand how you are managing to have anything to do with Tor to
 start with.

 Tor is speaking SOCKS5. You need Polipo to speak HTTP on the client side and
 SOCKS on the server side.

 I have actively tried to connect to 2 of our SOCKS5 machines (and Tor) via
 my Squid and I could not succeed. I have even tried Amos' custom squid with
 SOCKS support and still failed.

 Can someone explain to me as to how you are connecting to Tor with squid
 (and consequently having a need to block it)?

 Jenny


  Date: Sat, 3 Dec 2011 16:37:05 -0500
  Subject: Re: [squid-users] block TOR
  From: charlie@gmail.com
  To: leolis...@solutti.com.br
  CC: bodycar...@live.com; squid-users@squid-cache.org
 
  Sorry for reopen an old post, but a few days ago i tried with this
  solution, and . like magic, all traffic to the Tor net it's
  blocked, just typing this:
  acl tor dst /etc/squid3/tor
  http_access deny tor
  where /etc/squid3/tor it's the file that I download from the page you
  people recommend me !!!
 
  Thanks a lot, this is something that are searching a lot of admin that
  I know, you should put somewhere where are easily to find !!! Thanks
  again !!
 
  Sorry for my english
 
  On Fri, Nov 18, 2011 at 4:17 PM, Carlos Manuel Trepeu Pupo
  charlie@gmail.com wrote:
   Thanks a lot, I gonna make that script to refresh the list. You´ve
   been lot of helpful.
  
   On Fri, Nov 18, 2011 at 3:39 PM, Leonardo Rodrigues
   leolis...@solutti.com.br wrote:
  
   i dont know if this is valid for TOR ... but at least Ultrasurf,
   which i have analized a bit further, encapsulates traffic over
   squid always using CONNECT method and connecting to an IP address.
   It's basically different from normal HTTPS traffic, which also uses
   CONNECT method but almost always (i have found 2-3 exceptions in some
 years) connects to a FQDN.
  
   So, at least with Ultrasurf, i could handle it over squid simply
   blocking CONNECT connections which tries to connect to an IP
   address instead of a FQDN.
  
   Of course, Ultrasurf (and i suppose TOR) tries to encapsulate
   traffic to the browser-configured proxy as last resort. If it finds
   an NAT-opened network, it will always tries to go direct instead of
   through the proxy. So, its mandatory that you do NOT have a
   NAT-opened network, specially on ports
   TCP/80 and TCP/443. If you have those ports opened with your NAT
   rules, than i really think you'll never get rid of those services,
   like TOR and Ultrasurf.
  
  
  
  
   Em 18/11/11 14:03, Carlos Manuel Trepeu Pupo escreveu:
  
   So, like I see, we (the admin) have no way to block it !!
  
   On Thu, Sep 29, 2011 at 3:30 PM, Jenny Leebodycar...@live.com wrote:
  
   Date: Thu, 29 Sep 2011 11:24:55 -0400
   From: charlie@gmail.com
   To: squid-users@squid-cache.org
   Subject: [squid-users] block TOR
  
   There is any way to block TOR with my Squid ?
  
   How do you get it working with tor in the first place?
  
   I really tried for one of our users. Even used Amos's custom
   squid with SOCKS option but no go.
  
   Jenny
  
  
   --
  
  
   Atenciosamente / Sincerily,
   Leonardo Rodrigues
   Solutti Tecnologia
   http://www.solutti.com.br
  
   Minha armadilha de SPAM, NÃO mandem email gertru...@solutti.com.br
   My SPAMTRAP, do not email it
  
  
  
  
  




[squid-users] about rewrite an URL

2011-12-05 Thread Carlos Manuel Trepeu Pupo
I have on my FTP a mirror of the Kaspersky databases (all versions); I use
KLUpdater (from the Kaspersky site) to build it. Now I want to redirect
everyone who asks for the Kaspersky update domain to my FTP instead. How
can I do that ?

Thanks a lot
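
The usual tool for this is a url_rewrite_program helper. A rough sketch
(the kaspersky host pattern and the ftp://10.10.10.5/kaspersky/ mirror
address are only placeholders, adjust them to the real mirror layout):

#!/bin/bash
# Squid passes one request per line ("URL client_ip/fqdn user method ...");
# answer with the URL to fetch instead, or repeat the URL unchanged.
while read url rest; do
    case "$url" in
        http://*kaspersky*/*)
            # keep the requested path, swap the host for the local mirror
            path="${url#http://*/}"
            echo "ftp://10.10.10.5/kaspersky/$path"
            ;;
        *)
            echo "$url"
            ;;
    esac
done

It is wired in with url_rewrite_program /usr/local/bin/kav_mirror.sh in
squid.conf (that path is again a placeholder).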


Re: [squid-users] block TOR

2011-12-03 Thread Carlos Manuel Trepeu Pupo
Sorry for reopening an old post, but a few days ago I tried this
solution and, like magic, all traffic to the Tor net is blocked, just
by typing this:
acl tor dst "/etc/squid3/tor"
http_access deny tor
where /etc/squid3/tor is the file that I downloaded from the page you
people recommended to me !!!

Thanks a lot, this is something that a lot of admins I know have been
searching for; you should put it somewhere easy to find !!! Thanks
again !!

Sorry for my english

On Fri, Nov 18, 2011 at 4:17 PM, Carlos Manuel Trepeu Pupo
charlie@gmail.com wrote:
 Thanks a lot, I gonna make that script to refresh the list. You´ve
 been lot of helpful.

 On Fri, Nov 18, 2011 at 3:39 PM, Leonardo Rodrigues
 leolis...@solutti.com.br wrote:

    i dont know if this is valid for TOR ... but at least Ultrasurf, which i
 have analized a bit further, encapsulates traffic over squid always using
 CONNECT method and connecting to an IP address. It's basically different
 from normal HTTPS traffic, which also uses CONNECT method but almost always
 (i have found 2-3 exceptions in some years) connects to a FQDN.

    So, at least with Ultrasurf, i could handle it over squid simply blocking
 CONNECT connections which tries to connect to an IP address instead of a
 FQDN.

    Of course, Ultrasurf (and i suppose TOR) tries to encapsulate traffic to
 the browser-configured proxy as last resort. If it finds an NAT-opened
 network, it will always tries to go direct instead of through the proxy. So,
 its mandatory that you do NOT have a NAT-opened network, specially on ports
 TCP/80 and TCP/443. If you have those ports opened with your NAT rules, than
 i really think you'll never get rid of those services, like TOR and
 Ultrasurf.




 Em 18/11/11 14:03, Carlos Manuel Trepeu Pupo escreveu:

 So, like I see, we (the admin) have no way to block it !!

 On Thu, Sep 29, 2011 at 3:30 PM, Jenny Leebodycar...@live.com  wrote:

 Date: Thu, 29 Sep 2011 11:24:55 -0400
 From: charlie@gmail.com
 To: squid-users@squid-cache.org
 Subject: [squid-users] block TOR

 There is any way to block TOR with my Squid ?

 How do you get it working with tor in the first place?

 I really tried for one of our users. Even used Amos's custom squid with
 SOCKS option but no go.

 Jenny


 --


        Atenciosamente / Sincerily,
        Leonardo Rodrigues
        Solutti Tecnologia
        http://www.solutti.com.br

        Minha armadilha de SPAM, NÃO mandem email
        gertru...@solutti.com.br
        My SPAMTRAP, do not email it







[squid-users] about include

2011-11-23 Thread Carlos Manuel Trepeu Pupo
Hello !! I want to know if Squid Cache: Version 3.0.STABLE1 supports
include. I tried to use it but it tells me it is not recognized. Why is
this option not available in all versions? It would be helpful for
organizing squid.conf into several small files holding the parameters that
we never, or almost never, touch. Sorry about my english !!


Re: [squid-users] about include

2011-11-23 Thread Carlos Manuel Trepeu Pupo
But do all the newer versions support it, even 3.1.x ?

On Wed, Nov 23, 2011 at 10:36 AM, Matus UHLAR - fantomas
uh...@fantomas.sk wrote:
 On 23.11.11 09:56, Carlos Manuel Trepeu Pupo wrote:

 Hello !! I want to know if Squid Cache: Version 3.0.STABLE1 permits
 include. I tried to use it but tell me not recognized . Why don't
 use this option in all versions? This could be helpful to organize the
 squid.conf in many single files with the parameter that we never or
 almost never touch. Sorry about my english !!

 Try upgrading to newer version, that
 - is supported
 - has less bugs
 - supports include.

 --
 Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
 Warning: I wish NOT to receive e-mail advertising to this address.
 Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
 Save the whales. Collect the whole set.



[squid-users] about SSL client

2011-11-21 Thread Carlos Manuel Trepeu Pupo
Can I make an encrypted connection between my clients and my Squid
server? How can I do this and what version do I need ?

Thanks


Re: [squid-users] block TOR

2011-11-18 Thread Carlos Manuel Trepeu Pupo
So, as I see it, we (the admins) have no way to block it !!

On Thu, Sep 29, 2011 at 3:30 PM, Jenny Lee bodycar...@live.com wrote:


 Date: Thu, 29 Sep 2011 11:24:55 -0400
 From: charlie@gmail.com
 To: squid-users@squid-cache.org
 Subject: [squid-users] block TOR

 There is any way to block TOR with my Squid ?

 How do you get it working with tor in the first place?

 I really tried for one of our users. Even used Amos's custom squid with SOCKS 
 option but no go.

 Jenny


Re: [squid-users] block TOR

2011-11-18 Thread Carlos Manuel Trepeu Pupo
But doesn't this list change frequently?

On Fri, Nov 18, 2011 at 11:38 AM, Andreas Müller amuel...@gmx.de wrote:
 Hello,

 here you'll find a list of all tor nodes. It should be easy to block them.

 http://torstatus.blutmagie.de/

 Andreas

 -Original Message-
 From: Carlos Manuel Trepeu Pupo [mailto:charlie@gmail.com]
 Sent: Friday, 18 November 2011 17:03
 To: Jenny Lee
 Cc: squid-users@squid-cache.org
 Subject: Re: [squid-users] block TOR

 So, like I see, we (the admin) have no way to block it !!

 On Thu, Sep 29, 2011 at 3:30 PM, Jenny Lee bodycar...@live.com wrote:


 Date: Thu, 29 Sep 2011 11:24:55 -0400
 From: charlie@gmail.com
 To: squid-users@squid-cache.org
 Subject: [squid-users] block TOR

 There is any way to block TOR with my Squid ?

 How do you get it working with tor in the first place?

 I really tried for one of our users. Even used Amos's custom squid with
 SOCKS option but no go.

 Jenny





Re: [squid-users] block TOR

2011-11-18 Thread Carlos Manuel Trepeu Pupo
Thanks a lot, I am going to make that script to refresh the list. You've
been a lot of help.
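
The refresh part can be as small as a cron-able script along these lines
(the export URL is a placeholder, use whatever list the status page
offers):

#!/bin/bash
# Rebuild the node list used by "acl tor dst ..." and reload Squid.
LIST_URL="http://torstatus.blutmagie.de/ip_list_all.php"   # placeholder
TMP=$(mktemp)
if wget -q -O "$TMP" "$LIST_URL" && [ -s "$TMP" ]; then
    mv "$TMP" /etc/squid3/tor
    squid3 -k reconfigure
else
    rm -f "$TMP"
fi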

On Fri, Nov 18, 2011 at 3:39 PM, Leonardo Rodrigues
leolis...@solutti.com.br wrote:

    i dont know if this is valid for TOR ... but at least Ultrasurf, which i
 have analized a bit further, encapsulates traffic over squid always using
 CONNECT method and connecting to an IP address. It's basically different
 from normal HTTPS traffic, which also uses CONNECT method but almost always
 (i have found 2-3 exceptions in some years) connects to a FQDN.

    So, at least with Ultrasurf, i could handle it over squid simply blocking
 CONNECT connections which tries to connect to an IP address instead of a
 FQDN.

    Of course, Ultrasurf (and i suppose TOR) tries to encapsulate traffic to
 the browser-configured proxy as last resort. If it finds an NAT-opened
 network, it will always tries to go direct instead of through the proxy. So,
 its mandatory that you do NOT have a NAT-opened network, specially on ports
 TCP/80 and TCP/443. If you have those ports opened with your NAT rules, than
 i really think you'll never get rid of those services, like TOR and
 Ultrasurf.




 Em 18/11/11 14:03, Carlos Manuel Trepeu Pupo escreveu:

 So, like I see, we (the admin) have no way to block it !!

 On Thu, Sep 29, 2011 at 3:30 PM, Jenny Leebodycar...@live.com  wrote:

 Date: Thu, 29 Sep 2011 11:24:55 -0400
 From: charlie@gmail.com
 To: squid-users@squid-cache.org
 Subject: [squid-users] block TOR

 There is any way to block TOR with my Squid ?

 How do you get it working with tor in the first place?

 I really tried for one of our users. Even used Amos's custom squid with
 SOCKS option but no go.

 Jenny


 --


        Atenciosamente / Sincerily,
        Leonardo Rodrigues
        Solutti Tecnologia
        http://www.solutti.com.br

        Minha armadilha de SPAM, NÃO mandem email
        gertru...@solutti.com.br
        My SPAMTRAP, do not email it







[squid-users] block TOR

2011-09-29 Thread Carlos Manuel Trepeu Pupo
Is there any way to block TOR with my Squid ?


Re: [squid-users] Modify the HTML by squid be return to the visitor

2011-08-25 Thread Carlos Manuel Trepeu Pupo
On Thu, Aug 25, 2011 at 1:25 AM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 25/08/11 15:14, 铀煌林 wrote:

 I am running the squid on my proxy server and provide proxy service to
 visitors.
 I want to add some HTML codes such as h1Hi [USERNAME], you are
 useing my proxy/h1 at the first line inside thebody  tag. And
 return this to my visitors. So they can not only see the page they
 want to, but also my notification.
 How could squid do that?

 This is a very, very bad idea. Your customers will NOT enjoy it.

 Before you go any further with this idea. Here are the results from other
 peoples attempts to alter content:
  * Rogers http://lauren.vortex.com/archive/000349.html
  * http://davefleet.com/2007/12/canadian-isp-rogers-hijacking-web-pages/
  * T-Mobile
 http://www.mysociety.org/2011/08/11/mobile-operators-breaking-content/
  * Phorm
 http://www.guardian.co.uk/media/pda/2008/apr/16/moreonispshijackingourweb
  *
 http://serverfault.com/questions/298277/add-frame-window-to-all-websites-for-users-on-network

 Please consider the more socially acceptable alternative of a splash/welcome
 page instead. Keeping in mind that even this portal approach is only
 acceptable for free or cheap services. If your customers are paying for
 access, that is what they are wanting. Not fancy gimicks that interfere with
 their content.

How can I make my MAN show a splash/welcome page ?



 I do some search, but find nothing. Thank a lot for any reply.

 Lookup web page and HTML hijacking methods. ICAP, eCAP.

 Then be prepared to risk legal action by the page copyright owners and
 loosing your customers.
 http://en.wikipedia.org/wiki/Network_neutrality#Legal_situation

 Amos
 --
 Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.14
  Beta testers wanted for 3.2.0.10



[squid-users] about ICP

2011-08-24 Thread Carlos Manuel Trepeu Pupo
What parameters do I need to configure in cache_peer so that I cache all
the requests, even when the parent already has them ?


Re: [squid-users] about the cache and CARP

2011-08-24 Thread Carlos Manuel Trepeu Pupo
On Tue, Aug 23, 2011 at 9:01 AM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 24/08/11 00:47, Carlos Manuel Trepeu Pupo wrote:

 2011/8/23 Amos Jeffriessqu...@treenet.co.nz:

 On 23/08/11 21:37, Matus UHLAR - fantomas wrote:

 On 16.08.11 16:54, Carlos Manuel Trepeu Pupo wrote:

 I want to make Common Address Redundancy Protocol or CARP with two
 squid 3.0 STABLE10 that I have, but here I found this question:

 the CARP that squid supports is the Cache Array Routing Protocol
 http://en.wikipedia.org/wiki/Cache_Array_Routing_Protocol

 - this is something different than Common Address Redundancy Protocol
 http://en.wikipedia.org/wiki/Common_Address_Redundancy_Protocol

 Well, technically Squid supports both. Though we generally don't use the
 term CARP to talk about the OS addressing algorithms. HA, LVS or NonStop
 are
 usually mentioned directly.

 Thanks for the tips, from now I will be careful with the term.



 If the main Squid with 40 GB of cache shutdown for any reason, then
 the 2nd squid will start up but without any cache.

 There is any way to synchronize the both cache, so when this happen
 the 2nd one start with all the cache ?

 You would need something that would synchronize squid's caches,
 otherwise it would eat two times the bandwidth.

 Seconded.

 If the second Squid is not running until the event the cache can be
 safely
 mirrored. Though that method will cause a slow DIRTY startup rather than
 a
 fast not-swap. On 40GB it could be very slow, and maybe worse than an
 empty
 cache.

 NP: the traffic spike from an empty cache decreases in exponential
 proportion to the hit ratio of the traffic. From a spike peak equal to
 the
 internal bandwidth rate.

 PS.  I have a feeling you might have some graphs to demonstrate that
 spike
 effect Carlos. Would you be able to share the images and numeric details?
 I'm looking for details to update the 2002 documentation.

 Thanks to everyone, you guys always helping me !! Now I have a few
 problem with Debian and LVM, until I solve it I can't do it anything.
 But here another idea:

 I put the two squid in cascade and the Master (HA) make the petitions
 first to the second squid and if it down go directly to Internet. The
 both squid will cache all the contents, so will be duplicate the
 contents, but if someone go down, the other one will respond with all
 the content cached.

 It look like this:

 client ---  Server1 ---  Server2 ---  Internet (server1 and server2
 will cache all)
 Server1 down
 client ---  Server2 ---  Internet (server2 will cache all)
 Server2 down
 client ---  Server1 ---  Internet (server2 will cache all)

 What do you think ?

 Looks good.

 Check your cache_peer directives connect-fail-limit=N values. It affects
 whether and how much breakage a clients sees when Server2 goes down. If that
 option is available on your Server1 squid, you want it set relatively low,
 but not so low that random failures disconnect them.

 background-ping option is also useful for recovery once Server2 comes back
 up.

 Amos
 --
 Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.14
  Beta testers wanted for 3.2.0.10


Everything is working fine !! For now they are still in LAB mode, but with
excellent results in the tests. Now I would like to improve the HA
mechanism of my servers. Any other ideas on how to improve the work I have
done so far? (I just run squid with UCARP on Debian)
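
For the record, the Server1 side of that cascade comes down to a couple of
lines like these (the parent address and the connect-fail-limit value are
illustrative, and both options Amos mentioned need a Squid release that
has them):

# on Server1: use Server2 as parent, fall back to going direct if it dies
cache_peer 10.0.0.2 parent 3128 3130 default connect-fail-limit=5 background-ping
prefer_direct off
nonhierarchical_direct off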


Re: [squid-users] about the cache and CARP

2011-08-23 Thread Carlos Manuel Trepeu Pupo
2011/8/23 Amos Jeffries squ...@treenet.co.nz:
 On 23/08/11 21:37, Matus UHLAR - fantomas wrote:

 On 16.08.11 16:54, Carlos Manuel Trepeu Pupo wrote:

 I want to make Common Address Redundancy Protocol or CARP with two
 squid 3.0 STABLE10 that I have, but here I found this question:

 the CARP that squid supports is the Cache Array Routing Protocol
 http://en.wikipedia.org/wiki/Cache_Array_Routing_Protocol

 - this is something different than Common Address Redundancy Protocol
 http://en.wikipedia.org/wiki/Common_Address_Redundancy_Protocol

 Well, technically Squid supports both. Though we generally don't use the
 term CARP to talk about the OS addressing algorithms. HA, LVS or NonStop are
 usually mentioned directly.

Thanks for the tips, from now I will be careful with the term.



 If the main Squid with 40 GB of cache shutdown for any reason, then
 the 2nd squid will start up but without any cache.

 There is any way to synchronize the both cache, so when this happen
 the 2nd one start with all the cache ?

 You would need something that would synchronize squid's caches,
 otherwise it would eat two times the bandwidth.

 Seconded.

 If the second Squid is not running until the event the cache can be safely
 mirrored. Though that method will cause a slow DIRTY startup rather than a
 fast not-swap. On 40GB it could be very slow, and maybe worse than an empty
 cache.

 NP: the traffic spike from an empty cache decreases in exponential
 proportion to the hit ratio of the traffic. From a spike peak equal to the
 internal bandwidth rate.

 PS.  I have a feeling you might have some graphs to demonstrate that spike
 effect Carlos. Would you be able to share the images and numeric details?
 I'm looking for details to update the 2002 documentation.

Thanks to everyone, you guys are always helping me !! Right now I have a
few problems with Debian and LVM; until I solve them I cannot do anything.
But here is another idea:

I put the two squids in cascade and the master (HA) sends the requests
first to the second squid, and if that one is down it goes directly to the
Internet. Both squids will cache all the content, so the content will be
duplicated, but if one goes down, the other will respond with all the
content already cached.

It looks like this:

client -> Server1 -> Server2 -> Internet (server1 and server2 will cache all)
Server1 down
client -> Server2 -> Internet (server2 will cache all)
Server2 down
client -> Server1 -> Internet (server1 will cache all)

What do you think ?

Regards.


 Amos
 --
 Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.14
  Beta testers wanted for 3.2.0.10



Re: [squid-users] about the cache and CARP

2011-08-17 Thread Carlos Manuel Trepeu Pupo
Thanks for your reply. I understand what you told me, but here is another
idea:

If I read it right, with CARP I can define a master PC, so all the traffic
goes to that machine and if it goes down the other one answers, no? Well,
I can make the master always ask the second machine and cache all the
requests. Then I have all the cache duplicated, but if the master goes
down, the second one will have all the cache.

Is that right ?

Regards
Carlos Manuel

2011/8/16 Henrik Nordström hen...@henriknordstrom.net:
 tis 2011-08-16 klockan 16:54 -0400 skrev Carlos Manuel Trepeu Pupo:
 I want to make Common Address Redundancy Protocol or CARP with two
 squid 3.0 STABLE10 that I have, but here I found this question:

 If the main Squid with 40 GB of cache shutdown for any reason, then
 the 2nd squid will start up but without any cache.

 Why will the second Squid start up without any cache?

 If you are using CARP then cache is sort of distributed over the
 available caches, and the amount of cache you loose is proportional to
 the amount of cache space that goes offline.

 However, CARP routing in Squid-3.0 only applies when you have multiple
 levels of caches. Still doable with just two servers but you then need
 two Squid instances per server.

 * Frontend Squids, doing in-memory cache and CARP routing to Cache
 Squids
 * Cache Squids, doing disk caching

 When request routing is done 100% CARP then you loose 50% of the cache
 should one of the two cache servers go down.

 There is also possible hybrid models where the cache gets more
 duplicated among the cache servers, but not sure 3.0 can handle those.

 Regards
 Henrik




[squid-users] about the cache and CARP

2011-08-16 Thread Carlos Manuel Trepeu Pupo
I want to set up Common Address Redundancy Protocol or CARP with two
squid 3.0 STABLE10 servers that I have, but I ran into this question:

If the main Squid with 40 GB of cache shuts down for any reason, then
the 2nd squid will start up, but without any cache.

Is there any way to synchronize the two caches, so that when this happens
the 2nd one starts with all the cache ?

Thanks again for all your help !!!


Re: [squid-users] strange things happend

2011-08-02 Thread Carlos Manuel Trepeu Pupo
I have this config:

delay_pools 1
delay_class 1 1
delay_parameters 8096/8096
delay_access allow all

Just as I described, my squid delivers to every user at 8 KB/s, but when
one user uses a download manager, squid consumes all the bandwidth while
still delivering at 8 KB/s. What could it be ?


2011/8/2 Amos Jeffries squ...@treenet.co.nz:
 On 31/07/11 02:33, Carlos Manuel Trepeu Pupo wrote:

 After many days, and after many fights with my users, now I can see my
 delay_pools work fine (sorry to all the people who tried help me). But
 now I have other problem that describe here:

 I see in my Firewall-Router (Kerio Control 7) that my squid 3.0
 STABLE1 are downloading at 128 kbps (that's all my bandwidth) and in
 real-time I see a lot of simultaneous connection to a site where the
 users are downloading. Then I thought my delay_pools don't work, but
 after many test I check this users, and I can see their speed were the
 speed configured in my squid, so, they have multiple connection but
 have the right speed, however my proxy are consuming all the
 bandwidth.

 Why this could be happen ?

 Thanks again !!

 What config do you have now?

 IIRC last config let _each_ user have 120KBps bandwidth.

 Amos
 --
 Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.14
  Beta testers wanted for 3.2.0.10



[squid-users] delay_class 3 ???

2011-08-02 Thread Carlos Manuel Trepeu Pupo
Hi everyone, thanks again for all the help !

I have many subnets:

10.10.1.0/24
10.10.2.0/24
10.10.3.0/24
10.10.4.0/24
.
.
.
10.10.200.0/24

and I want to control the bandwidth of each one. I think this could be:

acl clients src /etc/squid3/net   # In this file I have all
the subnets

delay_pools 1
delay_class 1 3
delay_parameters 1 491520/491520 16384/16384 -1/-1
delay_access 1 allow clients

Here I just want to restrict each /24 subnet; I do not care about the
individual hosts. I want to restrict each subnet to 16 KB/s, not all of
them to 16 KB/s combined. Does this conf do that ?
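
For orientation: in a class 3 pool the delay_parameters buckets are, in
order, <aggregate> <per-network (third octet, i.e. per /24 here)>
<per-host>, so the line above reads as (my annotation, not a reply from
the list):

delay_parameters 1 491520/491520 16384/16384 -1/-1
#                  480 KB/s total  16 KB/s per /24   no per-host limit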


[squid-users] strange things happend

2011-08-01 Thread Carlos Manuel Trepeu Pupo
After many days, and after many fights with my users, I can now see that
my delay_pools work fine (sorry to all the people who tried to help me).
But now I have another problem, described here:

I see in my Firewall-Router (Kerio Control 7) that my squid 3.0
STABLE1 is downloading at 128 kbps (that is all my bandwidth), and in
real time I see a lot of simultaneous connections to a site the users are
downloading from. At first I thought my delay_pools did not work, but
after many tests I checked these users and I can see their speed is the
speed configured in my squid. So they have multiple connections but the
right speed, yet my proxy is consuming all the bandwidth.

Why could this be happening ?

Thanks again !!


[squid-users] delay_pool

2011-07-29 Thread Carlos Manuel Trepeu Pupo
In my squid 3.0 STABLE1 I have the following configuration:

delay_pools 1

delay_class 1 1
delay_parameters 1 1024/1024
delay_access 1 allow all

But one user is downloading at 120 Kb/s

Why is that ?


Re: [squid-users] delay_pool

2011-07-29 Thread Carlos Manuel Trepeu Pupo
Well, here is the scenario:

a few servers: 10.10.10.0/24
4 admin PCs: 10.10.10.0/24
1 client behind a Kerio: 10.10.10.52
bandwidth: 120 Kb/s

I just want to throttle the Kerio client, but nothing happens !! A few
weeks ago someone told me about the trust in XFF, but I do not understand
it.

I thought that with this delay_access I would control everybody, but
no !!! HELP !!!

delay_pools 1

delay_class 1 1
delay_parameters 1 1024/1024
delay_access 1 allow all

This delay pool is just a test, and it is not working !!!



2011/7/29 Christian Tosta christian.to...@gmail.com:
 acl BadDownloads url_regex -i /etc/squid/rules/bad_downloads.url_regex
 acl BigDownloads rep_header Content-Length ^[3-9]?+[0-9]{0,7}$ # Match sizes
 of 30 MB to infinity
 ### Bandwidth Control
 #
 delay_initial_bucket_level 100
 delay_pools 3
 delay_class 1 2
 delay_parameters 1 -1/-1 -1/-1
 delay_access 1 allow Servers # Free bandwidth for servers
 delay_access 1 deny all
 delay_class 2 2
 delay_parameters 2 12/12 4000/8000 # Big Downloads at 32-64kbps per
 IP
 delay_access 2 allow Downloads
 delay_access 2 allow BigDownloads
 delay_access 2 deny all
 delay_class 3 2
 delay_parameters 3 12/12 2/28000 # Other Access at 160-224kbps
 per IP
 delay_access 3 allow IpsIntranet
 delay_access 3 deny all

 2011/7/29 Carlos Manuel Trepeu Pupo charlie@gmail.com

 In my squid 3.0 STABLE1 I have the following configuration:

 delay_pools 1

 delay_class 1 1
 delay_parameters 1 1024/1024
 delay_access 1 allow all

 But one user are downloading at 120 Kb/s

 Why it's that ?




Re: [squid-users] about delay_pools

2011-07-11 Thread Carlos Manuel Trepeu Pupo
2011/7/11 Amos Jeffries squ...@treenet.co.nz

 On 09/07/11 01:40, Carlos Manuel Trepeu Pupo wrote:

 2011/7/8 Amos Jeffriessqu...@treenet.co.nz:

 On 08/07/11 02:36, Carlos Manuel Trepeu Pupo wrote:

 Hi! I'm using squid 3.0 STABLE1. Here are my delay_pool in the squid.conf

 acl enterprise src 10.10.10.2/32
 acl bad_guys src 10.10.10.52/32
 acl dsl_bandwidth src 10.10.48.48/32

 delay_pools 3

 delay_class 1 1
 delay_parameters 1 25600/25600
 delay_access 1 allow bad_guys
 delay_access 1 deny all

 delay_class 2 1
 delay_parameters 2 65536/65536
 delay_access 2 allow enterprise
 delay_access 2 deny all

 delay_class 3 1
 delay_parameters 3 10240/10240
 delay_access 3 allow dsl_bandwidth
 delay_access 3 deny all


 I think everything was right, but since yesterday I see bad_guys
 downloading from youtube using all my bandwidth !! I have a channel of
 128 Kb in technology ATM. So I hope you can help me !!!

 step 1) please verify that a recent release still has this problem.
 3.0.STABLE1 was obsoleted years ago.

 step 2) check for things like follow_x_forwarded_for allowing them to fake
 their source address. 3.0 series did not check this properly and allows
 people to trivially bypass any IP-based security if you trust that header.

 Amos

 I

 If I deny bad_guys they can't surf. The user it's a client who have
 a Kerio Firewall-Proxy with 10 users. I make the test to visit them
 and stop his service, then the bandwidth go down, so I check they are
 who violate the delay_pool. Now, the question is why this happen?

 I just gave you several possible answers to that.

 Considering that you only listed 10.10.10.52 and Kerio pass on 
 X-Forwarded-For headers, the comment I made about follow_x_forwarded_for 
 becomes a very important thing to know. Trusting XFF from their Kerio means 
 firstly that src 10.10.10.52 does not match and secondly that your delay 
 pools, if it did match, gives each of their 10 internal machines a different 
 pool.

Sorry, but I do not understand how I am giving each of their 10
internal machines a different pool. I read the documentation about
follow_x_forwarded_for. I would appreciate it if you could explain it
better. Thanks
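
For later readers: when Squid is built with follow_x_forwarded_for support
and trusts the header from the Kerio box, the ACLs and delay pools see the
ten internal addresses instead of 10.10.10.52, so every internal machine
fills its own bucket (this is the point Amos is making above). Keeping the
pools keyed on the real TCP client looks roughly like this (directive
names from the follow_x_forwarded_for documentation):

# do not trust the X-Forwarded-For header at all ...
follow_x_forwarded_for deny all
# ... or keep following it, but not for delay pools:
# delay_pool_uses_indirect_client off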


 (Every time this happen I check the destination domain it's youtube
 and they are downloading from there.)

 Another possibility is that it is in fact an upload that you can see. 
 delay_pools in 3.0 only work on bytes fetched _from_ the server. Outgoing 
 bytes are not limited.

 Amos
 --
 Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.14
  Beta testers wanted for 3.2.0.9


Re: [squid-users] about delay_pools

2011-07-08 Thread Carlos Manuel Trepeu Pupo
2011/7/8 Amos Jeffries squ...@treenet.co.nz:
 On 08/07/11 02:36, Carlos Manuel Trepeu Pupo wrote:

 Hi! I'm using squid 3.0 STABLE1. Here are my delay_pool in the squid.conf

 acl enterprise src 10.10.10.2/32
 acl bad_guys src 10.10.10.52/32
 acl dsl_bandwidth src 10.10.48.48/32

 delay_pools 3

 delay_class 1 1
 delay_parameters 1 25600/25600
 delay_access 1 allow bad_guys
 delay_access 1 deny all

 delay_class 2 1
 delay_parameters 2 65536/65536
 delay_access 2 allow enterprise
 delay_access 2 deny all

 delay_class 3 1
 delay_parameters 3 10240/10240
 delay_access 3 allow dsl_bandwidth
 delay_access 3 deny all


 I think everything was right, but since yesterday I see bad_guys
 downloading from youtube using all my bandwidth !! I have a channel of
 128 Kb in technology ATM. So I hope you can help me !!!

 step 1) please verify that a recent release still has this problem.
 3.0.STABLE1 was obsoleted years ago.

 step 2) check for things like follow_x_forwarded_for allowing them to fake
 their source address. 3.0 series did not check this properly and allows
 people to trivially bypass any IP-based security if you trust that header.

 Amos
 --
 Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.14
  Beta testers wanted for 3.2.0.9

I

If I deny bad_guys they cannot surf. The user is a client who has
a Kerio Firewall-Proxy with 10 users behind it. As a test I visited them
and stopped their service; the bandwidth went down, so I confirmed they
are the ones violating the delay_pool. Now, the question is why does this
happen? (Every time this happens I check the destination domain: it is
youtube and they are downloading from there.)


[squid-users] about delay_pools

2011-07-07 Thread Carlos Manuel Trepeu Pupo
Hi! I'm using squid 3.0 STABLE1. Here are my delay_pool in the squid.conf

acl enterprise src 10.10.10.2/32
acl bad_guys src 10.10.10.52/32
acl dsl_bandwidth src 10.10.48.48/32

delay_pools 3

delay_class 1 1
delay_parameters 1 25600/25600
delay_access 1 allow bad_guys
delay_access 1 deny all

delay_class 2 1
delay_parameters 2 65536/65536
delay_access 2 allow enterprise
delay_access 2 deny all

delay_class 3 1
delay_parameters 3 10240/10240
delay_access 3 allow dsl_bandwidth
delay_access 3 deny all


I thought everything was right, but since yesterday I see bad_guys
downloading from youtube using all my bandwidth !! I have a channel of
128 Kb on ATM technology. So I hope you can help me !!!


Re: [squid-users] using reverse squid to manage XMPP

2011-05-19 Thread Carlos Manuel Trepeu Pupo
Sorry, I already know that squid is not what I want, but do you know of
any XMPP relay? I searched and did not find anything.

2011/5/17 Amos Jeffries squ...@treenet.co.nz:
 On Tue, 17 May 2011 11:24:50 -0400, Carlos Manuel Trepeu Pupo wrote:

 Hello, until now, everything that I question here have been solved, so
 here I bring this new situation:

 Debian 6 64 bits, with squid 3.1.12.

 I have only one real IP with Kerio as firewall and in my private net
 one reverse squid to publish my internal pages. I use Kerio because I
 also have email and more services. So my clients wants to publish
 their jabber to internet and I have the idea that the Squid could
 route me the XMPP incoming traffic, because the outgoing traffic pass
 throw the firewall with NAT.

 I have a rule that tell all the incoming traffic in XMPP ports go to
 my squid at 3128 port, but nothing happens, even in the log of squid
 do not appear nothing.

 I make a proof with my Jabber (Openfire) in/out throw Kerio and there
 is no problem, so I'm missing some squid's configuration to do this,
 or Squid it's not the solution to my trouble.


 Can you help me?

 No, squid is an HTTP proxy. XMPP is a completely different protocol.

 Look for an XMPP relay.

 Amos



[squid-users] using reverse squid to manage XMPP

2011-05-17 Thread Carlos Manuel Trepeu Pupo
Hello. Until now, everything that I have asked about here has been solved,
so here I bring this new situation:

Debian 6 64 bits, with squid 3.1.12.

I have only one real IP, with Kerio as the firewall, and in my private net
one reverse squid to publish my internal pages. I use Kerio because I
also have email and other services. Now my clients want to publish
their Jabber to the internet, and my idea was that Squid could route
the incoming XMPP traffic, since the outgoing traffic passes through
the firewall with NAT.

I have a rule that sends all the incoming traffic on the XMPP ports to
my squid on port 3128, but nothing happens; nothing even appears in the
squid log.

I made a test with my Jabber (Openfire) going in/out through Kerio and
there is no problem, so either I am missing some squid configuration to do
this, or Squid is not the solution to my problem.


Can you help me?


Re: [squid-users] Cache peer does not work

2011-05-14 Thread Carlos Manuel Trepeu Pupo
2011/5/14 Dr. Muhammad Masroor Ali mmasroor...@cse.buet.ac.bd:
 Yes, I am actually writing
 cache_peer 172.16.101.3     parent    3128  3130  default

 I was so exasperated that I did not type it correctly.

 Dr. Muhammad Masroor Ali
 Professor
 Department of Computer Science and Engineering
 Bangladesh University of Engineering and Technology
 Dhaka-1000, Bangladesh

 Phone: 880 2 966 5650 (PABX)

 In a world without walls and fences, who needs Windows and Gates?





 On Sat, May 14, 2011 at 10:23 PM, Hasanen AL-Bana hasa...@gmail.com wrote:
 cache_peer 127.0.0.1     parent    3128  3130  default
 the above link points to the same server ! probably incorrect , you
 must use your parent IP address instead (172.16.101.3)

 On Sat, May 14, 2011 at 7:17 PM, Dr. Muhammad Masroor Ali
 mmasroor...@cse.buet.ac.bd wrote:

 Dear All,
 I thought that this would have been straight forward.

 In an Ubuntu machine, I use proxy 172.16.101.3 as the proxy for
 browsing. This does not require any user name or password for access.

 I installed squid3 in this machine and set 127.0.0.1:3128 as proxy in
 the browser. Also in the squid.conf file I have put,
 cache_peer 127.0.0.1     parent    3128  3130  default

 Now when I try to browse, nothing happens. That is, the browser says
 connecting or some such, and after a very long time it fails to open
 the page.

You need to configure the parameter nonhierarchical_direct off.

Then squid will ask the parent first, before going directly to find the
page.
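
Put together with the cache_peer line from earlier in the thread, the
relevant snippet would be something like this (never_direct shown as the
stricter alternative that forces everything through the parent):

cache_peer 172.16.101.3 parent 3128 3130 default
nonhierarchical_direct off
# or, to force absolutely everything through the parent:
# never_direct allow all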



 No error message has been found in the log files. I am really at a
 loss what to do.

 Could somebody please tell me what to do. Thanks in advance.

 Dr. Muhammad Masroor Ali
 Professor
 Department of Computer Science and Engineering
 Bangladesh University of Engineering and Technology
 Dhaka-1000, Bangladesh

 Phone: 880 2 966 5650 (PABX)

 In a world without walls and fences, who needs Windows and Gates?




Re: [squid-users] you cache is running out of filedescriptors in ubuntu

2011-05-11 Thread Carlos Manuel Trepeu Pupo
2011/5/11 Amos Jeffries squ...@treenet.co.nz:
 On Tue, 10 May 2011 16:10:38 -0400, Carlos Manuel Trepeu Pupo wrote:

 Hi I have down all my work, I find some information to fix this but
 tell me modify /etc/default/squid and I don't have this file, what
 could I do? It's urgent  I have squid 3.0 STABLE1

 Create the file if missing. It is an optional user-override config file.

 For 3.0 you need to add the ./configure --with-filedescriptors=NUMBER
 option. Where NUMBER is something big enough not to die under your traffic
 load. You also need to run ulimit -n NUMBER before starting Squid every
 time.


 The FD overflow could also be *two* of those fixed bugs I warned you about
 the other day...

 3.0 have issues with too many persistent connection FD being held. Which can
 overflow the FD limits on certain types of traffic behaviour.

 3.0 and early 3.1 have issues with connection garbage objects being released
 very late in the transaction, which can waste FD.

 Amos


Thanks for everything. This week or the next I will change to the most
recent STABLE version; then I will solve all these problems. Is there
somewhere I can find all the parameters for compiling squid? Thanks
again !!


[squid-users] partitioning Debian 6 for squid instalation

2011-05-11 Thread Carlos Manuel Trepeu Pupo
Hi everyone, I am going to install the latest Squid STABLE version on
Debian 6 64 bits, so I would like to know the recommended hard disk
partitioning !


[squid-users] about chroot

2011-05-11 Thread Carlos Manuel Trepeu Pupo
I am right now installing my Debian 6; next will be installing Squid
3.1.12, so Amos, I suppose we are at peace, lol. I would like to enhance
my security with a chroot, but the information on the internet is not
much; I only see this in all the comments:

if you use an HTTP port less than 1024 and try to reconfigure, you may
get an error saying that Squid can not open the port.

So I want to know if the effort is really worth it, and how on earth I
will reconfigure squid inside a chroot?

Thanks again !!!


[squid-users] you cache is running out of filedescriptors in ubuntu

2011-05-10 Thread Carlos Manuel Trepeu Pupo
Hi, all my work is down. I found some information to fix this, but it
tells me to modify /etc/default/squid and I do not have this file. What
can I do? It is urgent. I have squid 3.0 STABLE1


Re: [squid-users] modify the delay_pools at fly

2011-05-09 Thread Carlos Manuel Trepeu Pupo
2011/5/4 Amos Jeffries squ...@treenet.co.nz:
 On Wed, 4 May 2011 12:38:43 -0400, Carlos Manuel Trepeu Pupo wrote:

 2011/5/4 Amos Jeffries:

 On 05/05/11 03:35, Carlos Manuel Trepeu Pupo wrote:

 I tried in previous post to change the established connection when the
 time of the delay_pool change. Amos give me 3 solution and now I'm
 trying with QoS, but I have this idea:

 If I have 2, 3 or the count of squid.conf that I could need, and with
 one script I make squid3 -k reconfigure. That not finish any active
 connection and apply the changes, what do you think?

 It is favoured by some. Has the slight side effect of forgetting the
 delay
 pool assigned on older Squid versions.

 What do you mean about forget the delay_pool?

 The reconfigure erases old delay pools config and re-creates it.
 As I recall the old code used to leave it at that, with the existing
 connections having no delay pool config set. That got fixed a year or two
 ago to re-calculate all existing requests delay pools after a configure.
 They may get a freshly filled pool suddenly, but stay limited overall.

Please, can you explain it to me better ? My english played a trick on me
and I cannot understand it at all. Thanks



 Remember that I have Ubuntu 10.04 with Squid 3 STABLE1. This night

 10.04 and 3.0.STABLE1? dude!

 lol I'm now deploying Debian 6, but I don't want to install squid
 until I solved my problems.


 when my users gone I gonna try !! Tomorrow I tell you, but if someone
 tried this, please, send the result, so i can use my time in QoS.


 Now I just tried the -k reconfigure, but something strange happen, so
 I backup my squid.conf and in the new one I just put this delay_pool:
 delay_pools 1
 delay_class 1 1
 delay_parameters 1 10240/10240
 delay_access 1 allow all

 With this parameters the speed shouldn't be more than 10 KB, but I can
 see in my firewall the proxy reaches speeds until 32 KB, I guess there
 are just peaks, but if I have 100 clients, and all them make these
 peaks, then my DSL will be saturated.

 I'd put that down to STABLE1. Try again with the newer version in Deb 6.

 Amos




[squid-users] deny_info

2011-05-09 Thread Carlos Manuel Trepeu Pupo
Hi, I'm now using deny_info to personalize the error pages. I have
installed Squid 3.0 STABLE1 (I know it's an old version). Here is an
example of my squid.conf:

acl ext url_regex -i \.exe$
acl ip src 192.168.10.10
acl max maxconn 1
http_access deny ip ext max
# I already create the page in the directory's errors pages.
deny_info ERR_EXT_PAGE max
http_access allow !maxconn

The problem is that the page shown to me is the default denial page and
not mine. What is wrong and how can I fix it ?


Re: [squid-users] deny_info

2011-05-09 Thread Carlos Manuel Trepeu Pupo
2011/5/9 Amos Jeffries squ...@treenet.co.nz:
 On Mon, 9 May 2011 13:07:50 -0400, Carlos Manuel Trepeu Pupo wrote:

 Hi, I'm now using deny_info to personalize the error pages. I have
 installed Squid 3.0 STABLE1 (I know it's an old version). Here is an

 So why for the sake of 6 *major* security vulnerabilities did you do that?
 http://www.squid-cache.org/Advisories

I am making tests for all the new things I will implement, so when
everything works fine I will make the change !!!

 example of my squid.conf:

 acl ext url_regex -i \.exe$
 acl ip src 192.168.10.10
 acl max maxconn 1
 http_access deny ip ext max
 # I already create the page in the directory's errors pages.
 deny_info ERR_EXT_PAGE max
 http_access allow !maxconn

 The problem is that the page that show me it the default of denied and
 not the mine. What's wrong and how could I fixed ?

 Are you sure its being denied by deny ip ext max?

yes, that is the only http_access rule that works with this acl.


I made a few tests and this is the result:

# THIS DOES NOT WORK
acl ext url_regex -i \.exe$
acl ip src 192.168.10.10
acl max maxconn 1
http_access deny ip ext max
# I have already created the page in the errors pages directory.
deny_info ERR_EXT_PAGE max
http_access allow !max

# THIS WORKS
acl ext url_regex -i \.exe$
acl ip src 192.168.10.10
acl max maxconn 1
http_access deny max
# I have already created the page in the errors pages directory.
deny_info ERR_EXT_PAGE max
http_access allow !max

The difference is that the working http_access deny has only one argument,
my ACL, but if I combine it with other ACLs, then it does not show me the
PAGE that I created. Is there any way to solve that?


 Amos
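
A possible workaround sketch (untested on 3.0.STABLE1): the deny_info
page is chosen from the last ACL Squid evaluated on the denying
http_access line, so one option is to key deny_info to the url_regex
ACL and list that ACL last on the deny rule:

acl ext url_regex -i \.exe$
acl ip src 192.168.10.10
# max matches once the client already has more than 1 connection open
acl max maxconn 1
deny_info ERR_EXT_PAGE ext
# ext is last on the line, so ERR_EXT_PAGE is the page Squid selects
http_access deny ip max ext
http_access allow !max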



[squid-users] modify the delay_pools at fly

2011-05-04 Thread Carlos Manuel Trepeu Pupo
I tried in a previous post to change the established connections when the
time of the delay_pool changes. Amos gave me 3 solutions and now I'm
trying with QoS, but I have this idea:

If I have 2, 3 or however many squid.conf files I need, then with
one script I run squid3 -k reconfigure. That does not close any active
connection and applies the changes. What do you think?

Remember that I have Ubuntu 10.04 with Squid 3 STABLE1. This night
when my users are gone I'm going to try it!! Tomorrow I'll tell you, but if someone
has tried this, please send the result, so I can use my time on QoS.
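
A minimal sketch of that idea, assuming two prepared config files (the
paths and file names here are only placeholders) and a Squid new enough
that -k reconfigure keeps limiting the existing connections:

#!/bin/bash
# switch-squid-conf.sh -- hypothetical helper: put the wanted squid.conf
# in place and ask the running Squid to reload it without closing the
# active client connections.
# usage: switch-squid-conf.sh lunch|day
set -e
case "$1" in
  lunch) cp /etc/squid3/squid.conf.lunch /etc/squid3/squid.conf ;;
  day)   cp /etc/squid3/squid.conf.day   /etc/squid3/squid.conf ;;
  *)     echo "usage: $0 lunch|day" >&2; exit 1 ;;
esac
squid3 -k reconfigure

Cron would then call it at the boundary times, for example:

# /etc/crontab entries (example times, Monday to Saturday)
00 12 * * 1-6 root /usr/local/sbin/switch-squid-conf.sh lunch
30 13 * * 1-6 root /usr/local/sbin/switch-squid-conf.sh day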


Re: [squid-users] modify the delay_pools at fly

2011-05-04 Thread Carlos Manuel Trepeu Pupo
2011/5/4 Amos Jeffries squ...@treenet.co.nz:
 On 05/05/11 03:35, Carlos Manuel Trepeu Pupo wrote:

 I tried in a previous post to change the established connections when the
 time of the delay_pool changes. Amos gave me 3 solutions and now I'm
 trying with QoS, but I have this idea:

 If I have 2, 3 or however many squid.conf files I need, then with
 one script I run squid3 -k reconfigure. That does not close any active
 connection and applies the changes. What do you think?

 It is favoured by some. Has the slight side effect of forgetting the delay
 pool assigned on older Squid versions.

What do you mean about forgetting the delay_pool?



 Remember that I have Ubuntu 10.04 with Squid 3 STABLE1. This night

 10.04 and 3.0.STABLE1? dude!

lol I'm now deploying Debian 6, but I don't want to install squid
until I have solved my problems.


 when my users are gone I'm going to try it!! Tomorrow I'll tell you, but if someone
 has tried this, please send the result, so I can use my time on QoS.

 Amos
 --
 Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.7 and 3.1.12.1


Now I just tried the -k reconfigure, but something strange happened, so
I backed up my squid.conf and in the new one I just put this delay_pool:
delay_pools 1
delay_class 1 1
delay_parameters 1 10240/10240
delay_access 1 allow all

With these parameters the speed shouldn't be more than 10 KB/s, but I
can see in my firewall that the proxy reaches speeds of up to 32 KB/s.
I guess these are just peaks, but if I have 100 clients, and all of
them make these peaks, then my DSL will be saturated.
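
For what it is worth, delay_parameters takes restore/max in bytes:
10240/10240 means the pool refills at 10240 bytes per second and the
bucket holds at most 10240 bytes, so after an idle moment a client can
drain one full bucket almost instantly on top of the steady rate. That,
plus the short averaging window of most firewall counters, is one
possible explanation for brief peaks above the configured limit. A
simple way to watch the sustained rate from a client machine (the proxy
address and URL below are only examples) is:

http_proxy=http://192.168.0.1:3128 wget -O /dev/null http://example.com/bigfile.iso

wget prints the current and average download speed, which should settle
near 10 KB/s if the pool is being applied to that request.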


[squid-users] Missing cachemgr.cgi

2011-04-30 Thread Carlos Manuel Trepeu Pupo
I installed Squid 3.0 STABLE1 on my Ubuntu, but now I can't find my
cachemgr.cgi or cachemgr.conf. I searched in:
/usr/lib/squid3
/etc/squid/cachemgr.conf
/usr/lib/cgi-bin/cachemgr3.cgi

I guess it was not installed when I ran apt-get install squid3. Can anyone
help me?
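
On Debian/Ubuntu the CGI front-end is normally shipped in a separate
package rather than inside the squid3 package itself (squid-cgi is the
usual name, but check what your release actually offers). A rough way
to track it down, assuming that package name:

# does any installed package already provide it?
dpkg -S cachemgr.cgi
# find the package that ships it
apt-cache search cachemgr
# install the CGI front-end (package name may differ by release)
apt-get install squid-cgi
# see where the files were placed
dpkg -L squid-cgi | grep -i cachemgr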


[squid-users] About the delay_pool in squid 3.0 STABLE1

2011-04-29 Thread Carlos Manuel Trepeu Pupo
Hi!! I have had Squid 3.0 STABLE1 installed on UBUNTU for a few months
now, and I have been using the delay_pool with a fixed speed without any
problems. Now I want to let the users download at a higher speed during
the lunch hour, so I put in other delay_pools for that time, and a
different speed for the pages that do not match the restriction. But I
can see that when a connection starts during that time and the lunch
hour is over, the speed does not decrease. I think it's that an
established connection can't be modified, but I hope squid does not work
like this. Here I send part of my squid.conf.

acl special_client src 192.168.0.10/32
acl client src 192.168.0.20/32
# page_control are pages like megaupload, hotfile, and others
acl page_control url_regex -i /etc/squid3/page_control
# ext_control are extensions like .rar, .iso, and many others
acl ext_control url_regex -i /etc/squid3/ext_control
acl happy_hours time MTWHFA 12:00-13:30

delay_pools 4
delay_class 1 1
delay_parameters 1 51200/51200
delay_access 1 allow special_client page_control happy_hours
delay_access 1 allow special_client ext_control happy_hours
delay_access 1 deny all
delay_class 2 1
delay_parameters 2 3/3
delay_access 2 allow client page_control happy_hours
delay_access 2 allow client ext_control happy_hours
delay_access 2 deny all
delay_class 3 1
delay_parameters 3 9/9
delay_access 3 allow client page_control !happy_hours
delay_access 3 allow special_client ext_control !happy_hours
delay_access 3 deny all
delay_class 4 1
delay_parameters 4 120/120
delay_access 4 allow client !page_control !ext_control
delay_access 4 allow special_client !page_control !ext_control
delay_access 4 deny all


Waiting for your answer !!!
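
As a point of reference for pool 1: delay_parameters 1 51200/51200
allows 51200 bytes per second sustained, i.e. 51200 / 1024 = 50 KB/s,
plus at most one extra 51200-byte burst once the aggregate bucket has
refilled during idle time.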