[squid-users] i would like to know if someone managed to use ubuntu and TPROXY?

2011-11-03 Thread Eliezer Croitoru

I would like to know if someone has managed to use Ubuntu with TPROXY.
I want to write a detailed manual on how to build a Gentoo- and
Ubuntu-based TPROXY Squid server in both bridge and router mode.


If anyone can help me and point me to a working guide I will be more than happy.


For now I have managed to build an Ubuntu machine as a bridge with TPROXY,
but I am having trouble with it:
sometimes traffic slips past the iptables rules and is simply bridged
instead of being intercepted.
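
For context, a minimal sketch of the rule set such a bridge setup usually
needs (the Squid port 3129 and the exact match options are assumptions;
the ebtables/bridge-nf part is what decides whether packets reach iptables
at all or are simply bridged):

# hand bridged port-80 traffic up to the IP layer so iptables can see it
ebtables -t broute -A BROUTING -p IPv4 --ip-proto tcp --ip-dport 80 -j redirect --redirect-target DROP
sysctl -w net.bridge.bridge-nf-call-iptables=1

# standard TPROXY mangle rules plus the policy routing for marked packets
iptables -t mangle -N DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY --tproxy-mark 0x1/0x1 --on-port 3129
ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100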


Thanks
Eliezer


Re: [squid-users] Reverse Proxy + Exchange OWA issue

2011-11-04 Thread Eliezer Croitoru

On 04/11/2011 16:55, Rick Chisholm wrote:

clear your cache, then you get the pause again.  The light versions of OWA

Did you try the 3.2 branch, say 3.2.0.8?

Eliezer


Re: [squid-users] Reverse Proxy + Exchange OWA issue

2011-11-04 Thread Eliezer Croitoru

On 04/11/2011 17:41, Rick Chisholm wrote:

3.2 does not appear to be available in the BSD ports tree at this time.


Compile it yourself; it's pretty simple.
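
A rough sketch of a source build, after downloading and unpacking the 3.2
tarball from www.squid-cache.org (prefix and options are just examples;
add whatever your setup needs):

cd squid-3.2.0.8
./configure --prefix=/usr/local/squid
make
make install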

Eliezer



On Fri, November 4, 2011 11:39 am, Eliezer Croitoru wrote:

On 04/11/2011 16:55, Rick Chisholm wrote:

clear your cache, then you get the pause again.  The light versions of
OWA

Did you try the 3.2 branch, say 3.2.0.8?

Eliezer








Re: [squid-users] Squid box dropping connections

2011-11-17 Thread Eliezer Croitoru

On 17/11/2011 16:11, Nataniel Klug wrote:

 Hello all,

 I am facing a very difficult problem in my network. I am
using a layout like this:

(internet) ==  ==  == [clients]

 I am running CentOS v5.1 with Squid-2.6 STABLE22 and Tproxy
(cttproxy-2.6.18-2.0.6). My kernel is kernel-2.6.18-92. This is the most
reliable setup I ever made running Squid. My problem is that I am having
serious connections troubles when running squid over 155000 conntrack
connections.

 From my clients I start losing packets to router when the
connections go over 155000. My kernel is prepared to run over 260k
connections. I am sending a screenshot about the problem where I have 156k
connections and I started connections on port 80 to go through squid (bellow
I will post every rule I am using for my firewall and transparent
connections, also I will send my squid.conf).

http://imageshack.us/photo/my-images/12/problemsg.png/

 The configuration I am using:

/etc/firewall/firewall
#!/bin/bash
IPT="/sbin/iptables"
RT="/sbin/route"
SYS="/sbin/sysctl -w"
$IPT -F
$IPT -t nat -F
$IPT -t nat -X
$IPT -t mangle -F
$IPT -t mangle -X
$IPT -t filter -F
$IPT -t filter -X
$IPT -X
$IPT -F INPUT
$IPT -F FORWARD
$IPT -F OUTPUT
$SYS net.ipv4.ip_forward=1
$SYS net.ipv4.ip_nonlocal_bind=1
$SYS net.ipv4.netfilter.ip_conntrack_max=262144

/etc/firewall/squid-start
#!/bin/bash
IP="/sbin/ip"
IPT="/sbin/iptables"
FWDIR="/etc/firewall"
/etc/firewall/firewall
$IPT -t tproxy -F
for i in `cat $FWDIR/squid-no-dst`
do
$IPT -t tproxy -A PREROUTING -d $i -j ACCEPT
done
for i in `cat $FWDIR/squid-no-src`
do
$IPT -t tproxy -A PREROUTING -s $i -j ACCEPT
done
$IPT -t tproxy -A PREROUTING -p tcp --dport 80 -j TPROXY --on-port 3128

/etc/squid/squid.conf
http_port 3128 tproxy transparent
tcp_outgoing_address XXX.XXX.144.67
icp_port 0

cache_mem 128 MB

cache_swap_low 92
cache_swap_high 96
maximum_object_size 100 KB
cache_replacement_policy heap LFUDA
memory_replacement_policy heap LFUDA

cache_dir aufs /cache/01/01 47000 64 256
cache_dir aufs /cache/01/02 47000 64 256
cache_dir aufs /cache/02/01 47000 64 256
cache_dir aufs /cache/02/02 47000 64 256
cache_dir aufs /cache/03/01 47000 64 256
cache_dir aufs /cache/03/02 47000 64 256
#--[ Max Usage : by Drive ]--#
# sdb1 ( max = 228352 / usg = 95400 (41,77%) ]
# sdb1 ( max = 228352 / usg = 95400 (41,77%) ]
# sdb3 [ max = 234496 / usg = 95400 (40,68%) ]
#-- [ Max HDD sdb Usage ]--#
# sdb [ max = 923994 / aloc = 691200 (74,81%) ]

cache_store_log none
access_log /usr/local/squid/var/logs/access.log squid
client_netmask 255.255.255.255
ftp_user sq...@cnett.com.br

diskd_program /usr/local/squid/libexec/diskd
unlinkd_program /usr/local/squid/libexec/unlinkd

error_directory /usr/local/squid/share/errors/Portuguese

dns_nameservers XXX.XXX.144.14 XXX.XXX.144.6

acl all src 0.0.0.0/0
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8
acl QUERY urlpath_regex cgi-bin \?
acl SSL_ports port 443
acl Safe_ports port 80 21 443 70 210 280 488 591 777 1025-65535
acl CONNECT method CONNECT

acl ASN53226_001 src XXX.XXX.144.0/22
acl ASN53226_002 src XXX.XXX.148.0/22

http_access allow ASN53226_001
http_access allow ASN53226_002

http_access allow localhost
http_access allow to_localhost

cache deny QUERY

http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access deny all
icp_access deny all

cache_mgr supo...@cnett.com.br
cache_effective_user squid
cache_effective_group squid
visible_hostname cache
unique_hostname 02.cache

 When I first start linux and there is just a few connections
going through the squid box it works just fine. When the connections go over
155k the problems began. Is there anything I can do to solve the problem?

Well, this is one of the big problems with the conntrack layer.
What you can try is to shorten the TCP established timeout, e.g.:
sysctl -w net.ipv4.netfilter.ip_conntrack_tcp_timeout_established=3600
because the default timeout for established connections is very long
(days, on stock kernels), which can keep a huge number of idle entries in
the connection-tracking table and cause a lot of trouble in many cases of
open connections.
And by the way, do you really have 155K connections? That seems like a lot.
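
A quick way to see how close you are to the conntrack limit and to apply
the change (the /proc paths below are for the old ip_conntrack module used
by that kernel; newer kernels use nf_conntrack instead):

# current number of tracked connections vs. the configured maximum
cat /proc/sys/net/ipv4/netfilter/ip_conntrack_count
cat /proc/sys/net/ipv4/netfilter/ip_conntrack_max
# shorten the established-connection timeout (value in seconds)
sysctl -w net.ipv4.netfilter.ip_conntrack_tcp_timeout_established=3600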


hope to hear more about the situation.

Regards Eliezer


--
Att,

Nataniel Klug






Re: [squid-users] Squid 3.2.0.14 beta is available

2011-12-13 Thread Eliezer Croitoru

Why don't you use interception/transparent mode instead of TPROXY?
For your setup it seems like the perfect fit.
I'm using a source-range setup like this:

iptables -t nat -A PREROUTING -i eth1 -p tcp -m tcp --dport 80 -m iprange --src-range 192.168.0.0-192.168.0.190 ! -d 192.168.0.0/16 -j REDIRECT --to-ports 3128

with

http_port 192.168.0.1:3128 intercept

and it works like a charm.

Regards
Eliezer



On 13/12/2011 14:53, Saleh Madi wrote:

Thanks Marcello for your reply. We have a Linux PPPoE server working for
1000 clients; how can I use WPAD (web proxy autodiscovery protocol) for
them?

Thanks and Best Regards,
Saleh


Il 13/12/2011 13:14, Saleh Madi ha scritto:

Thanks Henrik for your reply, but when you have 1000 clients it's
difficult to get them all to configure their browser with a proxy. I
think a redirect rule via policy-based routing or another redirect
method is easier than configuring the client browsers. Have you any idea
what is best to do for the 1000 clients?

Thanks and Best Regards,
Saleh



My 2 (euro) cents, FWIW:

- WPAD (web proxy autodiscovery protocol)
- if you're using active directory, take advantage of group policy (GPO)

Google Is Your Friend (TM)

:-)

--
Marcello Romani
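
For WPAD, the moving parts boil down to serving a tiny PAC file named
wpad.dat that the clients discover automatically; a rough sketch
(hostnames, paths and the proxy address are assumptions, not from this
thread):

# 1. DNS: an A record "wpad" in the clients' search domain pointing at a small web server
# 2. DHCP: option 252 set to http://wpad.example.net/wpad.dat
# 3. the web server serves wpad.dat with MIME type application/x-ns-proxy-autoconfig
echo 'function FindProxyForURL(url, host) { return "PROXY proxy.example.net:3128"; }' > /var/www/wpad.dat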








Re: [squid-users] After reloading squid3, takes about 2 minutes to serve pages?

2011-12-19 Thread Eliezer Croitoru

On 19/12/2011 19:12, Terry Dobbs wrote:
It's an old issue; from Squid 3.1 to 3.2 there is nothing yet, as far as
I know, that solves it.


Regards
Eliezer

Hi All.

I just installed squid3 after running squid2.5 for a number of years. I
find after reloading squid3 and trying to access the internet on a proxy
client it takes about 2 minutes until pages load. For example, if I
reload squid3 and try to access a page, such as www.tsn.ca it will try
to load for a minute or 2 until it finally displays. I understand I
shouldn't need to reload squid3 too much, but is there something I am
missing to make this happen? I am not using it for cacheing just for
monitoring/website control. Here is the log from when I was trying to
access the mentioned site:

1324310991.377  2 192.168.70.97 TCP_DENIED/407 2868 GET
http://www.tsn.ca/ - NONE/- text/html [Accept: image/gif, image/jpeg,
image/pjpeg, image/pjpeg, application/x-shockwave-flash,
application/xaml+xml, application/vnd.ms-xpsdocument,
application/x-ms-xbap, application/x-ms-application,
application/vnd.ms-excel, application/vnd.ms-powerpoint,
application/msword, */*\r\nAccept-Language: en-us\r\nUser-Agent:
Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR
2.0.50727; InfoPath.1)\r\nAccept-Encoding: gzip,
deflate\r\nProxy-Connection: Keep-Alive\r\nHost: www.tsn.ca\r\nCookie:
TSN=NameKey={ffc1186b-54bb-47ef-b072-097f5fafc5f2};
__utma=54771374.1383136889.1323806167.1324305925.1324309890.7;
__utmz=54771374.1323806167.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(n
one); __utmb=54771374.1.10.1324309890\r\n] [HTTP/1.0 407 Proxy
Authentication Required\r\nServer: squid/3.0.STABLE19\r\nMime-Version:
1.0\r\nDate: Mon, 19 Dec 2011 16:09:51 GMT\r\nContent-Type:
text/html\r\nContent-Length: 2485\r\nX-Squid-Error:
ERR_CACHE_ACCESS_DENIED 0\r\nProxy-Authenticate: NTLM\r\n\r]
1324310991.447  5 192.168.70.97 TCP_DENIED/407 3244 GET
http://www.tsn.ca/ - NONE/- text/html [Accept: image/gif, image/jpeg,
image/pjpeg, image/pjpeg, application/x-shockwave-flash,
application/xaml+xml, application/vnd.ms-xpsdocument,
application/x-ms-xbap, application/x-ms-application,
application/vnd.ms-excel, application/vnd.ms-powerpoint,
application/msword, */*\r\nAccept-Language: en-us\r\nUser-Agent:
Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR
2.0.50727; InfoPath.1)\r\nAccept-Encoding: gzip,
deflate\r\nProxy-Connection: Keep-Alive\r\nCookie:
TSN=NameKey={ffc1186b-54bb-47ef-b072-097f5fafc5f2};
__utma=54771374.1383136889.1323806167.1324305925.1324309890.7;
__utmz=54771374.1323806167.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(n
one); __utmb=54771374.1.10.1324309890\r\nProxy-Authorization: NTLM
TlRMTVNTUAABB4IIogAFASgKDw==\r\nHost:
www.tsn.ca\r\n] [HTTP/1.0 407 Proxy Authentication Required\r\nServer:
squid/3.0.STABLE19\r\nMime-Version: 1.0\r\nDate: Mon, 19 Dec 2011
16:09:51 GMT\r\nContent-Type: text/html\r\nContent-Length:
2583\r\nX-Squid-Error: ERR_CACHE_ACCESS_DENIED 0\r\nProxy-Authenticate:
NTLM
TlRMTVNTUAACEgASADAFgomid3FHZLqI7WsAAIoAigBCQwBPAE4A
VgBFAEMAVABPAFIAAgASAEMATwBOAFYARQBDAFQATwBSAAEACgBTAFEAVQBJAEQABAAmAGEA
cwBzAG8AYwBpAGEAdABlAGQAYgByAGEAbgBkAHMALgBjAGEAAwA0AHUAYgB1AG4AdAB1AC4A
YQBzAHMAbwBjAGkAYQB0AGUAZABiAHIAYQBuAGQAcwAuAGMAYQAA\r\n\r]




[squid-users] squid 3.2.0.14 "request-too-large"

2011-12-21 Thread Eliezer Croitoru

I updated to Squid 3.2.0.14 and I'm getting a "request-too-large" error
page while trying to use Facebook.

After searching for anything similar in Squid's history, what I did was
add

reply_header_max_size 30 KB

to the config, and it works, but Squid became very slow, so I switched
back to 3.2.0.8.


Thanks
Eliezer


Re: [squid-users] squid 3.2.0.14 "request-too-large"

2011-12-21 Thread Eliezer Croitoru

On 22/12/2011 03:25, Amos Jeffries wrote:

On 22/12/2011 1:13 p.m., Eliezer Croitoru wrote:

updated to squid 3.2.0.14 and then
i'm getting a page of "request-too-large" while trying to use facebook.

so what i did after searching for anything like that in the past of
squid to add
reply_header_max_size 30 KB
to the config and it works but squid is moving so slow so i switched
back to 3.2.0.8.

Thanks
Eliezer


Strange. You just *decreased* one of the size limits to make something
"too big" get through.

Other than the name of the error page being displayed, what are the
status code and page contents describing situation?

It will take me some time to reproduce the error.
Decreased? What is the default limit?

Thanks
Eliezer


Amos





Re: [squid-users] Client IP vs Proxy IP

2011-12-22 Thread Eliezer Croitoru

On 23/12/2011 05:33, Chia Wei LEE wrote:

Hi

Thanks for the advice.
This is because we are selling the Static IP to the user, user should use
their own public IP to server the internet instead using the proxy server
IP.
I really recommend using Ubuntu as the OS for that, since I have had much
better experience with it than with the other OSes.


Regards
Eliezer



Cheers
Chia Wei






On 23-12-2011 10:19 AM, Amos Jeffries wrote to squid-users@squid-cache.org
(Subject: Re: [squid-users] Client IP vs Proxy IP):
On 23/12/2011 3:04 p.m., Chia Wei LEE wrote:

Hi

Since my Solaris proxy server does not support TPROXY, is there any
alternative way to solve this?



Enable the forwarded_for header to be sent by Squid. It only works for
websites which look up and use the header, but that is the only other way.

So, why are you doing this anyway?

Amos
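
For reference, in squid.conf that is simply the following ("on" is also
the default):

# add the original client address in an X-Forwarded-For header
forwarded_for on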







Re: [squid-users] squid 3.2.0.14 "request-too-large"

2011-12-23 Thread Eliezer Croitoru

What I'm getting is:
ERROR
The requested URL could not be retrieved

Invalid Request error was encountered while trying to process the request:

GET / HTTP/1.1
Host: facebook.com
User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:9.0.1) Gecko/20100101 
Firefox/9.0.1

Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip, deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Proxy-Connection: keep-alive
Cookie: datr=6KKOTtGQTLeCfMvmitDZ4tXj; lu=RAkd5Kk1cX2qtayhUn9zBR8Q; 
p=1; c_user=1029903891; 
xs=60%3A66e234f1071c788d08062de6151741a3%3A0%3A1324408819; 
act=1324646317408%2F81%3A2; 
presence=EDvFA22A2EtimeF1324646317EstateFDutF1324646317862EvisF1EvctF0H0EblcF0EsndF1ODiFA21B00553145243A2CAcDiFA2223552591049184A2EyFA2gA2CQ1324646145EsF0CEchFDp_5f1029903891F11CEblF_7bCC


Some possible problems are:

Request is too large.

Content-Length missing for POST or PUT requests.

Illegal character in hostname; underscores are not allowed.

HTTP/1.1 "Expect:" feature is being asked from an HTTP/1.0 software.

Your cache administrator is webmaster.

Generated Fri, 23 Dec 2011 13:42:05 GMT by c (squid/3.2.0.14)

or

ERROR
The requested URL could not be retrieved

Invalid Request error was encountered while trying to process the request:

GET / HTTP/1.1
Host: www.yahoo.com
User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:9.0.1) Gecko/20100101 
Firefox/9.0.1

Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip, deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Proxy-Connection: keep-alive
Cookie: B=fm82vh5797m2i&b=3&s=3o; 
fpc=d=IHP6TOQo94RKr1oKY.VClSUJRHXvm.gYtjAMfywGC2LLBqajEm8m_Vii7NRGTL6MgADNu0QhuhaPqq5ixWf0T0yKF162SJlg7X7PJODyVRKqEmuJ2n8Tb8RRFOZ8DuuLWlJ5y9H8wooXEg3h0HeEGk6E5rEKoBLlRXGp6yrzc3dYr_cg6Z72b8nCZ54HrWq5rl_Pi3E-&v=2; 
FPS=dl; 
CH=AgMNlQZ6NYZO7IsgACPoIAAmpSAAO58gAASrIAAmISAABvYgACXgIAAIUyAAKUkgADAX; 
BA=ba=2292&ip=79.181.17.57&t=1324486276

Pragma: no-cache
Cache-Control: no-cache

Some possible problems are:

Request is too large.

Content-Length missing for POST or PUT requests.

Illegal character in hostname; underscores are not allowed.

HTTP/1.1 "Expect:" feature is being asked from an HTTP/1.0 software.

Your cache administrator is webmaster.

Generated Fri, 23 Dec 2011 13:46:28 GMT by c (squid/3.2.0.14)

hope to make it work somehow.

Thanks
Eliezer

On 22/12/2011 2:35 p.m., Eliezer Croitoru wrote:

On 22/12/2011 03:25, Amos Jeffries wrote:

On 22/12/2011 1:13 p.m., Eliezer Croitoru wrote:

updated to squid 3.2.0.14 and then
i'm getting a page of "request-too-large" while trying to use facebook.

so what i did after searching for anything like that in the past of
squid to add
reply_header_max_size 30 KB
to the config and it works but squid is moving so slow so i switched
back to 3.2.0.8.

Thanks
Eliezer


Strange. You just *decreased* one of the size limits to make something
"too big" get through.

Other than the name of the error page being displayed, what are the
status code and page contents describing situation?

It will take me some time to reproduce the error.
Decreased? What is the default limit?


64KB on headers, no limit on bodies.

Amos
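
For reference, the related squid.conf directives and their stated
defaults (just a sketch; raising request_header_max_size, rather than
lowering reply_header_max_size, is the usual knob when big cookies trip
the limit, but that is an assumption about this particular case):

request_header_max_size 64 KB
reply_header_max_size 64 KB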




Re: [squid-users] squid 3.2.0.14 "request-too-large"

2011-12-23 Thread Eliezer Croitoru

On 23/12/2011 16:28, Amos Jeffries wrote:

On 24/12/2011 2:47 a.m., Eliezer Croitoru wrote:

what im getting is:
ERROR
The requested URL could not be retrieved

Invalid Request error was encountered while trying to process the
request:



Er, these are: "Invalid Request error was encountered..."

With requests in origin server format.

I will take a guess that these are status code 409 (Conflict)? In which
case you will find security ALERT in cache.log and need to revaluate the
interception and/or DNS setup carefully.

I have used it as a direct forward proxy, with no interception at all.
I will try it again later and hope to report back soon.

Thanks
Eliezer



Amos




Re: [squid-users] Use parent proxy for some domains only

2011-12-25 Thread Eliezer Croitoru

On 25/12/2011 12:20, S.R. wrote:

I am trying to set up squid in proxy-only (no caching) mode, with
direct access except for specified domains. For specified domains I
want squid to forward the request to another proxy server. This second
part is not working! Here are the relevant config lines:

   cache_peer secondproxy.com parent 3128 0 proxy-only no-query
   cache_peer_domain secondproxy.com specialdomain1.com specialdomain2.com
   cache deny all
   prefer_direct on
   always_direct allow all

The proxy always ignores the cache_peer directive. I have Squid
2.6.STABLE21 on CentOS linux 2.6.18 kernel. Please help!

Add some ACLs like this:

acl proxy1 dstdomain secondproxy.com specialdomain1.com specialdomain2.com
always_direct deny proxy1
always_direct allow all
never_direct allow proxy1

Just to clarify: the *_direct directives control whether the proxy itself
fetches the object from the origin host. So we must tell Squid not to go
direct for these domains, and the alternative to going direct is to use
the parent proxy.


I'm using it for the same purpose, so one proxy caches specific dynamic
content with store_url_rewrite and the other runs a redirector using
SquidGuard. A combined sketch follows below.
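
Putting it together with the cache_peer line from the original mail, a
minimal sketch could be (the cache_peer_access lines are an addition, to
keep unrelated traffic away from the peer):

cache_peer secondproxy.com parent 3128 0 proxy-only no-query
acl proxy1 dstdomain specialdomain1.com specialdomain2.com
always_direct deny proxy1
always_direct allow all
never_direct allow proxy1
cache_peer_access secondproxy.com allow proxy1
cache_peer_access secondproxy.com deny all
cache deny all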


Regards
Eliezer




Re: [squid-users] Error 502 - Bad Gateway - www.allplanlernen.de

2011-12-28 Thread Eliezer Croitoru

On 28/12/2011 17:58, Helmut Hullen wrote:
Works on squid 3.2.0.8
Eliezer


Hallo, Mario,

Du meintest am 28.12.11:


i am running Squid 3.1.0.14 and when i try to access
www.allplanlernen.de i get a 502 error.


Same here (squid 3.2.0.14):

 502 Bad Gateway
 nginx/0.7.67


It works without squid.


Same here, too.


Does anyone know why?


Seems to be a malformed web site.

I've tested (without squid)

 lynx www.allplanlernen.de/themen/impressum.html

and got the informations; looking with a browser (behind squid) onto the
side gets

 Internal Error (check logs)

and that's an error message for the website administrator, not for me.

Viele Gruesse!
Helmut




Re: [squid-users] squid 3.2.0.14 "request-too-large"

2011-12-29 Thread Eliezer Croitoru
 entries above these.
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .               0       20%     4320

#end of squid.conf

I compiled 3.1.18 on the same machine and it works fine.
I will try later to compile 3.2.0.13 and the daily-updated 3.2.0.14.
Which should I start with?

Eliezer

On 22/12/2011 02:13, Eliezer Croitoru wrote:

updated to squid 3.2.0.14 and then
i'm getting a page of "request-too-large" while trying to use facebook.

so what i did after searching for anything like that in the past of
squid to add
reply_header_max_size 30 KB
to the config and it works but squid is moving so slow so i switched
back to 3.2.0.8.

Thanks
Eliezer




Re: [squid-users] Problem Compiling Squid 1.1.8 (noob?)

2012-01-03 Thread Eliezer Croitoru
As far as I have seen, that almost exact error means you don't have all
the build dependencies installed for the compilation to complete.

What Linux version are you using?
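
If it is a Debian or Ubuntu box, something like this usually pulls the
build dependencies in (it assumes deb-src lines are enabled in
sources.list):

apt-get build-dep squid3
# libssl-dev is needed on top of that when adding --enable-ssl
apt-get install build-essential libssl-dev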

Eliezer

On 31/12/2011 13:02, someone wrote:

Well I copied the configuration from my at the time, current squid,
3.1.6, which doesnt isnt built with ssl support, which is why im trying
to build 3.1.8, ok and I got all my configure directives  from the
output of the squid3 -v command since im a total noob and figured that
would be easiest and I would just add the --enable-ssl option to it.

But I also ended up copying that as well. So hope that clarifies. But
still cannot build, I mean it almost finishes but this error is not
gunna fly.

On Sat, 2011-12-31 at 10:31 +0100, Helmut Hullen wrote:

Hallo, someone,

Du meintest am 30.12.11:


Problem Compiling Squid 1.1.8



deviant:/home/devadmin/source/squid-3.1.18# ./configure



-O2' --with-squid=/build/buildd-squid3_3.1.6-1.2+squeeze1-i386-_y3HlV
/squid3-3.1.6


Just for curiosity: which squid version do you really mean?

Viele Gruesse!
Helmut







[squid-users] how about releasing the major "supported" linux distros results? and what about dynamic content sites?

2012-01-03 Thread Eliezer Croitoru

I have a couple of things.

I have spent a couple of days testing Squid on various Linux distros,
such as CentOS 5.7/6.0/6.2, Fedora 15/16, Ubuntu 10.04.3/11.10 and
Gentoo (on the latest Portage), using TPROXY and forward proxy
(all i686 except Ubuntu x64).
I couldn't find any solid information on running Squid on these systems,
so I researched it myself.

I used Squid 3.1.18, 3.2.0.8, 3.2.0.13 (latest daily source) and 3.2.0.14.
On CentOS and Ubuntu, Squid 3.2.0.14 was unable to work smoothly in
interception mode, but in regular forward mode it was fine.


On the CentOS 5 branch there is no TPROXY support built into the stock
kernel, so you must recompile the kernel to get it.
On the CentOS 6 branch TPROXY support is built into the stock kernel, but
nothing I did (disabling SELinux, loading modules and some other things)
made TPROXY work.
Because I started with CentOS I thought I was doing something wrong, but
after checking Ubuntu, Fedora and Gentoo I concluded that the problem is
with CentOS 6's TPROXY or related pieces, not with Squid.


Also, I didn't find any README or other documentation that explains the
logic of TPROXY well enough that problems can be debugged when they occur.


After all this, what do you think about releasing a list of "supported"
Linux distros that seem to work fine with every Squid release?

I'm talking about the major distros, not about Puppy Linux or DSL.

This is the place to vote for the Linux distro you would like Squid to be
tested on.



Another subject:
which dynamic-content or otherwise uncacheable sites would you want Squid
to be able to cache?


Say YouTube, MS updates and things like that.
I know that cachevideo is available, but I think that with some effort we
can build a basic approach that will benefit all of us.


Votes for sites will be gladly accepted.

(I will be glad to explain, to anyone who wants to understand it, why
these sites' objects are uncacheable in many cases,

and also how and why Squid otherwise does such a great job.)


Regards
Eliezer







Re: [squid-users] how about releasing the major "supported" linux distros results? and what about dynamic content sites?

2012-01-04 Thread Eliezer Croitoru

On 04/01/2012 11:15, Amos Jeffries wrote:

On 4/01/2012 5:32 p.m., Eliezer Croitoru wrote:

i have couple of things things:
i have made a long way of testing squid for a couple of days on
various versions of linux distors such as centos 5.7\6.0\6.2 fedora
15\16 ubuntu 10.04.3\11.10 gentoo(on the last portage) using tproxy
and forward proxy. (all i686 but ubuntu x64)
i couldnt find any solid info on squid to work with these systems so i
researched.
i have used squid 3.1.18 3.2.0.8 3.2.0.13(latest daily source) 3.2.0.14.
on centos and ubuntu squid 3.2.0.14 was unable to work smoothly on
interception mode but on regular forward mode it was fine.

on the centos 5 branch there is no tproxy support built-in the regular
kernel so you must recompile the kernel to have tproxy support.
on the centos 6 branch there is tproxy support built-in the basic
kernel but nothing i did (disabling selinux, loading modules and some
other stuff) didnt make the tproxy to work.
because i started with centos i throughout that i'm doing something
wrong but after checking ubuntu, fedora and gentoo i understood that
the problem is with centos 6 tproxy or other things but not squid.



also i didn't found any logic README or info about tproxy that can
explain the logic of it so in a case of problem it can be debugged.


http://wiki.squid-cache.org/Feature/Tproxy4 has everything there is. The
"More Info" link to Balabit is a README that covers what the kernal
internals do. The internals of Squid is only is two trivial bits;
inverting the IPs on arrival, binding the spoofed on on exit, the rest
is generic intercepted traffic handling (parsing the URL in originserver
format, and doing IP security checks on Host header). These are well
tested now and work in 3.2.0.14.

I'd like to know what Ubuntu and Gentoo versions you tested with and
what you conclude the problems are there. Both to push for fixes and
update that feature page.


Ubuntu 11.10 (i386) and 10.04.3 (i386 + x64), with the latest updates.
the list of development and libs packages that i have used:
sudo apt-get install build-essential libldap2-dev libpam0g-dev libdb-dev 
dpatch cdbs libsasl2-dev debhelper libcppunit-dev libkrb5-dev comerr-dev 
libcap2-dev libexpat1-dev libxml2-dev libcap2-dev dpkg-dev curl 
libssl-dev libssl0.9.8 libssl0.9.8-dbg libcurl4-openssl-dev


The most stable version was 3.2.0.8 (there was a problem with the SSL
dependencies that was fixed later).

Since version 3.2.0.12 I have had speed problems.
Since version 3.2.0.13 I have had a problem where some pages that are not
supposed to be cached are being cached, and on version 3.2.0.14 in
interception mode I'm getting a "request is too large" error or something
like that (there is a thread on the mailing list).


The Gentoo I was using is from a roughly month-old Portage snapshot, with
Linux kernel 2.6.36-rXXX (don't remember exactly now) (i386).


On Gentoo you have everything you need to build Squid with the distro;
just configure and make. (The init.d scripts were taken from the Gentoo
Portage and modified.)


I am building my Squid with:
./configure --prefix=/opt/squid32013 --includedir=/include 
--mandir=/share/man --infodir=/share/info 
--localstatedir=/opt/squid32013/var --disable-maintainer-mode 
--disable-dependency-tracking --disable-silent-rules --enable-inline 
--enable-async-io=8 --enable-storeio=ufs,aufs 
--enable-removal-policies=lru,heap --enable-delay-pools 
--enable-cache-digests --enable-underscores --enable-icap-client 
--enable-follow-x-forwarded-for 
--enable-digest-auth-helpers=ldap,password 
--enable-negotiate-auth-helpers=squid_kerb_auth 
--enable-external-acl-helpers=ip_user,ldap_group,session,unix_group,wbinfo_group 
--enable-arp-acl --enable-esi --disable-translation 
--with-logdir=/opt/squid32013/var/log 
--with-pidfile=/var/run/squid32013.pid --with-filedescriptors=65536 
--with-large-files --with-default-user=proxy --enable-linux-netfilter 
--enable-ltdl-convenience --enable-snmp


I'm changing the install directory for each Squid release version.

The funny thing is that Fedora 16 with kernel 3.1.6 and Squid 3.2.0.13
from the repo just works fine.







after all this, what do you think about releasing a list of
"supported" linux distors that seems to work fine on every squid release?
i'm talking about the major releases and not about "puppy linux" or
"dsl".


You mean as part of the release? that is kind of tricky because none of
the distros does run-testing until after the release package is
available. Sometimes months or even years after, as in the case of
certain RPM based distros having an 12-18 month round-trip between our
release and bug feedback. Naturally, its way too late to bother
announcing those problems and the faster moving distros appear to have
numerous unfixed bugs in a constantly changing set, a very fuzzy
situation in overview. If I'm aware of anything problematic to a
specific distro in advance I try to mention it in the release
announcement. http

[squid-users] tproxy specific issue on squid 3.1.18 using fedora 15

2012-01-05 Thread Eliezer Croitoru
I made a Squid url_rewriter for caching purposes and it works on Ubuntu
and on Fedora 16 (i686).

It also works on Fedora 15 with the 3.2.0.12 RPM from the Fedora 16 repo.
The problem is that when the rewriter replies with the new address, the
session Squid creates goes from the client to the server instead of from
the Squid machine to the web server.

What I see using ss is (the TPROXY port is 8081):
SYN-SENT 0 1 192.168.102.100:38660 192.168.102.3:tproxy


But using 3.2.0.12, and on the other systems, I see connections from
192.168.102.3:high_port_number to 192.168.102.3:tproxy

or
127.0.0.1:high_port_number to 127.0.0.1:tproxy

and everything works fine.

The rewriter has a built-in log function, and Squid only does this when
the rewriter redirects and TPROXY is in use.

With a regular forward proxy everything works fine.

My config is the basic one, with the exception of TPROXY and the rewriter:

#start lines added
http_port 3129 tproxy
url_rewrite_program /opt/nginx.cache.rb
url_rewrite_host_header off
#end lines added

So: with the 3.2 branch it works, but not with 3.1 (3.1.10-3.1.18).

Also, I can't compile the 3.2 branch on Fedora 15 because it always ends
up with some error.

I need to know the list of dependencies for compilation.
I had a SASL problem and installed the SASL dev libs, but now it's stuck
on an ftp.cc error:

g++: warning: switch '-fhuge-objects' is no longer supported
ftp.cc: In function 'void ftpReadEPSV(FtpStateData*)':
ftp.cc:2371:9: error: variable 'n' set but not used
[-Werror=unused-but-set-variable]

cc1plus: all warnings being treated as errors

make[3]: *** [ftp.o] Error 1
make[3]: Leaving directory `/opt/src/squid-3.2.0.8/src'
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory `/opt/src/squid-3.2.0.8/src'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/opt/src/squid-3.2.0.8/src'
make: *** [all-recursive] Error 1


Thanks
Eliezer


Re: [squid-users] Unable to resolve internally w/ Squid

2012-01-05 Thread Eliezer Croitoru

On 06/01/2012 01:48, berry guru wrote:

I'm running Squid 2.7(stable) on Ubuntu 11.10.  I'm having some
trouble with internal DNS. For some reason I get the following error:

ERROR
The requested URL could not be retrieved.
Unable to determine IP address from host name "server name goes here"
The DNS returned:
Server Failure: The name server was unable to process this query.

I've added dns_nameservers 192.168.100.237 which is my DNS server in
the squid.conf. I can resolve externally and get out to the Internet
just fine.

Am I missing a configuration somewhere?

I've seen your post here and also on the Ubuntu server forum.
Where did you add the DNS name server: on the client or on the server?
If it's saying it couldn't resolve the IP address, that can be caused by
the config or by other options.

Try squid3 rather than the old 2.7, and make sure that when you do an
nslookup on the server you actually get a result.

Add some more info about the server:
is this an intranet or internet server?

Try this:
add a debug_options directive to get more info, and look at these pages
for more debugging options:

http://www.squid-cache.org/Versions/v2/2.HEAD/cfgman/debug_options.html
http://wiki.squid-cache.org/KnowledgeBase/DebugSections

I think that if you use:
debug_options ALL,0,1,34,78

it will give info on the DNS resolving issue.
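
For the nslookup check mentioned above, something along these lines from
the Squid box itself (the failing host name is a placeholder; the
resolver address is the one from your mail):

# ask the resolver Squid is configured to use, directly
nslookup internal-server-name 192.168.100.237
# and compare with what the system resolver returns
nslookup internal-server-name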

Regards
Eliezer


[squid-users] ssorry fix to the line Re: [squid-users] Unable to resolve internally w/ Squid

2012-01-05 Thread Eliezer Croitoru

On 06/01/2012 02:51, Eliezer Croitoru wrote:

On 06/01/2012 01:48, berry guru wrote:

I'm running Squid 2.7(stable) on Ubuntu 11.10. I'm having some
trouble with internal DNS. For some reason I get the following error:

ERROR
The requested URL could not be retrieved.
Unable to determine IP address from host name "server name goes here"
The DNS returned:
Server Failure: The name server was unable to process this query.

I've added dns_nameservers 192.168.100.237 which is my DNS server in
the squid.conf. I can resolve externally and get out to the Internet
just fine.

Am I missing a configuration somewhere?

seen your post here and also on ubuntu server forum:
where did you added the dns name server at? client? server?
if it's saying it couldn't resolbe ip address it can be caused by config
or other options.
try squid3 and not the old 2.7
make sure that when are doing nslookup on the sever you are getting a
result..
add some more info on the server.
is this an intranet or internet server?

try this:
add a debug_options directive to get more info and look at these pages
to get some more debuging options
http://www.squid-cache.org/Versions/v2/2.HEAD/cfgman/debug_options.html
http://wiki.squid-cache.org/KnowledgeBase/DebugSections

i think that if you will use:
debug_options ALL,0,1,34,78

debug_options ALL,1 0,6 1,6 34,6 78,6
and if you would like to get more details, change the sixes to 9.

Eliezer


it will give info on the dns resolving issue.

Regards
Eliezer




[squid-users] got some debugging logs

2012-01-05 Thread Eliezer Croitoru
I put Squid on debug section 89 to follow TPROXY and section 17 to see
what is going on inside the other parts, and I found this:


Section 89 looks fine, not showing anything unusual about using the
client IP 192.168.102.100:


2012/01/06 04:23:54.072| IpIntercept.cc(381) NatLookup: address BEGIN: 
me= 212.179.154.226:80, client= 212.179.154.226:80, dst= 
192.168.102.100:1063, peer= 192.168.102.100:1063
2012/01/06 04:23:54.074| IpIntercept.cc(166) NetfilterTransparent: 
address TPROXY: me= 212.179.154.226:80, client= 192.168.102.100



Section 17 shows an abnormal thing
(the outgoing address to the server is the client's address and not one
of the server's own addresses):


2012/01/06 04:28:36.782| store_client::copy: 
7DEA6A0583B90AB461F576C6AEE4AA50, from 0, for length 4096, cb 1, cbdata 
0x882b5b8

2012/01/06 04:28:36.783| storeClientCopy2: 7DEA6A0583B90AB461F576C6AEE4AA50
2012/01/06 04:28:36.784| store_client::doCopy: Waiting for more
2012/01/06 04:28:36.785| FwdState::start() 'http://link
2012/01/06 04:28:36.787| fwdStartComplete: http://link
2012/01/06 04:28:36.789| fwdConnectStart: http://1link
2012/01/06 04:28:36.791| fwdConnectStart: got outgoing addr 
192.168.102.100, tos 0

2012/01/06 04:28:36.791| fwdConnectStart: got TCP FD 13


So the main problem is that the request coming from Squid is not using
the right address in TPROXY mode.
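
For reference, the debug levels used for the trace above can be set in
squid.conf with something like this (section 89 covers the NAT/TPROXY
lookups and section 17 the forwarding code, per the debug-sections list):

# keep everything else quiet and raise only the interesting sections
debug_options ALL,1 89,5 17,5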


Thanks
Eliezer




On 05/01/2012 17:20, Eliezer Croitoru wrote:

i made a squid url_rewriter for cache purposes and it works on ubunut
and on fedora 16(i686).
also it works on fedora 15 with the 3.2.0.12 rpm from fedora 16 repo.
the problem is that when the re_rewriter is replying with the address to
squid the session that squid is creating is : from the client to the
server instead from the squid machine to the web server.
what is see using ss is:(tproxy is port 8081)
SYN-SENT 0 1 192.168.102.100:38660 192.168.102.3:tproxy

but using the 3.2.0.12 and on other systems i see from
192.168.102.3:high_port_number 192.168.102.3:tproxy
or
127.0.0.1:hight_port_number 127.0.0.1:tproxy

and everything works fine.

the rewritter has a log function build-in and only when it's redirecting
and with tproxy squid is doing this thing.
on regular forward proxy everything is working fine.

my config is the basic one with the exception of tproxy and the rewritter

#start lines added
http_port 3129 tproxy
url_rewrite_program /opt/nginx.cache.rb
url_rewrite_host_header off
#end lines added

so : with the 3.2 branch it works but not on 3.1.(3.1.10-3.1.18)

also i cant compile the 3.2 branch on fedora 15 cause always it ends up
with some error.
i need to know the list of dependencies for compilation.
i had some sasl problem and i installed the sasl dev libs but now its
stuck on ftp error:
g++: warning: switch '-fhuge-objects' is no longer supported
ftp.cc: In function 'void ftpReadEPSV(FtpStateData*)':
ftp.cc:2371:9: error: variable 'n' set but not used
[-Werror=unused-but-set-variable]
cc1plus: all warnings being treated as errors

make[3]: *** [ftp.o] Error 1
make[3]: Leaving directory `/opt/src/squid-3.2.0.8/src'
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory `/opt/src/squid-3.2.0.8/src'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/opt/src/squid-3.2.0.8/src'
make: *** [all-recursive] Error 1


Thanks
Eliezer




Re: [squid-users] got some debugging logs

2012-01-05 Thread Eliezer Croitoru



section 17 show abnormail thing:
(the outgoing address to the server is the client address and not one
of the server address)

2012/01/06 04:28:36.782| store_client::copy:
7DEA6A0583B90AB461F576C6AEE4AA50, from 0, for length 4096, cb 1,
cbdata 0x882b5b8
2012/01/06 04:28:36.783| storeClientCopy2:
7DEA6A0583B90AB461F576C6AEE4AA50
2012/01/06 04:28:36.784| store_client::doCopy: Waiting for more
2012/01/06 04:28:36.785| FwdState::start() 'http://link
2012/01/06 04:28:36.787| fwdStartComplete: http://link
2012/01/06 04:28:36.789| fwdConnectStart: http://1link
2012/01/06 04:28:36.791| fwdConnectStart: got outgoing addr
192.168.102.100, tos 0


"outgoing addr" is the address Squid assigns to its end of the
squid->server connection. This appears to be correct for TPROXY.


2012/01/06 04:28:36.791| fwdConnectStart: got TCP FD 13


so the main problem is that the request that comes from squid is not
using the right address in tproxy mode.

Thanks
Eliezer




On 05/01/2012 17:20, Eliezer Croitoru wrote:

i made a squid url_rewriter for cache purposes and it works on ubunut
and on fedora 16(i686).



also it works on fedora 15 with the 3.2.0.12 rpm from fedora 16 repo.
the problem is that when the re_rewriter is replying with the address to
squid the session that squid is creating is : from the client to the
server instead from the squid machine to the web server.
what is see using ss is:(tproxy is port 8081)
SYN-SENT 0 1 192.168.102.100:38660 192.168.102.3:tproxy


I'm unclear what this is and what you mean by "squid session". I assume
that is the details Squid sent to the helper?
If so that is a second strong sign of the loop mentioned below.



but using the 3.2.0.12 and on other systems i see from
192.168.102.3:high_port_number 192.168.102.3:tproxy
or
127.0.0.1:hight_port_number 127.0.0.1:tproxy

and everything works fine.


Er, this looks like TPROXY looping the traffic, roughly:

Client --(TPROXY)--> Squid --> Rewriter --(fetch re-intercepted by TPROXY)--> Squid --> Internet


Well, I do know how TPROXY works, and the redirected address is 127.0.0.1
on ports other than the TPROXY one.
On the Squid 3.2 branch it doesn't happen.
By the way, the rewriter does its work without sending headers to the
client, and it shouldn't need to bind a 127.0.0.1 address to access
localhost.
It's not a TPROXY issue, because it works on the 3.2 branch and there is
also no trace of looping in the log.
The log shows two different requests just to illustrate the problem.
As I mentioned, TPROXY handles HTTP (port 80) and the TPROXY target
mentioned in the ss output is port 8081, so this is not looping but
something in Squid while it fetches the file from the server.


Thanks
Eliezer





Because the re-writer is not sending its background fetch requests out
of a socket the kernel has marked as Squid's for TPROXY. It needs Squid
or the rewriter to bypass the rewriter fetch if the request was coming
from the re-writer on a Squid IP (or localhost). Or to bypass TPROXY
globally for traffic generated internally by the Squid box.



the rewritter has a log function build-in and only when it's redirecting
and with tproxy squid is doing this thing.
on regular forward proxy everything is working fine.

my config is the basic one with the exception of tproxy and the
rewritter

#start lines added
http_port 3129 tproxy
url_rewrite_program /opt/nginx.cache.rb
url_rewrite_host_header off


If the domain is being changed in the URL Host: header re-writing ON is
critical if the traffic is going back to a tproxy or intercept port.
Given the above loop is likely, this could be the problem.


#end lines added

so : with the 3.2 branch it works but not on 3.1.(3.1.10-3.1.18)

also i cant compile the 3.2 branch on fedora 15 cause always it ends up
with some error.
i need to know the list of dependencies for compilation.


Your guess is as good as mine. It is specific to the features you are
building. The official Fedora RPM or its documentation should be a good
guideline for what Fedora packages are related or needed.


i had some sasl problem and i installed the sasl dev libs but now its
stuck on ftp error:
g++: warning: switch '-fhuge-objects' is no longer supported
ftp.cc: In function 'void ftpReadEPSV(FtpStateData*)':
ftp.cc:2371:9: error: variable 'n' set but not used
[-Werror=unused-but-set-variable]
cc1plus: all warnings being treated as errors


Aha. That was fixed as part of a later update. There was a missing
condition in the if() statement around line 2440. The code there should
contain the following, with line 2371 the definition of "int n;" moved
down as shown:

char h1, h2, h3, h4;
unsigned short port;
int n = sscanf(buf, "(%c%c%c%hu%c)", &h1, &h2, &h3, &port, &h4);

if (n < 4 || h1 != h2 || h1 != h3 || h1 != h4) {
debugs(9, DBG_IMPORTANT, "Invalid EPSV reply from " <<


Amos




Re: [squid-users] How many proxies to run?

2012-01-13 Thread Eliezer Croitoru

On 12/01/2012 19:58, Gerson Barreiros wrote:

I have an unique server doing this job. My scenario is most the same
as mentioned above.

I just want to know if i can make this server a Virtual Machine, that
will use shared hard disk / memory / cpu with another VMs.

A web proxy on a VM is not the best choice in a heavily loaded
environment. You can run it on a VM, but in most cases it will mean lower
performance.

I have one site with a 40 Mbps ATM line that uses 2 Squid servers on VMs
and it works fine; another is an ISP with 4 machines and a total of
800 Mbps of output to the clients.


Gathering statistics is a must before drawing a conclusion.

Eliezer



Re: [squid-users] how about releasing the major "supported" linux distros results? and what about dynamic content sites?

2012-01-24 Thread Eliezer Croitoru

On 23/01/2012 21:56, Henrik Nordström wrote:

ons 2012-01-04 klockan 12:48 +0200 skrev Eliezer Croitoru:


the funny thing  is that fedora 16 with kernel 3.1.6 and squid 3.2.0.13
from the repo just works fine.


And have nothing special for making Squid run at all.. except not
mucking around with it and staying as close to upstream as possible.
The problem is that not everyone can upgrade their systems or follow the
progress of Squid development.


Eliezer



Regards
Henrik





Re: [squid-users] Help-me recompile squid

2012-02-11 Thread Eliezer Croitoru

On 11/02/2012 19:03, João Paulo Ferreira wrote:

Is there any way to know what parameters were used by the YUM installation?

2012/2/11 Andrew Beverley:

On Sat, 2012-02-11 at 11:36 -0200, João Paulo Ferreira wrote:

Does anyone know how do I recompile my squid that was installing the
tool using yum (centos)?


I've never used yum, but you should be able to recompile by downloading
the packaged sources. The following page will probably help:

http://wiki.centos.org/HowTos/RebuildSRPM

Andy







Use squid -v to get the configure options the existing build used.

Eliezer


Re: [squid-users] Delay response

2012-02-14 Thread Eliezer Croitoru

On 14/02/2012 20:32, Wladner Klimach wrote:

Squid users,

when I try to get this url www.tcu.gov.br it takes too long when it
even does. Look at my set up configs in squid.conf:

It's not your Squid configuration.
I am using Squid and it does the same, but plain wget gave me the same
result too: a long time to resolve the site and then only seconds to
download the page.

The URL you gave is a redirect to this other page, so here is what I got:
time wget http://portal2.tcu.gov.br/TCU
--2012-02-15 02:41:13--  http://portal2.tcu.gov.br/TCU
Resolving portal2.tcu.gov.br... 189.114.57.177, 189.21.130.177
Connecting to portal2.tcu.gov.br|189.114.57.177|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 85113 (83K) [text/html]
Saving to: `TCU.1'

100%[=>] 
85,113  94.4K/s   in 0.9s


2012-02-15 02:42:10 (94.4 KB/s) - `TCU.1' saved [85113/85113]


real0m56.487s
user0m0.000s
sys 0m0.020s

So it's a total of 56.4 seconds minus 0.9 seconds to actually download
the file, which leaves about 55.5 seconds just to resolve the site name/IP.
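
A quick way to confirm that the delay really is DNS resolution rather
than the HTTP fetch (hostname taken from the output above):

time host portal2.tcu.gov.br
# or, with the resolver's answer time reported explicitly:
dig portal2.tcu.gov.br | grep "Query time"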


But without Squid, and using aria2:
time aria2c http://portal2.tcu.gov.br/TCU
[#1 SIZE:0B/0B CN:1 SPD:0Bs]
2012-02-15 02:45:29.830431 NOTICE - #1 - Download has already completed: 
/tmp/TCU


2012-02-15 02:45:29.830763 NOTICE - Download complete: /tmp/TCU

Download Results:
gid|stat|avg speed  |path/URI
===++===+===
  1|  OK|n/a|/tmp/TCU

Status Legend:
 (OK):download completed.

real0m2.397s
user0m0.180s
sys 0m0.020s

It takes only 2 seconds.

So I think it's something related to HTTP/1.1 vs HTTP/1.0 behaviour and
not a Squid-only problem.


Regards
Eliezer





cache_store_log none
maximum_object_size 16384 KB
minimum_object_size 0 KB
maximum_object_size_in_memory 50 KB
cache_swap_low 95
cache_swap_high 98
ipcache_size6000
ipcache_low 90
ipcache_high 92
fqdncache_size 6000
memory_replacement_policy heap GDSF
cache_replacement_policy heap LFUDA
half_closed_clients off
memory_pools off
quick_abort_min 0 KB
quick_abort_max 0 KB
client_db off
buffered_logs on

Could be some of this configs the reason of it?


Att,

Wladner




Re: [squid-users] finding the bottleneck

2012-02-16 Thread Eliezer Croitoru

Hey there Eli (I think I know you),
Any SSL interception will make the connection slower, but it can be tricky.
Gmail is one big example of a site that has problems when working over
plain HTTP; over HTTPS it works better, and HTTPS also avoids many
problems because most ISPs won't do SSL interception.

It also depends on your ISP's interception machines:
if they have a lot of users and underpowered machines, that will cause
slowdowns!
If you want to make sure it's not from your side, connect a server inside
your own infrastructure and use the Apache benchmarking tools to load-test
the connection to it, just to understand the difference Squid makes.

You can also try iptables rules to bypass Squid for tests; that way you
can clearly see whether your infrastructure is the cause of the slowdowns.
A sketch of both tests follows below.
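
A sketch of both tests (host names, addresses and the NAT rule are
assumptions about the local setup):

# load-test an internal server directly, then through the proxy, with ApacheBench
ab -n 200 -c 10 http://testserver.internal/
ab -n 200 -c 10 -X proxyhost:3128 http://testserver.internal/

# temporarily exempt one test client from interception to compare with/without Squid
iptables -t nat -I PREROUTING -s 192.168.1.50 -p tcp --dport 80 -j ACCEPT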


Regards
Eliezer


On 17/02/2012 01:11, E.S. Rosenberg wrote:

2012/2/16 Luis Daniel Lucio Quiroz:

Comments, behind...

Le 12 janvier 2012 06:53, E.S. Rosenberg  a écrit :

2012/1/11 jeffrey j donovan:


On Jan 10, 2012, at 7:45 AM, E.S. Rosenberg wrote:


Hi,
We run a setup where our users are passing through 0-2 proxies before
reaching the Internet:
- https 0
- http transparent 1 (soon also 2)
- http authenticated 2

Lately we are experiencing some (extreme) slowness even-though the
load on the line is only about half the available bandwidth, we know
that on the ISP side our traffic is also passing through all kinds of
proxies/filters etc.

So , your ISP does extra filtering? (just to state clear)

Yes.



I would like to somehow be able to see where the slowdowns are
happening to rule out that it's not our side at fault, but I don't
really know what tool/tools I could use to see what is going on here.

Have you identify what kind of traffic is slow and what it is not?
This is very important.

Most of the traffic is regular http.




We suspect that the slowness may be related to the ISP doing
Man-in-the-Middle on non-banking SSL traffic (as per request of
management), but I really want to rule our side out first

This statement is almost impossible, if they were all your HTTPS
conexion will comply about certificate issues. Is that happens?

It does, the stations that we control have the ISPs CA installed so
they actually don't get warned but wireless clients do.

For now we turned of the https filtering and it seems that at least
some of the slowness complaints may have been due to unrelated
infrastructure problems.
But I still would like to know where I can look, we have cacti
graphing our throughput so I have an idea of the load on the line
itself, we have cachemgr installed on all the proxies but I'm afraid I
am not so good at reading&  understanding all of it's output.

Regards and thanks,
Eli


Thanks,
Eli



Hi eli, are you caching ? or going direct.


Hi, sorry for the slow reply.
We are doing some caching, so far we have not optimized it, Calamaris
reports our efficiency between 6-10% on different proxies...

Too low, a good proxy shall give you 30% aprox.


Thanks,
Eliyahu - אליהו


LD




Re: [squid-users] URL rewrite on Squid 3.1.6 as ReverseProxy for Exchange OWA

2012-02-20 Thread Eliezer Croitoru

On 20/02/2012 17:59, Fried Wil wrote:
The simple way is to use a redirect page on the OWA web server:
change the index.html page served at "/".
Some sources for that:
http://www.web-source.net/html_redirect.htm
http://www.quackit.com/html/html_redirect.cfm
http://billstclair.com/html-redirect2.html

Regards,
Eliezer

Hello Guys,

I'have a problem with a Squid 3.1.6 as reverse proxy for an exchange
usage ... (rpc not compatible with apache2). I would  like  to redirect
the "/" to "/owa". How can i do that ? thx guys

This is my configuration of squid.conf just for OWA Access.

$
https_port SQUID_IP:443 accel cert=/etc/squid3/external_webmail.crt
key=/etc/squid3/server.key defaultsite=webmail.domain.foo

cache_peer IP_EXCHANGE_SERVER parent 443 0 no-query originserver
login=PASS ssl sslcert=/etc/squid3/EXCHANGE_server.pem
sslflags=DONT_VERIFY_PEER name=exchangeServer

acl url_allow url_regex -i ^https://webmail.domain.foo/.*$
acl url_allow url_regex -i ^https://webmail.domain.foo/rpc.*$
acl url_allol url_regex -i ^https://webmail.domain.foo/exchange.*$
acl url_allow url_regex -i ^https://webmail.domain.foo/exchweb.*$
acl url_allow url_regex -i
^https://webmail.domain.foo/Microsoft-Server-ActiveSync.*$
acl url_allow url_regex -i ^https://webmail.domain.foo/owa.*$
acl url_allow url_regex -i ^https://webmail.domain.foo/EWS.*$
acl url_allow url_regex -i ^https://webmail.domain.foo/autodiscover.*$



acl OWA dstdomain webmail.domain.foo
acl OWA-SITE urlpath_regex
(\/rpc\/|\/owa\/|\/oab\/|\/autodiscover\/|\/Microsoft-Server-ActiveSync|\/public\/|\/exchweb\/|\/EWS\/|\/exchange\/)
acl OWA-DIRS url_regex ^https://EXCHANGE_SERVER/owa/

cache_peer_access exchangeServer allow OWA
cache_peer_access exchangeServer deny all
never_direct allow OWA

cache_peer_access exchangeServer allow OWA-SITE
cache_peer_access exchangeServer deny all
never_direct allow OWA-SITE

cache_peer_access exchangeServer allow OWA-DIRS
cache_peer_access exchangeServer deny all
never_direct allow OWA-DIRS

I wanna just to redirect the https://webmail.domain.foo/ to
https://EXCHANGE_SERVER/owa/

I saw "url_rewrite_program" but it doesn't works :(

Thx in adavance.

Wilfried




Re: [squid-users] Page seems to load for ever

2012-02-23 Thread Eliezer Croitoru

Also, no problem on Squid 3.2.0.8.

Eliezer

On 23/02/2012 15:54, karj wrote:

Hi  all,

I’ have a problem with the first page of a site, that’s behind squid.
The page of the site www.tovima.gr seems to load forever (using chrome and
firefox).
When I bring squid out the equation (direct link to the site is
akawww.tovima.gr) the page loads normally.
I can’t find what is going wrong.

My Squid Version under linux is: 2.7/STABLE9

Any help is greatly appreciate.

Thanks in advance
Yiannis





Re: [squid-users] external acl code examples

2012-02-27 Thread Eliezer Croitoru

On 27/02/2012 14:49, E.S. Rosenberg wrote:

Hi all,
I would like to create a small external acl program for use in our
organization and I was wondering are there any code examples of
existing external acls in the public domain?
I have tried searching a bit but other than the spec I haven't really
found a lot.

The most promising bit I found so far is this example from the squid
2.5 days, is it still valid?:
http://etutorials.org/Server+Administration/Squid.+The+definitive+guide/Chapter+12.+Authentication+Helpers/12.5+External+ACLs/

Thanks,
Eli

It is still valid as far as I know.
The current details are in the Squid documentation:
http://www.squid-cache.org/Doc/config/external_acl_type/

A more advanced configuration is documented at:
http://www.visolve.com/system_services/opensource/squid/squid30/externalsupport-3.php#external_acl_type

It looks better than the Squid documentation (my opinion), but the
etutorials page has more samples.



Regards,
Eliezer


Re: [squid-users] external acl code examples

2012-02-27 Thread Eliezer Croitoru

On 27/02/2012 19:35, E.S. Rosenberg wrote:

2012/2/27 Eliezer Croitoru:

On 27/02/2012 14:49, E.S. Rosenberg wrote:


Hi all,
I would like to create a small external acl program for use in our
organization and I was wondering are there any code examples of
existing external acls in the public domain?
I have tried searching a bit but other than the spec I haven't really
found a lot.

The most promising bit I found so far is this example from the squid
2.5 days, is it still valid?:

http://etutorials.org/Server+Administration/Squid.+The+definitive+guide/Chapter+12.+Authentication+Helpers/12.5+External+ACLs/

Thanks,
Eli


it is still vaild as long as i know.
the vaild are in the wiki at squid documentation:
http://www.squid-cache.org/Doc/config/external_acl_type/

a more advanced configuration is being documented at:
http://www.visolve.com/system_services/opensource/squid/squid30/externalsupport-3.php#external_acl_type

it looks better then on the squid documentation (my opinion) but on
etutorials they have more samples.


Regards,
Eliezer


I meant more in the trend of the perl script of the page I link to, I
wanted to get a feel of what the script/program has to behave like.
But I think the perl script actually given enough of an idea, I'm
going to see if I can create something like that, I don't know yet
what language though, speed will be important...


What is the purpose of the script?
If you can give me more details I can try to help you with it.

I was using some Ruby scripts for URL rewriting and they worked quite
fast, and the program structure is quite simple:
read a line from the input (i.e. from Squid),
split the input,
check each part of the input against your criteria,
and depending on the result of the match send OK or ERR back to the
caller (terminal or Squid) for a good or bad match.
A minimal sketch of such a helper follows below.


It took me a while to understand how the URL rewriting helpers work, but
both kinds of helper follow the same idea.


If you want to see other examples, you can look at the basic_db_auth
script supplied with Squid.
It is meant for MySQL authentication, so it takes input from the caller
(terminal or Squid) and returns OK or ERR.

Quite simple.
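
A minimal sketch of such an external ACL helper in shell (the %LOGIN
format line and the allowed-users file are assumptions for illustration;
squid.conf would declare it with something like:
external_acl_type check_user %LOGIN /usr/local/bin/check_user.sh):

#!/bin/bash
# Read one request per line from Squid (non-concurrent protocol) and
# answer OK or ERR on stdout; the first token is treated as the user name.
while read -r user rest; do
    if grep -qx "$user" /etc/squid/allowed_users; then
        echo "OK"
    else
        echo "ERR"
    fi
done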

Regards,
Eliezer





Thanks,
Eliyahu - אליהו




Re: [squid-users] Implement Tproxy on Debian squeeze

2012-03-02 Thread Eliezer Croitoru

It's a Linux kernel module, and you should first check whether it exists
and is loaded.
Use:
lsmod | grep -i tproxy

to see if it's loaded.

To check whether the kernel ships the module at all, run:
modprobe -l | egrep -i "tproxy|socket"

You should see the two TPROXY modules and also some iptables socket-match
modules.

If you didn't do any of the above before running the iptables command,
these should tell you whether you have TPROXY support as a kernel module.
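
Also note that on recent kernels (2.6.28 and later, including the 3.2
backport) there is no separate "tproxy" iptables table; the TPROXY target
lives in the mangle table, roughly like this (interface and ports here are
examples only):

iptables -t mangle -N DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
iptables -t mangle -A PREROUTING -i eth0 -p tcp --dport 80 -j TPROXY --tproxy-mark 0x1/0x1 --on-port 3129
ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100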


Regards,
Eliezer

On 02/03/2012 19:33, David Touzeau wrote:


There is bad news, backports did not change something according Tproxy
Only kernel 3.2x is available on backports repository.

apt-get install -t squeeze-backports linux-image-3.2.0-0.bpo.1-686-pae
apt-get install -t squeeze-backports upgrade
reboot
my kernel is now
Linux squid32.localhost.localdomain 3.2.0-0.bpo.1-686-pae #1 SMP Sat Feb
11 14:57:20 UTC 2012 i686 GNU/Linux

iptables -t tproxy -A PREROUTING -i eth0 -p tcp -m tcp --dport 80 -j
TPROXY --on-port 80
WARNING: All config files need .conf: /etc/modprobe.d/fuse, it will be
ignored in a future release.
iptables v1.4.8: can't initialize iptables table `tproxy': Table does
not exist (do you need to insmod?)
Perhaps iptables or your kernel needs to be upgraded

grep -i iptables /boot/config-`uname -r`
CONFIG_IP_NF_IPTABLES=m
CONFIG_IP6_NF_IPTABLES=m
# iptables trigger is under Netfilter config (LED target)

SNIF, SNIF


Le 02/03/2012 17:03, David Touzeau a écrit :

iptables -t tproxy -A PREROUTING -i eth0 -p tcp -m tcp --dport 80 -j
TPROXY --on-port 80




Re: [squid-users] SQUID TPROXY option does not work when URL is on the same machine as SQUID

2012-03-07 Thread Eliezer Croitoru

you need to add a first rule such as:
ip6tables -t mangle -A PREROUTING -p tcp -d (IP of the machine) --dport 
80 -j ACCEPT

= here all the other iptables rules =
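as a fuller sketch of the ordering, reusing the interface, port and the box's 
own address from the rules and URL quoted below (2001:4b8:1::549 is assumed to 
be the address the local web server answers on):

# 1. let traffic aimed at the local web server bypass the interception
ip6tables -t mangle -A PREROUTING -p tcp -d 2001:4b8:1::549 --dport 80 -j ACCEPT
# 2. then the existing DIVERT / TPROXY rules, unchanged
ip6tables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
ip6tables -t mangle -A PREROUTING -i eth0.20 -p tcp --dport 80 -j TPROXY --tproxy-mark 0x1/0x1 --on-port 8085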

Regards
Eliezer

On 05/03/2012 20:09, Vignesh Ramamurthy wrote:

Hello,

We are using squid to transparently proxy the traffic to a captive
portal that is residing on the same machine as the squid server. The
solution was working based on a NAT REDIRECT . We are moving the
solution to TPROXY based now as part of migration to IPv6. The TPROXY
works fine in intercepting traffic and also successfully able to allow
/ deny traffic to IPv6 sites. We are facing a strange issue when we
try to access a URL in the same machine that hosts the squid server.
The acces hangs and squid is not able to connect to the URL. We are
having AOL webserver to host the webpage.

All the configurations as recommended by the squid sites are done.
->  Firewall rules with TPROXY and DIVERT chian has been setup as below

ip6tables -t mangle -N DIVERT
ip6tables -t mangle -A DIVERT -j MARK --set-mark 1
ip6tables -t mangle -A DIVERT -j ACCEPT
ip6tables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
ip6tables -t mangle -A PREROUTING -m tos --tos 0x20 -j ACCEPT
ip6tables -t mangle -A PREROUTING  -i eth0.20 -p tcp --dport 80 -j
TPROXY --tproxy-mark 0x1/0x1 --on-port 8085
ip6tables -t mangle -A PREROUTING -j ACCEPT

->  Policy routing to route proxied traffic to the local box is also
done as recommended
16383:  from all fwmark 0x1 lookup 100
16390:  from all lookup local
32766:  from all lookup main

ip -6 route show table 100
local default dev lo  metric 1024
local default dev eth0.20  metric 1024


Squid configuration used is standard and have provided below a
snapshot of cache.log. Running squid in full debug level with max
logging. I have provided the final set of logs for this transaction.
The URL accessed in the test is
http://[2001:4b8:1::549]/sample_page.adp.

Appreciate any assistance / pointers to solve this. Please do let me
know if any additional information is required.

2012/03/05 04:29:26.320 kid1| HTTP Server REQUEST:
-
GET /sample_page.adp HTTP/1.1
User-Agent: w3m/0.5.2
Accept: text/html, text/*;q=0.5, image/*, application/*, audio/*, multipart/*
Accept-Encoding: gzip, compress, bzip, bzip2, deflate
Accept-Language: en;q=1.0
Host: [2001:4b8:1::549]
Via: 1.0 nmd.tst26.aus.wayport.net (squid/3.2.0.15-20120228-r11519)
X-Forwarded-For: 2001:4b8:1:5:250:56ff:feb2:2cfc
Cache-Control: max-age=259200
Connection: keep-alive


--
2012/03/05 04:29:26.320 kid1| Write.cc(21) Write:
local=[2001:4b8:1:5:250:56ff:feb2:2cfc]:43673
remote=[2001:4b8:1::549]:80 FD 13 flags=25: sz 417: asynCall
0x871f6e8*1
2012/03/05 04:29:26.320 kid1| ModPoll.cc(149) SetSelect: FD 13,
type=2, handler=1, client_data=0x84df560, timeout=0
2012/03/05 04:29:26.320 kid1| HttpStateData status out: [ job7]
2012/03/05 04:29:26.321 kid1| leaving AsyncJob::start()
2012/03/05 04:29:26.321 kid1| event.cc(252) checkEvents: checkEvents
2012/03/05 04:29:26.321 kid1| The AsyncCall MaintainSwapSpace
constructed, this=0x871ff48 [call204]
2012/03/05 04:29:26.321 kid1| event.cc(261) will call
MaintainSwapSpace() [call204]
2012/03/05 04:29:26.321 kid1| entering MaintainSwapSpace()
2012/03/05 04:29:26.321 kid1| AsyncCall.cc(34) make: make call
MaintainSwapSpace [call204]
2012/03/05 04:29:26.321 kid1| event.cc(344) schedule: schedule: Adding
'MaintainSwapSpace', in 1.00 seconds
2012/03/05 04:29:26.321 kid1| leaving MaintainSwapSpace()
2012/03/05 04:29:27.149 kid1| event.cc(252) checkEvents: checkEvents
2012/03/05 04:29:27.149 kid1| The AsyncCall memPoolCleanIdlePools
constructed, this=0x871ff48 [call205]
2012/03/05 04:29:27.149 kid1| event.cc(261) will call
memPoolCleanIdlePools() [call205]
2012/03/05 04:29:27.149 kid1| entering memPoolCleanIdlePools()
2012/03/05 04:29:27.149 kid1| AsyncCall.cc(34) make: make call
memPoolCleanIdlePools [call205]
2012/03/05 04:29:27.150 kid1| event.cc(344) schedule: schedule: Adding
'memPoolCleanIdlePools', in 15.00 seconds
2012/03/05 04:29:27.150 kid1| leaving memPoolCleanIdlePools()
2012/03/05 04:29:27.165 kid1| event.cc(252) checkEvents: checkEvents
2012/03/05 04:29:27.165 kid1| The AsyncCall fqdncache_purgelru
constructed, this=0x871ff48 [call206]
2012/03/05 04:29:27.165 kid1| event.cc(261) will call
fqdncache_purgelru() [call206]
2012/03/05 04:29:27.165 kid1| entering fqdncache_purgelru()
2012/03/05 04:29:27.165 kid1| AsyncCall.cc(34) make: make call
fqdncache_purgelru [call206]
2012/03/05 04:29:27.165 kid1| event.cc(344) schedule: schedule: Adding
'fqdncache_purgelru', in 10.00 seconds
2012/03/05 04:29:27.166 kid1| leaving fqdncache_purgelru()




Re: [squid-users] whitelisted IP problem

2012-03-19 Thread Eliezer Croitoru

On 19/03/2012 18:58, Vijay S wrote:

Hi

I have a my server box hosting apache and squid on centos machine.
When I send my request for clients feeds it works as they have
whitelisted my IP address, and when I make the call via squid its give
me invalid IP. I checked the access log for more information and found
out instead of sending my IP address its sending the localhost IP
address (127.0.0.1).

i'm still trying to understand your network infrastructure.
you have one apache server that also hosts squid?
can you give the logs output?
what is the /etc/hosts content?
by clients do you mean your squid clients?
what do you mean by "whitelisted your ip address"?
is the apache server listening on port 80?
can you access it directly by ip + port 80? (no proxy)
and with the proxy it's not working?
if so, then try changing the hosts file entry for the hostname to
external_ip www.hostname.domain

Regards,
Eliezer


I googled a little and found that using tcp_outgoing_address directive
I can control the outgoing IP address  and to my bad luck this didn’t
work

My configuration file is as follows

acl all src all
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/32
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports

http_access allow localhost
http_access deny all

icp_access allow all

http_port 3128

visible_hostname loclahost
debug_options ALL,1 33,2 28,9
tcp_outgoing_address 122.166.1.184

Can somebody help me with configuration for the my servers. It will be
of great help.

Thanks&  Regards
Vijay




--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
elilezer  ngtech.co.il



Re: [squid-users] whitelisted IP problem

2012-03-19 Thread Eliezer Croitoru

On 20/03/2012 00:36, Vijay S wrote:

Sorry i cannot share the url and hence im replacing the feed as
http://feeds.example.com/newsfeeds.xml

On Tue, Mar 20, 2012 at 1:37 AM, Eliezer Croitoru  wrote:

On 19/03/2012 18:58, Vijay S wrote:


Hi

I have a my server box hosting apache and squid on centos machine.
When I send my request for clients feeds it works as they have
whitelisted my IP address, and when I make the call via squid its give
me invalid IP. I checked the access log for more information and found
out instead of sending my IP address its sending the localhost IP
address (127.0.0.1).


i'm still trying to understand your network infrastructure.
you have one apache server that also hosts squid?

Yes


can you give the logs output?

1332194292.909  1 192.168.1.10 TCP_DENIED/403 3480 CONNECT
feeds.example.com:80 - NONE/- text/html
1332194335.536  1 192.168.1.10 TCP_DENIED/403 3480 CONNECT
feeds.example.com:80 - NONE/- text/html
1332194399.852  1 192.168.1.10 TCP_DENIED/403 3480 CONNECT
feeds.example.com:80 - NONE/- text/html



what is the /etc/hosts content?

122.166.1.184 localhost
122.166.1.184 reactmedia.com
122.166.1.184 rm117



by clients you mean you clients of squid?

there is no squid im accessing a feeds URL
http://feeds.example.com/newsfeeds.xml


what do you mean by whitelisted your ip address?

request from my ip only can access this feeds. which they have
configured. it opens when i access from browser but when i called from
squid using php curl. it doesnot works


is the apache server is listening on port 80?

Yes


this part made me understand the problem.
if you do want to understand the problem try to get into this address:
http://www1.ngtech.co.il/myip.php
i think the problem is that the proxy is forwarding an "X-Forwarded-For" header 
on the http request, and that is what's causing the problem.

if your proxy is adding that header you will see it in the page.

in order to disable this header you can add to your squid.conf this 
directive:

request_header_access X-Forwarded-For deny Safe_ports

if it is indeed what caused the problem you should be ok.
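for reference, two other squid.conf ways to keep that header from being sent 
at all (the forwarded_for values beyond on/off need squid 3.1 or later, so 
treat this as a sketch rather than a drop-in):

# remove the X-Forwarded-For header completely (squid 3.1+)
forwarded_for delete
# or block just this header towards all destinations
request_header_access X-Forwarded-For deny all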

Regards,
Eliezer




can you access it directly by ip + port 80? (no proxy)

yes


when with proxy its not working?

True


if its so then try to change the hosts file with the hostname in it to
external_ip www.hostname.domain

its not the domain to ip mapping issue, when my request is sent its
sent as 192.168.1.10 instead 122.166.1.184. and hence the client url
is blocking me considering as the ip is not listed in there
whitelisted IP's opend for me to access.




Regards,
Eliezer



I googled a little and found that using tcp_outgoing_address directive
I can control the outgoing IP address  and to my bad luck this didn’t
work

My configuration file is as follows

acl all src all
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/32
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports

http_access allow localhost
http_access deny all

icp_access allow all

http_port 3128

visible_hostname loclahost
debug_options ALL,1 33,2 28,9
tcp_outgoing_address 122.166.1.184

Can somebody help me with configuration for the my servers. It will be
of great help.

Thanks&    Regards
Vijay





--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
elilezer    ngtech.co.il




--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
elilezer  ngtech.co.il


Re: [squid-users] whitelisted IP problem

2012-03-19 Thread Eliezer Croitoru

On 20/03/2012 01:40, Vijay S wrote:

Hi Eliezer

I did access your url and it gave me the output as

Your IP address is : 122.166.1.184

I also tried doing
request_header_access X-Forwarded-For deny Safe_ports

Still no luck, log is as follows
1332199742.075  2 192.168.1.117 TCP_DENIED/403 3481 CONNECT
feeds.example.com:80 - NONE/- text/html
1332199746.551  1 192.168.1.117 TCP_DENIED/403 3481 CONNECT
feeds.example.com:80 - NONE/- text/html

can you access my site using the proxy?
just notice your proxy config is wrong and is bound to give you this 403 denied.

the logs are saying you are being denied use of the proxy.
try adding the following to the proxy's squid.conf settings:
after> acl all src all
add> acl localnet src 192.168.1.0/24

after> acl CONNECT method CONNECT
add> http_access allow localnet Safe_ports


and i'm trying to understand...
is this a php script?
just to understand another thing:
you are running the proxy on a gateway machine, and this other machine is 
accessing the internet from the lan through it?

as far as i understand from the log you are trying to use CONNECT (normally 
reserved for SSL) over port 80?
if so then you must add an http_access rule to allow it, such as:
http_access allow localnet CONNECT Safe_ports

but adding the rules i wrote above should give you the right response.
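pulled together, the additions would sit roughly like this in your config 
(assuming the lan really is 192.168.1.0/24, as the logs suggest; remember 
http_access rules are matched top-down, so an allow has to appear before the 
deny that currently catches you):

acl localnet src 192.168.1.0/24
# ... keep the existing Safe_ports / SSL_ports / CONNECT acls ...
# place this BEFORE "http_access deny CONNECT !SSL_ports" if you keep the CONNECT tunnel to port 80
http_access allow localnet CONNECT Safe_ports
http_access allow localnet Safe_ports
http_access deny all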

Regards,
Eliezer


this is what i did

$filePath = 'http://feeds.example.com/newsfeeds.xml';
$s = curl_init($filePath);
curl_setopt($s,CURLOPT_RETURNTRANSFER,1);
curl_setopt($s, CURLOPT_HEADER, false);

curl_setopt($s, CURLOPT_HTTPPROXYTUNNEL, TRUE);
curl_setopt($s, CURLOPT_PROXY, "http://192.168.1.117:3128");
curl_setopt($s, CURLOPT_HTTP_VERSION, CURL_HTTP_VERSION_1_1);
curl_setopt($s, CURLOPT_URL, $filePath);

// Make the request
$xml = '';
$xml = curl_exec($s);
$xml = trim($xml);
curl_close($s);




On Tue, Mar 20, 2012 at 5:00 AM, Eliezer Croitoru  wrote:

On 20/03/2012 00:36, Vijay S wrote:


Sorry i cannot share the url and hence im replacing the feed as
http://feeds.example.com/newsfeeds.xml

On Tue, Mar 20, 2012 at 1:37 AM, Eliezer Croitoru
  wrote:


On 19/03/2012 18:58, Vijay S wrote:



Hi

I have a my server box hosting apache and squid on centos machine.
When I send my request for clients feeds it works as they have
whitelisted my IP address, and when I make the call via squid its give
me invalid IP. I checked the access log for more information and found
out instead of sending my IP address its sending the localhost IP
address (127.0.0.1).



i'm still trying to understand your network infrastructure.
you have one apache server that also hosts squid?


Yes


can you give the logs output?


1332194292.909  1 192.168.1.10 TCP_DENIED/403 3480 CONNECT
feeds.example.com:80 - NONE/- text/html
1332194335.536  1 192.168.1.10 TCP_DENIED/403 3480 CONNECT
feeds.example.com:80 - NONE/- text/html
1332194399.852  1 192.168.1.10 TCP_DENIED/403 3480 CONNECT
feeds.example.com:80 - NONE/- text/html



what is the /etc/hosts content?


122.166.1.184 localhost
122.166.1.184 reactmedia.com
122.166.1.184 rm117



by clients you mean you clients of squid?


there is no squid im accessing a feeds URL
http://feeds.example.com/newsfeeds.xml


what do you mean by whitelisted your ip address?


request from my ip only can access this feeds. which they have
configured. it opens when i access from browser but when i called from
squid using php curl. it doesnot works


is the apache server is listening on port 80?


Yes



this part made me  understand the problem.
if you do want to understand the problem try get into this address:
http://www1.ngtech.co.il/myip.php
i think the problem is that the proxy is forwarding a "x_forward" header on
the http request what's making the problem.
if your proxy is using "the x_forward" you will see it in the page.

in order to disable this header you can add to your squid.conf this
directive:
request_header_access X-Forwarded-For deny Safe_ports

if it is indeed what caused the problem you should be ok.

Regards,
Eliezer





can you access it directly by ip + port 80? (no proxy)


yes


when with proxy its not working?


True


if its so then try to change the hosts file with the hostname in it to
external_ip www.hostname.domain


its not the domain to ip mapping issue, when my request is sent its
sent as 192.168.1.10 instead 122.166.1.184. and hence the client url
is blocking me considering as the ip is not listed in there
whitelisted IP's opend for me to access.




Regards,
Eliezer



I googled a little and found that using tcp_outgoing_address directive
I can control the outgoing IP address  and to my bad luck this didn’t
work

My configuration file is as follows

acl all src all
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/32
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports po

[squid-users] trying to debug 3.2.0.16 behavior with workers.

2012-03-19 Thread Eliezer Croitoru
i have a small Gentoo X86_64 with kernel 3.2.1 with squid 3.2.0.16 that 
is crashing after a while when using workers.


i'm trying to debug it but have no clue on what debug flags to use in 
order to get some data on it.
i have the cache logs stored so i can extract some basic data but i dont 
know what to look for.


i am getting every couple minutes something like that:
2012/03/18 15:29:59 kid2| Failed to select source for 
'http://www.crunchyroll.com/favicon.ico'


what is the meaning of that statement? (not about this particular address).

Thanks,
Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
elilezer  ngtech.co.il


Re: [squid-users] trying to debug 3.2.0.16 behavior with workers.

2012-03-19 Thread Eliezer Croitoru

On 20/03/2012 04:50, Amos Jeffries wrote:

On 20.03.2012 15:34, Eliezer Croitoru wrote:

i have a small Gentoo X86_64 with kernel 3.2.1 with squid 3.2.0.16
that is crashing after a while when using workers.

i'm trying to debug it but have no clue on what debug flags to use in
order to get some data on it.
i have the cache logs stored so i can extract some basic data but i
dont know what to look for.


If you can get it to produce core dumps that is easiest. The usual gdb
analysis of the core gives trace and state details.

Otherwise you have to attach gdb to the running worker by PID number.
Workers log their PID to cache.log during the startup sequence, or you
can pick one from the system process listing (ps).

For that matter what is the cache.log entries produced by the worker/kid
as it died?



i am getting every couple minutes something like that:
2012/03/18 15:29:59 kid2| Failed to select source for
'http://www.crunchyroll.com/favicon.ico'

what the meaning of the statement? (not with this particular address).


It means Squid is not able to identify any cache_peer entries or DNS
A/AAAA records which can be used to fetch that URL.
weird, because the site is working, but i have had some problems with bind for 
the last week.


i will try to get a core dump using:
./squid -NCd1

the server is a forward proxy with no interception, so i should get 
the info from the core dump.
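something along these lines should get a readable trace out of it (the binary 
and core paths are just assumptions for my box):

# let the worker write a core file, and run squid in the foreground
ulimit -c unlimited
./squid -NCd1
# after a crash, load the dump into gdb and grab the backtrace
gdb /usr/local/squid/sbin/squid /var/cache/squid/core
# then inside gdb: bt full
# (coredump_dir in squid.conf controls where the core file lands)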

i increased the workers to 4, matching the number of cpus in the system.

Thanks,
Eliezer



Amos




--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
elilezer  ngtech.co.il


Re: [squid-users] whitelisted IP problem

2012-03-20 Thread Eliezer Croitoru
8
answer=0
2012/03/20 10:14:23.892| ACLChecklist::checkCallback: 0x19f0128 answer=0
2012/03/20 10:14:23.892| aclIsProxyAuth: called for SSL_ports
2012/03/20 10:14:23.892| ACL::FindByName 'SSL_ports'
2012/03/20 10:14:23.892| aclIsProxyAuth: returning 0
2012/03/20 10:14:23.892| Gadgets.cc(57) aclGetDenyInfoPage: got called for
SSL_ports
2012/03/20 10:14:23.892| aclGetDenyInfoPage: no match
2012/03/20 10:14:23.892| FilledChecklist.cc(168) ~ACLFilledChecklist:
ACLFilledChecklist destroyed 0x19f0128
2012/03/20 10:14:23.892| ACLChecklist::~ACLChecklist: destroyed 0x19f0128
2012/03/20 10:14:23.893| FilledChecklist.cc(168) ~ACLFilledChecklist:
ACLFilledChecklist destroyed 0x19f0128
2012/03/20 10:14:23.893| ACLChecklist::~ACLChecklist: destroyed 0x19f0128
2012/03/20 10:14:23.893| ConnStateData::swanSong: FD 11



Thanks&  Regards
Vijay



--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
elilezer  ngtech.co.il


Re: [squid-users] transparent caching

2012-03-20 Thread Eliezer Croitoru

On 20/03/2012 18:23, Zhu, Shan wrote:

Hi, all,

I have a fundamental question that, after studying books and on-line documents, 
I still cannot answer it myself.

That is, when configuring Squid for transparent caching, why do we need to 
forward HTTP from Port 80 to Port 3128? What makes it necessary? If we just let 
Squid to listen on Port 80, what would make the difference.

Can anyone help answer this question?
have you thought about the fact that the client is not asking for port 80 of 
the squid server\gateway?

so...
if you don't understand it i will be glad to explain it to you on the 
squid irc channel or via email.
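the short version: the client's packets are addressed to the origin server's 
port 80, not to the proxy, so the gateway has to redirect them to the port 
squid listens on and squid has to be told they were intercepted. a minimal 
sketch (assuming eth0 is the lan side; on squid 3.1+ the flag is "intercept", 
older releases used "transparent"):

# on the gateway
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-ports 3128
# in squid.conf
http_port 3128 intercept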


Regards,
Eliezer


Thanks,
Shan



--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
elilezer  ngtech.co.il


Re: [squid-users] Enabling x-forward address in logs?

2012-03-20 Thread Eliezer Croitoru

On 20/03/2012 18:47, Peter Gaughran wrote:

Hi folks,

We use two pfSense boxes for our wireless networks, all working well
with our proxy setup. The problem is, the squid access log only records
the IP address of the pfSense machines, not the actual originating IP?

Follow_x_forwarded has not been disabled anywhere?

maybe you are using masquerading on the pfsense server?
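if the pfSense boxes run their own proxy and do add the header, a sketch of 
letting squid trust it (the addresses are placeholders, and the build has to 
have follow-X-Forwarded-For support compiled in):

# trust X-Forwarded-For only when it comes from the two pfSense boxes
acl pfsense_boxes src 192.0.2.1 192.0.2.2
follow_x_forwarded_for allow pfsense_boxes
follow_x_forwarded_for deny all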

Regards,
Eliezer


Any ideas?

Cheers!



--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
elilezer  ngtech.co.il


Re: [squid-users] trying to debug 3.2.0.16 behavior with workers.

2012-03-21 Thread Eliezer Croitoru

On 20/03/2012 04:50, Amos Jeffries wrote:

On 20.03.2012 15:34, Eliezer Croitoru wrote:

i have a small Gentoo X86_64 with kernel 3.2.1 with squid 3.2.0.16
that is crashing after a while when using workers.

i'm trying to debug it but have no clue on what debug flags to use in
order to get some data on it.
i have the cache logs stored so i can extract some basic data but i
dont know what to look for.


If you can get it to produce core dumps that is easiest. The usual gdb
analysis of the core gives trace and state details.

Otherwise you have to attach gdb to the running worker by PID number.
Workers log their PID to cache.log during the startup sequence, or you
can pick one from the system process listing (ps).

For that matter what is the cache.log entries produced by the worker/kid
as it died?



i am getting every couple minutes something like that:
2012/03/18 15:29:59 kid2| Failed to select source for
'http://www.crunchyroll.com/favicon.ico'

what the meaning of the statement? (not with this particular address).


It means Squid is not able to identify any cache_peer entries or DNS
A/AAAA records which can be used to fetch that URL.

Amos


this is day 3 of running squid with 4 workers and nothing has gone 
wrong so far.

i suppose it's a dns issue that was causing the problem.
if i will find more info i will update.

Thanks,
Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
elilezer  ngtech.co.il


Re: [squid-users] Re: Squid with more storage!

2012-03-21 Thread Eliezer Croitoru

On 20/03/2012 18:07, GarethC wrote:

Ghassan,

I'm not sure adding additional hard drives will improve the performance of
Squid a great deal, if you're caching a lot of large objects over a long
period of time then it may prove beneficial. But if you're looking for
performance you would be better investing in more memory, if you can serve
the majority of 'hot objects' (frequently accessed pages / content) from
memory your response times will improve greatly. The response time for
retrieving an object from disk will be much greater due to the overhead in
disk / controller I/O.
maybe it will be slower than ram, but 15k SAS drives will roughly double the 
disk throughput compared to the SATA ones.

in any case, on a busy network with a lot of objects it is worth every penny!!
on a friend's cache SATA drives were used for a very busy network and 
there was a big difference between the 15k SAS and the 7.2k SATA drives.


Regards,
Eliezer


Hope that helps
Gareth


-
Follow me on...

My Blog
Twitter
LinkedIn
Facebook

--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-with-more-storage-tp4462800p4489355.html
Sent from the Squid - Users mailing list archive at Nabble.com.



--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
elilezer  ngtech.co.il


Re: [squid-users] SSL sites bypass authentication

2012-03-21 Thread Eliezer Croitoru

On 20/03/2012 07:31, Vishal Agarwal wrote:

Hi Amos,

You are right.

Will this work with transferring all  the traffic to http port from iptables ?

Iptables -t nat -A PREROUTING -s 192.168.1.0/24 -p tcp --dport 80 -j REDIRECT 
--to-destination serverip:3128


you do recall that https is supposed to be on port 443, right?
just block https\443 for users going around the proxy with:
iptables -t filter -I FORWARD 1 -s 192.168.1.0/24 -p tcp --dport 443 -j DROP

this will put the DROP rule first and will force users\clients to use 
the proxy for ssl connections.


Regards,
Eliezer


And further checking the traffic in squid

Acl safe_ports port 443 # Secure port
http_access allow safe_ports



Thanks/regards,
Vishal Agarwal


-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz]
Sent: Tuesday, March 20, 2012 11:11 AM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] SSL sites bypass authentication

On 20/03/2012 5:26 p.m., Vishal Agarwal wrote:

Hi,

You require to deny the db_auto just after the allow statement (See below ). I 
hope that will work.


That should be meaningless: if logged in will allow, else if logged in
will deny.

Missing a '!' ?

The final diagnosis of this problem is that the traffic was not even
entering Squid. No amount of Squid config will cause it to respond to
packets which dont even arrive.

Amos





--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
elilezer  ngtech.co.il


Re: [squid-users] Re: Allow access to just one site, blocking all the others.

2012-03-22 Thread Eliezer Croitoru

On 22/03/2012 12:38, GarethC wrote:

Hi Carlos,

Yes you can achieve this by using an ACL (Access control list), for
example...

acl allowedsites dstdomain google.com
http_access allow allowedsites



i would recommend using another acl to match the source, like:
acl localnet src 192.168.0.0/24
acl allowedsites dstdomain google.com
http_access allow localnet allowedsites
. other http_access rules
http_access deny all

or else you will become an open proxy for this site.
Regards,
Eliezer


This will allow users to connect to www.google.com (and any other subdomain
of google.com), and it will deny access to all other websites.

Hope that helps.
Gareth

-
Follow me on...

My Blog
Twitter
LinkedIn
Facebook

--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Allow-access-to-just-one-site-blocking-all-the-others-tp4494965p4495135.html
Sent from the Squid - Users mailing list archive at Nabble.com.



--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
elilezer  ngtech.co.il


Re: [squid-users] limiting connections

2012-03-26 Thread Eliezer Croitoru

On 26/03/2012 23:13, Carlos Manuel Trepeu Pupo wrote:


#!/bin/bash
result=`squidclient -h 192.168.19.19 mgr:active_requests | grep -c "$1"`
if [ $result -eq 0 ]
then
echo 'OK'
else
echo 'ERR'
fi

the code should be something like that:

#!/bin/bash
while read line; do
result=`squidclient -h 192.168.19.19 mgr:active_requests | grep -c "$line"`
if [ $result -eq 0 ]
then
echo 'OK'
else
echo 'ERR'
fi
done

but as i was looking at mgr:active_requests i noticed that squid 
responds very slowly and it can take a while to get an answer from it sometimes.


Regards,
Eliezer





# If I have the same URI then I denied. I make a few test and it work
for me. The problem is when I add the rule to the squid. I make this:

acl extensions url_regex "/etc/squid3/extensions"
external_acl_type one_conn %URI /home/carlos/script
acl limit external one_conn

# where extensions have:

\.(iso|avi|wav|mp3|mp4|mpeg|swf|flv|mpg|wma|ogg|wmv|asx|asf|deb|rpm|exe|zip|tar|tgz|rar|ppt|doc|tiff|pdf)$

http_access deny extensions limit


So when I make squid3 -k reconfigure the squid stop working

What can be happening ???



* The helper needs to be running in a constant loop.
You can find an example
http://bazaar.launchpad.net/~squid/squid/3.2/view/head:/helpers/url_rewrite/fake/url_fake_rewrite.sh
although that is re-writer and you do need to keep the OK/ERR for external
ACL.


Sorry, this is my first helper, I do not understand the meaning of
running in a constant loop, in the example I see something like I do.
Making some test I found that without this line :
result=`squidclient -h 192.168.19.19 mgr:active_requests | grep -c "$1"`
the helper not crash, dont work event too, but do not crash, so i
consider this is in some way the problem.



* "eq 0" - there should always be 1 request matching the URL. Which is the
request you are testing to see if its>1 or not. You are wanting to deny for
the case where there are *2* requests in existence.


This is true, but the way I saw was: "If the URL do not exist, so
can't be duplicate", I think isn't wrong !!



* ensure you have manager requests form localhost not going through the ACL
test.


I was making this wrong, the localhost was going through the ACL, but
I just changed !!! The problem persist, What can I do ???




Amos




--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: [squid-users] Problems with squid in a campus setup

2012-03-27 Thread Eliezer Croitoru

On 27/03/2012 12:25, Christian Loth wrote:

Hello,

first of all, thank you for your recommendations.

On Monday 26 March 2012 16:34:21 Marcus Kool wrote:

Youtube may be hogging your pipe but it is better to know than to guess.


Of course, before we decided for the proxy setup we investigated bandwidth
usage. HTTP Traffic was about 60-70% of our traffic, and a good chunk of that 
was
youtube. That's why we decided, that we would try a squid setup with youtube
caching. However it took a while before we found a solution for caching
youtube, as 3.1 hasn't implemented the necessary features yet.


The access.log shows content sizes so with a simple awk script it should
be easy to find out.

I have also seen many sites where advertisements and trackers consume 15%
bandwidth. This may vary. So blocking ads and trackers is a thing to
consider.


Thanks for this insight! This would of course be a welcome saving of bandwidth
in my personal opinion. I'm just not sure if we're allowed to do this, as the
patron of the proxy is a public-law institution and as such bound to anti-
censorship laws. Need to check with a Legalese translator.



Do not expect too much from web caching. More and more websites
  intentionally make their sites not cacheable. Look at the percentage of
  TCP_MISS in access.log or use a second awk script to find out more about
  cacheability.


Every bit counts. Before we apply for an increase of (expensive) uplink
bandwidth we want to play every trick we have up our sleeve. At the moment our
cache is still cold, because for getting the proxy running again I had to
completely wipe the cache. At the moment we have a hit:miss ratio of about
1:5.  For youtube caching we have a saved bandwidth around 100 GB for the 27th
of march (one video in particular had a size of 768 MB and was watched 19
times). Online lectures are currently en vogue.



I recommend going for a newer Squid: 3.1.19 is stable and fixes issues that
3.1.10 has.


Will do so.


On Linux, aufs has a better performance than diskd


Thanks again for this tip!




Additional memory for storing objects is 2048 MB:

cache_mem 2048 MB


Seems right. But you also need virtual memory for Squid being able to
fork processes without issues. Do have have 8 GB swap ?


Yes. 10 GB actually.



But read the FAQ about memory usage and a large disk cache:
http://wiki.squid-cache.org/SquidFaq/SquidMemory
Squid uses an additional 512*14 MB = 7.1 GB for the index of the disk
  cache. I suggest to downsize to 1 GB in-memory index which implies to
use only 73 GB disk cache.


Ah okay, here's one of my initial mistakes. I used only 10 MB for my
calculation but of course we use a 64bit squid. Out of curiosity and because I
want to learn: the reasoning for shrinking the disk cache from 512 GB to 73 GB
is because a big cache as we have it at the moment only leads to lots of stale
objects in cache which additionally burden the CPU and RAM because of in-
memory metadata?
mostly because a squid cache (not nginx) of 512GB will consume a lot 
of memory for its index, memory you would rather use to serve other content 
from, and also because of the stale objects.
if you are using nginx for youtube caching, remember to keep squid from 
caching the youtube videos as well.
if you don't, what will happen is that nginx will create new headers for 
the cached objects and then squid will cache them too but will never use 
them again.
(if you didn't change the stock default of not caching objects containing "?" and 
"cgi-bin" you are safe).
by the way, if you have some time to analyze the proxy logs you can 
find some other sites whose static files nginx could serve for you.
i have used it to also cache windows updates and some other video sites; 
the pattern was similar, so i had a huge list and could serve most of 
that static content from nginx, which left a lot of ram for the squid index.
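the defaults being referred to are, depending on the squid version, either the 
old QUERY acl pair or the newer refresh_pattern line (both ship in stock 
squid.conf files, and either one keeps squid from reusing the "?" / cgi-bin 
objects that nginx hands back):

# older default
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY
# newer default
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0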


p.s.
forgot to say that you must never get to the point where your proxy's 
ram is at its limit, or else swapping will happen and slow the 
server down.


Regards,
Eliezer



Best regards,
  - Christian Loth




--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


[squid-users] i was wondering about caching sourceforge.net files

2012-03-27 Thread Eliezer Croitoru
if it's related to anything directly in squid, i would say it refers to 
the store_url_rewrite of v2.7.

so the thing is related to caching and not directly to squid.

there are many download mirrors for sourceforge.net and i was wondering 
about the pattern of the domains in order to write a url rewrite for it.


so it's like a "cdn" network.
is there any means other than store_url_rewrite or squid+nginx to 
cache the files? (examples of the url structure below)



one sample is the dia tool links.

the download page is :
http://sourceforge.net/projects/portableapps/files/Dia%20Portable/DiaPortable_0.97.2.paf.exe/download

and the download links are:
http://downloads.sourceforge.net/project/portableapps/Dia%20Portable/DiaPortable_0.97.2.paf.exe?r=http%3A%2F%2Fportableapps.com%2Fapps%2Foffice%2Fdia_portable&ts=1332855745&use_mirror=garr

# the above is redirecting to the next:
http://garr.dl.sourceforge.net/project/portableapps/Dia%20Portable/DiaPortable_0.97.2.paf.exe


#this is from another mirror:
http://downloads.sourceforge.net/project/portableapps/Dia%20Portable/DiaPortable_0.97.2.paf.exe?r=http%3A%2F%2Fsourceforge.net%2Fprojects%2Fportableapps%2Ffiles%2FDia%2520Portable%2FDiaPortable_0.97.2.paf.exe%2Fdownload&ts=1332856258&use_mirror=internode

#thats redirecting to:
http://internode.dl.sourceforge.net/project/portableapps/Dia%20Portable/DiaPortable_0.97.2.paf.exe

so i'm guessing it's really simple:
all the downloads have the same uri and host except the lowest-level 
(mirror) part of the domain.


so it is just a matter of replacing that mirror subdomain with something else.

which means a "cdn"-like thing, and very simple to cache using nginx and squid.
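for the 2.7 store_url_rewrite route, a rough sketch of such a helper (as far 
as i remember the protocol, squid writes one request per line with the URL as 
the first field and the helper prints the URL to store the object under; the 
"dl.sourceforge.net.squid-internal" host is just a made-up canonical name):

#!/bin/bash
# collapse <mirror>.dl.sourceforge.net so every mirror shares one cache entry
while read -r url rest; do
  case "$url" in
    http://*.dl.sourceforge.net/*)
      path="${url#http://*.dl.sourceforge.net/}"
      echo "http://dl.sourceforge.net.squid-internal/${path}"
      ;;
    *)
      echo "$url"   # anything else: store under its real URL
      ;;
  esac
done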

Thanks,
Eliezer



--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: [squid-users] limiting connections

2012-03-27 Thread Eliezer Croitoru

On 27/03/2012 17:27, Carlos Manuel Trepeu Pupo wrote:

On Mon, Mar 26, 2012 at 5:45 PM, Amos Jeffries  wrote:

On 27.03.2012 10:13, Carlos Manuel Trepeu Pupo wrote:


On Sat, Mar 24, 2012 at 6:31 PM, Amos Jeffries
wrote:


On 25/03/2012 7:23 a.m., Carlos Manuel Trepeu Pupo wrote:


On Thu, Mar 22, 2012 at 10:00 PM, Amos Jeffries wrote:



On 23/03/2012 5:42 a.m., Carlos Manuel Trepeu Pupo wrote:



I need to block each user to make just one connection to download
specific extension files, but I dont know how to tell that can make
just one connection to each file and not just one connection to every
file with this extension.

i.e:
www.google.com #All connection that required
www.any.domain.com/my_file.rar #just one connection to that file
www.other.domain.net/other_file.iso #just connection to this file
www.other_domain1.com/other_file1.rar #just one connection to that
file

I hope you understand me and can help me, I have my boss hurrying me
!!!




There is no easy way to test this in Squid.

You need an external_acl_type helper which gets given the URI and
decides
whether it is permitted or not. That decision can be made by querying
Squid
cache manager for the list of active_requests and seeing if the URL
appears
more than once.



Hello Amos, following your instructions I make this external_acl_type
helper:

#!/bin/bash
result=`squidclient -h 192.168.19.19 mgr:active_requests | grep -c "$1"`
if [ $result -eq 0 ]
then
echo 'OK'
else
echo 'ERR'
fi

# If I have the same URI then I denied. I make a few test and it work
for me. The problem is when I add the rule to the squid. I make this:

acl extensions url_regex "/etc/squid3/extensions"
external_acl_type one_conn %URI /home/carlos/script
acl limit external one_conn

# where extensions have:



\.(iso|avi|wav|mp3|mp4|mpeg|swf|flv|mpg|wma|ogg|wmv|asx|asf|deb|rpm|exe|zip|tar|tgz|rar|ppt|doc|tiff|pdf)$

http_access deny extensions limit


So when I make squid3 -k reconfigure the squid stop working

What can be happening ???




* The helper needs to be running in a constant loop.
You can find an example


http://bazaar.launchpad.net/~squid/squid/3.2/view/head:/helpers/url_rewrite/fake/url_fake_rewrite.sh
although that is re-writer and you do need to keep the OK/ERR for
external
ACL.



Sorry, this is my first helper, I do not understand the meaning of
running in a constant loop, in the example I see something like I do.
Making some test I found that without this line :
result=`squidclient -h 192.168.19.19 mgr:active_requests | grep -c "$1"`
the helper not crash, dont work event too, but do not crash, so i
consider this is in some way the problem.




Squid starts helpers then uses the STDIN channel to pass it a series of
requests, reading STDOUt channel for the results. The helper once started is
expected to continue until a EOL/close/terminate signal is received on its
STDIN.

Your helper is exiting without being asked to by Squid after only one
request. That is logged by Squid as a "crash".






* "eq 0" - there should always be 1 request matching the URL. Which is
the
request you are testing to see if its>1 or not. You are wanting to deny
for
the case where there are *2* requests in existence.



This is true, but the way I saw was: "If the URL do not exist, so
can't be duplicate", I think isn't wrong !!



It can't not exist. Squid is already servicing the request you are testing
about.

Like this:

  receive HTTP request ->  (count=1)
  - test ACL (count=1 ->  OK)
  - done (count=0)

  receive a HTTP request (count-=1)
  - test ACL (count=1 ->  OK)
  receive b HTTP request (count=2)
  - test ACL (count=2 ->  ERR)
  - reject b (count=1)
  done a (count=0)


With your explanation and code from Eliezer Croitoru I made this:

#!/bin/bash

while read line; do
result=`squidclient -h 192.168.19.19 mgr:active_requests | grep
-c "$line"`

echo "$line">>  /home/carlos/guarda   # ->  Add this line to
see in a file the $URI I passed to the helper

if [ $result -eq 1 ]   # ->
With your great explain you made me, I change to "1"
then
echo 'OK'
else
echo 'ERR'
fi
done

It looks like it's gonna work, but here is another miss.
1- The "echo "$line">>  /home/carlos/guarda" do not save anything to the file.
2- When I return 'OK' then in my .conf I can't make a rule like I
wrote before, I have to make something like this: "http_access deny
extensions !limit", in the many helps you bring me guys, I learn that
the name "limit" here its not functional. The deny of "limit" its
because when there are just one connection I cant block the page.
3- With the script just like Eliezer tape it the page with the URL to
download stay loadin

Re: [squid-users] squid refresh_pattern - different url with same XYZ package

2012-04-03 Thread Eliezer Croitoru

On 03/04/2012 09:37, Amos Jeffries wrote:

On 3/04/2012 5:57 a.m., Mohsen Saeedi wrote:

Hi

I have a problem with squid refresh_pattern. i used regex on
refresh_pattern and every exe file for example cached and then clients
can download it with high rate. but when someone download from some
website(for example mozilla or filehippo) , they redirect to different
url but the same XYZ exe file. for example firefox-version.exe cached
to the disk but when another clients send new request, it redirect
automatically to different url for downloading same firefox. how can i
configure squid for this condition?



By altering your regex pattern to work with both URL. Or adding a
different pattern to match the alternative URL.

if you have some example patterns it would help simplify the problem.
i think i do understand, and if so, i just recently implemented a cache for 
sourceforge using nginx.

as for filehippo it's different.
let's say i am downloading one total commander file; the links from a 
couple of servers will be:


http://fs31.filehippo.com/7077/9965e6338ead4f6fb9d81ac695eae99a/tc80beta24.exe

http://fs30.filehippo.com/6386/9965e6338ead4f6fb9d81ac695eae99a/tc80beta24.exe

http://fs33.filehippo.com/6957/9965e6338ead4f6fb9d81ac695eae99a/tc80beta24.exe

so there is a basic url pattern to match, but you must break up the path and 
it's a bit complicated.

but as for source forge i will share the method i have used.
the following is the nginx site.conf content:
#start
server {
  listen   127.0.0.1:8086;

  location / {
root /usr/local/www/nginx_cache/files;
try_files "/sf$uri" @proxy_sf.net;
  }

  location @proxy_sf.net {
resolver 192.168.10.201;
proxy_pass http://$host$request_uri;
proxy_temp_path "/usr/local/www/nginx_cache/tmp";
proxy_store "/usr/local/www/nginx_cache/files/sf$uri";

proxy_set_header X-SourceForge-Cache "elie...@ngtech.co.il";
proxy_set_header Accept "*/*";
proxy_set_header User-Agent "sourceforge Cacher (nginx)";
proxy_set_header Accept-Encoding "";
proxy_set_header Accept-Language "";
proxy_set_header Accept-Charset "";
proxy_set_header Cache-Control "";
access_log /var/log/nginx/sf.net.access_log;
  }
}

#end of nginx site.conf

#on squid.conf i used:
acl sfdl dstdom_regex (dl\.sourceforge\.net)$

cache_peer local_sf parent 8086 0 no-query no-digest proxy-only
cache_peer_access local_sf allow sfdl
cache_peer_access local_sf deny all

never_direct allow sfdl
never_direct deny all

cache deny sfdl
cache allow all

#on the hosts file i added:
127.0.0.1   local_sf

#done
the main problem with nginx as a proxy is that it will in any case 
download the full file from the source.
meaning, if your client aborts the download it will still fetch the whole file, 
as far as i remember. (not 100% sure whether squid or nginx was the cause of that.)


i also used nginx for other sites that store images, such as 
imageshack.us, with almost the same method, because nginx seems to 
serve the files very very fast, and because of the huge amount of objects 
it spared squid's index from having to store the object information for 
the images.


Regards,
Eliezer



NOTE: refresh_pattern has nothing to do with where squid caches the
object or where it fetches from. Only how long cacheable things get stored.

Amos


--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: [squid-users] bash/mysql script not working

2012-04-03 Thread Eliezer Croitoru

On 03/04/2012 16:04, Osmany Goderich wrote:

Thanx a lot.

This is the outcome. Finally it works

#!/bin/bash
while read url
do
if [ `echo "select site from porn where site='$url'"|mysql squid -u squid 
-psquidpass|grep -v "site"` ]
then
echo OK
else
echo ERR
fi
done



you can use another query with a "like" or "regex", but you need to first 
think about what you want to check, and then, let's say, split the url 
into domain and path.
you can use sed or other tools for regular expressions, but i would 
warmly recommend taking the "basic_db_auth" perl script 
and modifying it for your needs.

in the case of regular-expression splitting it will be much easier there.
this is the helper:

http://squid.cvs.sourceforge.net/viewvc/squid/squid3/helpers/basic_auth/DB/basic_db_auth.pl.in?revision=1.4

or:

http://bazaar.launchpad.net/~squid/squid/3-trunk/view/head:/helpers/basic_auth/DB/basic_db_auth.pl.in
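for the substring-style matching you describe below, a sketch along these 
lines might do (it reuses the table and credentials from your script, turns 
the LIKE around so the URL is matched against each stored string, and ignores 
quoting/escaping of odd URLs, so treat it only as a starting point):

#!/bin/bash
# answer OK when any "site" value from the table appears inside the requested URL
while read -r url rest; do
  match=`echo "SELECT 1 FROM porn WHERE '$url' LIKE CONCAT('%', site, '%') LIMIT 1" \
         | mysql -N squid -u squid -psquidpass`
  if [ -n "$match" ]; then
    echo OK
  else
    echo ERR
  fi
done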


good luck,
Eliezer



But this is really not what I´m looking for. This scrip only compares what´s 
coming in from squid with what´s in the mysql table. That means that if the url 
is not exactly the same as the request it really does not work. So I´m looking 
to make something that works more like a url_regex acl that takes an element in 
the mysql table and sees if the request's url contains that element. With this 
working I'm sure it would make more sense since the comparisons would succeed 
without the need to fillup the table with millions of urls that otherwise would 
be blocked with just a limited number of strings

-Mensaje original-
De: Andrew Beverley [mailto:a...@andybev.com]
Enviado el: Monday, April 02, 2012 3:45 PM
Para: Osmany Goderich
CC: squid-users@squid-cache.org
Asunto: Re: [squid-users] bash/mysql script not working

On Mon, 2012-04-02 at 14:28 -0400, Osmany Goderich wrote:

Please have a look at this bash/mysql external helper. Can anyone tell
me why is it not working?

...

is there anyway I can test this directly on the server's shell



Yes, just run it on the shell as you would any other script, and input the 
expected values (as specified in squid.conf) followed by a carriage return. The 
script should return OK or ERR as appropriate.

Andy





--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: [squid-users] limiting connections

2012-04-03 Thread Eliezer Croitoru

On 03/04/2012 18:30, Carlos Manuel Trepeu Pupo wrote:

On Mon, Apr 2, 2012 at 6:43 PM, Amos Jeffries  wrote:

On 03.04.2012 02:21, Carlos Manuel Trepeu Pupo wrote:


Thanks a looot !! That's what I'm missing, everything work
fine now. So this script can use it cause it's already works.

Now, I need to know if there is any way to consult the active request
in squid that work faster that squidclient 



ACL types are pretty easy to add to the Squid code. I'm happy to throw an
ACL patch your way for a few $$.

Which comes back to me earlier still unanswered question about why you want
to do this very, very strange thing?

Amos




OK !! Here the complicate and strange explanation:

Where I work we have 128 Kbps for the use of almost 80 PCs, a few of
them use download accelerators and saturate the channel. I began to
use the ACL maxconn but I have still a few problems. 60 of the clients
are under an ISA server that I don't administrate, so I can't limit
the maxconn to them like the others. Now with this ACL, everyone can
download but with only one connection. that's the strange main idea.

what do you mean by only one connection?
if it's under one isa server then all of them share the same external IP.

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: [squid-users] Squid and FTP

2012-04-03 Thread Eliezer Croitoru

On 04/04/2012 08:12, Colin Coe wrote:

Hi all

I'm trying to get our squid proxy server to allow clients to do
outbound FTP.  The problem is that our corporate proxy uses tcp/8200
for http/https traffic and port 221 for FTP traffic.

Tailing the squid logs I see that squid is attempting to send all FTP
requests direct instead of going through the corporate proxy.

Any ideas how I'd configure squid to use the corp proxy for FTP
instead of going direct?

Thanks

CC


if you have a parent proxy you should use the never_direct directive.



acl ftp_ports port 21
acl ftp_ports port (some other ftp ports you are using)

cache_peer corp_proxy_ip parent 8085 0 no-query no-digest proxy-only
cache_peer_access corp_proxy_ip allow ftp_ports
#or add another acls to use the corporate proxy
cache_peer_access corp_proxy_ip deny all


never_direct allow ftp_ports
#or add another acls to use the corporate proxy
never_direct deny all


Regards,
Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: [squid-users] Allowing linked sites - NTLM and un-authenticated users

2012-04-04 Thread Eliezer Croitoru

On 04/04/2012 13:07, JC Putter wrote:

Jasper,

Sorry to jump in here as the email was addressed to Amos,

We run a configuration very similar to what you want, we use NTLM auth
with squid and dansguardian,

Client>  dansguardian>  Squid>  internet

and for cases which don't have any dansguardian in place, what about an 
external_acl helper that can help with AD integration?


Regards,
Eliezer

Dangurdian has the capability to filter traffic based on the username,
there is a perl script also available which can pull the usernames from
your AD group into a specified filter group.

So we have different filter groups for different users..

Hope it helps.




--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: [squid-users] Squid and FTP

2012-04-05 Thread Eliezer Croitoru

On 05/04/2012 10:25, Colin Coe wrote:

On Wed, Apr 4, 2012 at 7:40 PM, Amos Jeffries  wrote:

On 4/04/2012 6:01 p.m., Eliezer Croitoru wrote:


On 04/04/2012 08:12, Colin Coe wrote:


Hi all

I'm trying to get our squid proxy server to allow clients to do
outbound FTP.  The problem is that our corporate proxy uses tcp/8200
for http/https traffic and port 221 for FTP traffic.

Tailing the squid logs I see that squid is attempting to send all FTP
requests direct instead of going through the corporate proxy.

Any ideas how I'd configure squid to use the corp proxy for FTP
instead of going direct?

Thanks

CC


if you have parent proxy you should use the never_direct acl.



acl ftp_ports port 21



Make that "20 21" (note the space between)


Amos


Hi all

I've made changes based on these suggestions but it still doesn't
work.  My squid.conf looks like:
---
cache_peer 172.22.0.7 parent 8200 0 default no-query no-netdb-exchange
proxy-only no-digest no-delay name=other
cache_peer 172.22.0.7 parent 221 0 default no-query no-netdb-exchange
proxy-only no-digest no-delay  name=ftp

cache_dir ufs /var/cache/squid 4900 16 256

http_port 3128

hierarchy_stoplist cgi-bin ?

refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern .   0   20% 4320

acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1

acl localnet src 10.0.0.0/8 # RFC 1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC 1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC 1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly
plugged) machines

acl ftp_ports port 21 20

acl SSL_ports port 443 21 20
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

cache_peer_access ftp allow ftp_ports
cache_peer_access ftp deny all
never_direct allow ftp_ports
cache_peer_access other deny ftp_ports

acl Prod dst 172.22.106.0/23
acl Prod dst 172.22.176.0/23
acl Dev dst 172.22.102.0/23

acl BOM dstdomain .bom.gov.au
cache deny BOM

always_direct allow Dev
always_direct allow Prod
never_direct allow all

http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
http_access allow localnet
http_access deny all
---

On the proxy server, when I do a 'tcpdump host client and port 3128' I
get nothing more than
---
15:22:19.515518 IP 172.22.106.23.48052>  172.22.106.10.3128: Flags
[S], seq 2995762959, win 5840, options [mss 1460,sackOK,TS val
1681190449 ecr 0,nop,wscale 7], length 0
15:22:19.515567 IP 172.22.106.10.3128>  172.22.106.23.48052: Flags
[S.], seq 1966725410, ack 2995762960, win 14480, options [mss
1460,sackOK,TS val 699366121 ecr 1681190449], length 0
15:22:19.515740 IP 172.22.106.23.48052>  172.22.106.10.3128: Flags
[.], ack 1, win 5840, options [nop,nop,TS val 1681190449 ecr
699366121], length 0
15:23:49.606087 IP 172.22.106.23.48052>  172.22.106.10.3128: Flags
[F.], seq 1, ack 1, win 5840, options [nop,nop,TS val 1681280540 ecr
699366121], length 0
15:23:49.606163 IP 172.22.106.10.3128>  172.22.106.23.48052: Flags
[.], ack 2, win 14480, options [nop,nop,TS val 699456212 ecr
1681280540], length 0
15:23:49.606337 IP 172.22.106.10.3128>  172.22.106.23.48052: Flags
[F.], seq 1, ack 2, win 14480, options [nop,nop,TS val 699456212 ecr
1681280540], length 0
15:23:49.606465 IP 172.22.106.23.48052>  172.22.106.10.3128: Flags
[.], ack 2, win 5840, options [nop,nop,TS val 1681280540 ecr
699456212], length 0
---

Nothing goes into the access.log file from this connection either.


so what is your problem now?
that nothing goes into the access log?
let's go two steps back.
i didn't check thoroughly, but you do have:

acl Prod dst 172.22.106.0/23
acl Prod dst 172.22.176.0/23
acl Dev dst 172.22.102.0/23

always_direct allow Dev
always_direct allow Prod

and if you don't get anything in the access log it probably means that 
the clients are not connecting to the server.

how are you directing the ftp clients to the squid proxy server?
you do know that squid does not intercept the ftp protocol by itself?
there was some kind of ftp interception tool as far as i remember.

so, just a sec: state your goals again and what you have done so far.

Regards,
Eliezer

Any ideas?

CC




--
Eliezer Croitoru
h

Re: Fwd: [squid-users] Squid and FTP

2012-04-05 Thread Eliezer Croitoru

On 05/04/2012 12:14, Colin Coe wrote:

Oops, and send to list.

On Thu, Apr 5, 2012 at 4:26 PM, Eliezer Croitoru  wrote:

On 05/04/2012 10:25, Colin Coe wrote:


On Wed, Apr 4, 2012 at 7:40 PM, Amos Jeffries
  wrote:


On 4/04/2012 6:01 p.m., Eliezer Croitoru wrote:



On 04/04/2012 08:12, Colin Coe wrote:



Hi all

I'm trying to get our squid proxy server to allow clients to do
outbound FTP.  The problem is that our corporate proxy uses tcp/8200
for http/https traffic and port 221 for FTP traffic.

Tailing the squid logs I see that squid is attempting to send all FTP
requests direct instead of going through the corporate proxy.

Any ideas how I'd configure squid to use the corp proxy for FTP
instead of going direct?

Thanks

CC


if you have parent proxy you should use the never_direct acl.



acl ftp_ports port 21




Make that "20 21" (note the space between)


Amos



Hi all

I've made changes based on these suggestions but it still doesn't
work.  My squid.conf looks like:
---
cache_peer 172.22.0.7 parent 8200 0 default no-query no-netdb-exchange
proxy-only no-digest no-delay name=other
cache_peer 172.22.0.7 parent 221 0 default no-query no-netdb-exchange
proxy-only no-digest no-delay  name=ftp

cache_dir ufs /var/cache/squid 4900 16 256

http_port 3128

hierarchy_stoplist cgi-bin ?

refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern .   0   20% 4320

acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1

acl localnet src 10.0.0.0/8 # RFC 1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC 1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC 1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly
plugged) machines

acl ftp_ports port 21 20

acl SSL_ports port 443 21 20
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

cache_peer_access ftp allow ftp_ports
cache_peer_access ftp deny all
never_direct allow ftp_ports
cache_peer_access other deny ftp_ports

acl Prod dst 172.22.106.0/23
acl Prod dst 172.22.176.0/23
acl Dev dst 172.22.102.0/23

acl BOM dstdomain .bom.gov.au
cache deny BOM

always_direct allow Dev
always_direct allow Prod
never_direct allow all

http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
http_access allow localnet
http_access deny all
---

On the proxy server, when I do a 'tcpdump host client and port 3128' I
get nothing more than
---
15:22:19.515518 IP 172.22.106.23.48052>172.22.106.10.3128: Flags
[S], seq 2995762959, win 5840, options [mss 1460,sackOK,TS val
1681190449 ecr 0,nop,wscale 7], length 0
15:22:19.515567 IP 172.22.106.10.3128>172.22.106.23.48052: Flags
[S.], seq 1966725410, ack 2995762960, win 14480, options [mss
1460,sackOK,TS val 699366121 ecr 1681190449], length 0
15:22:19.515740 IP 172.22.106.23.48052>172.22.106.10.3128: Flags
[.], ack 1, win 5840, options [nop,nop,TS val 1681190449 ecr
699366121], length 0
15:23:49.606087 IP 172.22.106.23.48052>172.22.106.10.3128: Flags
[F.], seq 1, ack 1, win 5840, options [nop,nop,TS val 1681280540 ecr
699366121], length 0
15:23:49.606163 IP 172.22.106.10.3128>172.22.106.23.48052: Flags
[.], ack 2, win 14480, options [nop,nop,TS val 699456212 ecr
1681280540], length 0
15:23:49.606337 IP 172.22.106.10.3128>172.22.106.23.48052: Flags
[F.], seq 1, ack 2, win 14480, options [nop,nop,TS val 699456212 ecr
1681280540], length 0
15:23:49.606465 IP 172.22.106.23.48052>172.22.106.10.3128: Flags
[.], ack 2, win 5840, options [nop,nop,TS val 1681280540 ecr
699456212], length 0
---

Nothing goes into the access.log file from this connection either.


So what is your problem now?
That nothing goes into the access log?
Let's go two steps back.
I didn't check, but you do have:


acl Prod dst 172.22.106.0/23
acl Prod dst 172.22.176.0/23
acl Dev dst 172.22.102.0/23

always_direct allow Dev
always_direct allow Prod

If you don't get anything in the access log, it probably means that the
clients are not connecting to the server.
How are you directing the FTP clients to the Squid proxy server?
You do know that Squid does not intercept the FTP protocol by itself?
There was some kind of FTP interception tool, as far as I remember.

Re: Fwd: [squid-users] Squid and FTP

2012-04-05 Thread Eliezer Croitoru

On 05/04/2012 14:51, Colin Coe wrote:



OK, I did
export ftp_proxy=http://benpxy1p:3128
wget ftp://ftp2.bom.gov.au/anon/gen/fwo
--2012-04-05 19:43:38--  ftp://ftp2.bom.gov.au/anon/gen/fwo
Resolving benpxy1p... 172.22.106.10
Connecting to benpxy1p|172.22.106.10|:3128... connected.
Proxy request sent, awaiting response... ^C

An entry appeared in access.log only after I hit ^C.

Changing ftp_proxy to ftp://benpxy1p:3128 did not change anything.

CC

Well, if an access.log entry appears, it means that the client is
contacting the Squid server.

Did you notice that the size of this listing/directory is about 1.8 MB?
Take something simple such as:
ftp://ftp.freebsd.org/pub
It should be about 2.9 KB.
Then, if it doesn't complete within 10 seconds, try going without the upstream proxies;
maybe something is set up wrong on the cache_peer.
There are debug options with a lot of output from Squid that can help
narrow down the problem.

But I would start from minimal settings and work up:
use only one peer, and without a name;
just use the IP for the cache_peer ACLs.
You can use the debug sections:
http://wiki.squid-cache.org/KnowledgeBase/DebugSections
to get more out of them.
Use them like this:
debug_options ALL,1 section,verbosity_level
debug_options ALL,1 9,6

There are a couple of sections that will provide you with more network-
layer info and help you find the source of the problem.
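For example (the section numbers are taken from the DebugSections page above, where 9 is the FTP code and 44 the peer selection; adjust the verbosity to taste):

debug_options ALL,1 9,6 44,3

Then repeat the wget test and watch what gets logged.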


To watch the log, tail the cache.log file.

Well, I gave you kind of the worst-case scenario I could think of.
If you need more help, I'm here.

Regards,
Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: Fwd: [squid-users] Squid and FTP

2012-04-05 Thread Eliezer Croitoru

On 05/04/2012 16:21, Colin Coe wrote:

On Thu, Apr 5, 2012 at 8:32 PM, Eliezer Croitoru  wrote:

On 05/04/2012 14:51, Colin Coe wrote:




OK, I did
export ftp_proxy=http://benpxy1p:3128
wget ftp://ftp2.bom.gov.au/anon/gen/fwo
--2012-04-05 19:43:38--  ftp://ftp2.bom.gov.au/anon/gen/fwo
Resolving benpxy1p... 172.22.106.10
Connecting to benpxy1p|172.22.106.10|:3128... connected.
Proxy request sent, awaiting response... ^C

An entry appeared in access.log only after I hit ^C.

Changing ftp_proxy to ftp://benpxy1p:3128 did not change anything.

CC


Well, if an access.log entry appears, it means that the client is contacting
the Squid server.
Did you notice that the size of this listing/directory is about 1.8 MB?
Take something simple such as:
ftp://ftp.freebsd.org/pub
It should be about 2.9 KB.
Then, if it doesn't complete within 10 seconds, try going without the upstream proxies;
maybe something is set up wrong on the cache_peer.
There are debug options with a lot of output from Squid that can help narrow down
the problem.
But I would start from minimal settings and work up:
use only one peer, and without a name;
just use the IP for the cache_peer ACLs.
You can use the debug sections:
http://wiki.squid-cache.org/KnowledgeBase/DebugSections
to get more out of them.
Use them like this:
debug_options ALL,1 section,verbosity_level
debug_options ALL,1 9,6

There are a couple of sections that will provide you with more network-layer
info and help you find the source of the problem.

To watch the log, tail the cache.log file.

Well, I gave you kind of the worst-case scenario I could think of.
If you need more help, I'm here.

Regards,
Eliezer



As a test I pointed the client at the corporate proxy.

# export ftp_proxy=http://172.22.0.7:221
# wget ftp://ftp2.bom.gov.au/anon/gen/fwo/IDY02128.dat
--2012-04-05 20:43:53--  ftp://ftp2.bom.gov.au/anon/gen/fwo/IDY02128.dat
Connecting to 172.22.0.7:221... connected.
Proxy request sent, awaiting response... 200 No headers, assuming HTTP/0.9
Length: unspecified
Saving to: “IDY02128.dat”

[
 <=>
] 232 --.-K/s   in 2m 0s

2012-04-05 20:45:52 (1.94 B/s) - “IDY02128.dat” saved [232]

It took a while but it definitely works.  I added the debug lines to
the squid.conf (and restarted).  When pointing the client at the squid
server (for doing the FTP), there were no additional lines logged in
either cache.log or access.log.

Again, doing a tcpdump on the squid server shows the client _is_
connecting to the squid server.

CC


As I was saying, it's not about whether the client is connecting to the Squid server,
but what happens from Squid out to the world.

Try disabling the cache_peer settings on Squid:
use Squid as a regular proxy, without going to the parent BlueCoat,
and see how it works,
just to check whether you have a problem in the Squid settings that is not
related to the cache_peer settings.


As you know, I and many other people are using Squid for FTP and it works
with no problem.

I can't point to the exact point of failure in your setup, but one
thing I do know:

I am using 3 cache peers and it works excellently for me.
Just for you, I will put up a setup to show how my basic Squid settings
work with a parent proxy (it will take some time).

Most likely, if at any point you see an access.log entry, it means that
something is not configured right on your Squid.


Try the following.
In the hosts file, add the entries:
172.22.0.7  ftp_proxy
172.22.0.7  http_proxy

Then in squid.conf add:
cache_peer ftp_proxy parent 221 0 no-query no-digest proxy-only
cache_peer_access ftp_proxy allow ftp_ports
cache_peer_access ftp_proxy deny all

cache_peer http_proxy parent 8200 0 no-query no-digest proxy-only
cache_peer_access http_proxy deny ftp_ports
cache_peer_access http_proxy allow all

#remove these:
#always_direct allow Dev
#always_direct allow Prod

#and add only:
never_direct allow all


Regards,
Eliezer


--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: [squid-users] Strange problem with 503 errors.

2012-04-06 Thread Eliezer Croitoru

On 06/04/2012 07:55, Michael D. Setzer II wrote:

My college recently switched from ISP provided IPs to its own.
202.128.71.x,  202.128.72.x, 202.128.73.x and 202.128.79.x from
ISP to there own 203.215.52.0/22.

The switch seemed to go fine, but have found a few sites that are
giving us 503 errors.

I have gotten around the problem by using rinetd to redirect a port
to the squid server on my home machine for these sites, and one
of our ISPs has given access to there proxy server so now using
that, but ideally would like to come up with what is causing the
problem.

I have squid servers on campus, and was wondering if there was
a way to have them in the event of not being able to connect to a
site, it they could automatically try going thru the ISPs proxy?

The traceroutes to the sites that don't work get to the point right
before they should, 1 hop less. The latest problem site is strange.

Going to tinyurl.com sometimes gives the IP address
64.62.243.89, which works fine. But other times it gives
64.62.243.91, which doesn't from the college. Both work fine from
home. In running wireshark, the 89 address will send and receive
pings and responses, but the 91 only shows the sends with no
receive responses.

Well, the problem is at the network layer (IP): if you can't ping, you
can't connect.

So your college needs to contact its ISP and get some answers about it.
The more detailed information it has about the problem, the further it
will get.
Also, it should reach someone who actually knows the ISP's
network, and not just talk to someone who has no
privileges on, or knowledge of, the system.


Regards,
Eliezer


nmap shows no open ports on the 91 from college, but from home
it does? Perhaps someone might have a way of figuring out what
the issue is. Our IT guys say they are not blocking these IPs, and
even they have found sites that don't work for them.

Thanks.

+--+
   Michael D. Setzer II -  Computer Science Instructor
   Guam Community College  Computer Center
   mailto:mi...@kuentos.guam.net
   mailto:msetze...@gmail.com
   http://www.guam.net/home/mikes
   Guam - Where America's Day Begins
   G4L Disk Imaging Project maintainer
   http://sourceforge.net/projects/g4l/
+--+

http://setiathome.berkeley.edu (Original)
Number of Seti Units Returned:  19,471
Processing time:  32 years, 290 days, 12 hours, 58 minutes
(Total Hours: 287,489)

BOINC@HOME CREDITS
SETI12029945.909740   |   EINSTEIN 7623371.809852
ROSETTA  4388616.446766   |   ABC     12124980.377137




--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: [squid-users] Youtube, storeurl and 302 redirection

2012-04-09 Thread Eliezer Croitoru

On 09/04/2012 04:34, Paolo Malfatti wrote:

Hi, i’m using storeurl.pl script to cache youtube videos files and I
followed instructions in:

http://wiki.squid-cache.org/ConfigExamples/DynamicContent/YouTube#ConfigExamples.2BAC8-DynamicContent.2BAC8-YouTube.2BAC8-Discussion.Fixed

This patch don't worked for me, so I changed it a bit:
1) I'd like to ignore redirects, only when request is a storeurl request.
2) I dont like to force a MISS to "304 NOT MODIFIED" responses

What do you think about?
Thanks

Paolo Malfatti
CIDIS Camiri

Index: src/client_side.c
===
--- src/client_side.c (revision 134)
+++ src/client_side.c (working copy)
@@ -2408,6 +2408,17 @@
 is_modified = 0;
 }
 }
+ /* bug fix for 302 moved_temporarily loop bug when using storeurl*/
+ if (r->store_url && rep->sline.status >= 300 && rep->sline.status <400 && rep->sline.status != 304) {
+ if (httpHeaderHas(&e->mem_obj->reply->header, HDR_LOCATION)) {
+ debug(33, 2) ("clientCacheHit: Redirect Loop Detected:%s\n",http->uri);
+ http->log_type = LOG_TCP_MISS;
+ clientProcessMiss(http);
+ return;
+ }
+ }
+ /* bug fix end here*/
 stale = refreshCheckHTTPStale(e, r);
 debug(33, 2) ("clientCacheHit: refreshCheckHTTPStale returned %d\n",stale);
 if (stale == 0) {


On what version of Squid are you trying to do this?
It works poorly, and only on Squid 2.x.
you can try this:
http://code.google.com/p/youtube-cache/
with nginx or PHP.
It works like a charm.

Regards,
Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: [squid-users] Youtube, storeurl and 302 redirection

2012-04-10 Thread Eliezer Croitoru

On 10/04/2012 03:41, Paolo Malfatti wrote:

Yes, but i saw that the secuence of parameters changes : sometimes "id"
is before "range" and sometimes after.

-Mensaje original- From: Hasanen AL-Bana
Sent: Monday, April 09, 2012 2:33 PM
To: Paolo Malfatti
Subject: Re: [squid-users] Youtube, storeurl and 302 redirection

ahh...so if we include the range parameter in the storeurl , we might
get something working.
What you should do is use some regex patterns to limit the usage of the
store-url rewriter.

This will give you much more effective URL rewriting.
Don't ever cache the "range" URLs.
And for this, nginx is very, very good.
I must say that nginx did the job very well:
in the case of a redirection it will not cache it,
and so on in many cases I have tested.
I have used two Squid instances: the main one on the 3.2 branch, and a second
instance on 2.7 for use with store_url.

There is also a "redirect" argument that is being used by some ISPs.
You can try to analyze the URLs a bit more to get some accurate data on it.
I spent a long time trying a couple of methods
with store_url, and I wrote my own (I don't know where it is now)
store_url_rewrite helper in Java.
It was indeed a hell of a project, but nginx does it so well that I just
used it.
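As a rough sketch of what I mean by limiting the store-url usage on the squid-2.7 side (the ACL names are mine, and the regex is only meant to catch the video URLs discussed here):

acl yt_video url_regex -i (youtube|googlevideo)\.com/videoplayback
acl yt_range url_regex -i [?&]range=
storeurl_access deny yt_range
storeurl_access allow yt_video
storeurl_access deny all

This way only plain video requests reach the store_url helper and the "range" requests are left alone.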


Regards,
Eliezer



On Mon, Apr 9, 2012 at 7:10 PM, Paolo Malfatti  wrote:

I know, but when the flash player asks a segment, it puts a range
parameter
in the GET request. The script captures and saves these fragments in the
cache individually.


From: Hasanen AL-Bana
Sent: Monday, April 09, 2012 1:19 AM
To: Paolo Malfatti
Cc: squid-users@squid-cache.org

Subject: Re: [squid-users] Youtube, storeurl and 302 redirection

you will face a major problem , recently youtube decided to split videos
into 1.7MB segments,making it harder for people to download the streamed
videos, most of the newly uploaded videos are coming
into segments now, causing squid to cache only one segment and it thinks
that the video has been fully downloaded, this will either give you
error in
flash player or you might get few seconds of the video
only.





On Mon, Apr 9, 2012 at 4:34 AM, Paolo Malfatti  wrote:

Hi, i’m using storeurl.pl script to cache youtube videos files and I
followed instructions in:


http://wiki.squid-cache.org/ConfigExamples/DynamicContent/YouTube#ConfigExamples.2BAC8-DynamicContent.2BAC8-YouTube.2BAC8-Discussion.Fixed

This patch don't worked for me, so I changed it a bit:
1) I'd like to ignore redirects, only when request is a storeurl
request.
2) I dont like to force a MISS to "304 NOT MODIFIED" responses

What do you think about?
Thanks

Paolo Malfatti
CIDIS Camiri
Index: src/client_side.c
===
--- src/client_side.c (revision 134)
+++ src/client_side.c (working copy)
@@ -2408,6 +2408,17 @@
is_modified = 0;
}
}
+ /* bug fix for 302 moved_temporarily loop bug when using
storeurl*/
+ if (r->store_url && rep->sline.status >= 300 && rep->sline.status
<400 && rep->sline.status != 304) {
+ if (httpHeaderHas(&e->mem_obj->reply->header, HDR_LOCATION))
{
+ debug(33, 2) ("clientCacheHit: Redirect Loop
Detected:%s\n",http->uri);
+ http->log_type = LOG_TCP_MISS;+ clientProcessMiss(http);
+ return;
+ }
+ }
+ /* bug fix end here*/
stale = refreshCheckHTTPStale(e, r);
debug(33, 2) ("clientCacheHit: refreshCheckHTTPStale returned
%d\n",stale);
if (stale == 0) {












--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: Fwd: [squid-users] Squid and FTP

2012-04-14 Thread Eliezer Croitoru

On 14/04/2012 08:34, Colin Coe wrote:

On Thu, Apr 5, 2012 at 10:07 PM, Eliezer Croitoru  wrote:

On 05/04/2012 16:21, Colin Coe wrote:


On Thu, Apr 5, 2012 at 8:32 PM, Eliezer Croitoru
  wrote:


On 05/04/2012 14:51, Colin Coe wrote:




OK, I did
export ftp_proxy=http://benpxy1p:3128
wget ftp://ftp2.bom.gov.au/anon/gen/fwo
--2012-04-05 19:43:38--  ftp://ftp2.bom.gov.au/anon/gen/fwo
Resolving benpxy1p... 172.22.106.10
Connecting to benpxy1p|172.22.106.10|:3128... connected.
Proxy request sent, awaiting response... ^C

An entry appeared in access.log only after I hit ^C.

Changing ftp_proxy to ftp://benpxy1p:3128 did not change anything.

CC


Well, if an access.log entry appears, it means that the client is contacting
the Squid server.
Did you notice that the size of this listing/directory is about 1.8 MB?
Take something simple such as:
ftp://ftp.freebsd.org/pub
It should be about 2.9 KB.
Then, if it doesn't complete within 10 seconds, try going without the upstream
proxies;
maybe something is set up wrong on the cache_peer.
There are debug options with a lot of output from Squid that can help narrow down
the problem.
But I would start from minimal settings and work up:
use only one peer, and without a name;
just use the IP for the cache_peer ACLs.
You can use the debug sections:
http://wiki.squid-cache.org/KnowledgeBase/DebugSections
to get more out of them.
Use them like this:
debug_options ALL,1 section,verbosity_level
debug_options ALL,1 9,6

There are a couple of sections that will provide you with more network-layer
info and help you find the source of the problem.

To watch the log, tail the cache.log file.

Well, I gave you kind of the worst-case scenario I could think of.
If you need more help, I'm here.

Regards,
Eliezer



As a test I pointed the client at the corporate proxy.

# export ftp_proxy=http://172.22.0.7:221
# wget ftp://ftp2.bom.gov.au/anon/gen/fwo/IDY02128.dat
--2012-04-05 20:43:53--  ftp://ftp2.bom.gov.au/anon/gen/fwo/IDY02128.dat
Connecting to 172.22.0.7:221... connected.
Proxy request sent, awaiting response... 200 No headers, assuming HTTP/0.9
Length: unspecified
Saving to: “IDY02128.dat”

[
 <=>
] 232 --.-K/s   in 2m 0s

2012-04-05 20:45:52 (1.94 B/s) - “IDY02128.dat” saved [232]

It took a while but it definitely works.  I added the debug lines to
the squid.conf (and restarted).  When pointing the client at the squid
server (for doing the FTP), there were no additional lines logged in
either cache.log or access.log.

Again, doing a tcpdump on the squid server shows the client _is_
connecting to the squid server.

CC



as i was saying...it's not about if it's connecting to the squid server but
what happens from squid to the world.
try to disable the cache_peer settings on squid...
try to use squid as regular proxy without going to the parent bluecoat and
see how it works.
just to see if you do have any problem on squid settings that are not
related to the cache_peer settings.

as you know i and many more people are using squid for ftp and it works with
no problem.

i cant point exactly about the point of failure in your setup but one thing
i do know..
i am using 3 cache peers and it works excellent for me.
just for you i will put a setup to see how my basic settings for squid works
with a parent proxy. (it will take some time )

most likely that if in any point you see access log entry it means that you
are not configuring something right on your squid.

try the next:
in hosts file add the entry:
172.22.0.7  ftp_proxy
172.22.0.7  http_proxy

then in squid.conf add:
cache_peer ftp_proxy parent 221 0 no-query no-digest proxy-only
cache_peer_access ftp_proxy allow ftp_ports
cache_peer_access ftp_proxy deny all

cache_peer http_proxy parent 8200 0 no-query no-digest proxy-only
cache_peer_access http_proxy deny ftp_ports
cache_peer_access http_proxy allow all

#remove the :
#always_direct allow Dev
#always_direct allow Prod

#and add only:
never_direct allow all



Regards,
Eliezer



Hi Eliezer (and thanks for your patience)

I think the problem has been with the BlueCoat the whole time.  The
BlueCoat admin has setup a service account for me and I've configured
squid so that all FTP requests are served through the cache_parent
hard coded with the service account details.

It's working now, so we're going to leave it like this.

Thanks again for your help and patience.

CC


I'm happy you solved the problem.
If you need anything, I'm always glad to help.

Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: [squid-users] Re: Block access to consumer accounts and services while allowing access to Google Apps for your organization

2012-04-17 Thread Eliezer Croitoru

On 17/04/2012 11:57, Raul Caballero Girol wrote:

Hello everybody,

I need to implement this procedure:

http://support.google.com/a/bin/answer.py?hl=en&answer=1668854

Is possible with squid?. I have tried a lot of posibilities but it doesn't work

Raul Caballero Girol


What is your setup?
Have you tried to use "ssl-bump"? It is a requirement for this to
work.


What have you tried to do with Squid until now?
Please post the squid.conf.

Regards,
Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: [squid-users] Re: Youtube, storeurl and 302 redirection

2012-04-21 Thread Eliezer Croitoru

On 21/04/2012 22:52, babajaga wrote:

the above link is a request for 1.7M video segment I was tracing this

morning.<

I an just facing the same problem for my installation, also using the
patched squid-2.7 setup and the URL-rewriter.

After 1.7MB download of the video, the video screen of youtube displays an
error.

Did you find any solution ?

Will the nginx/squid setup solve this problem ?

--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Youtube-storeurl-and-302-redirection-tp4541941p4577027.html
Sent from the Squid - Users mailing list archive at Nabble.com.

It worked for me for a long time.
Sometimes I found that a file download got corrupted, so it's not
100% in any case.


Regards,
Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: [squid-users] Re: Youtube, storeurl and 302 redirection

2012-04-22 Thread Eliezer Croitoru

On 22/04/2012 11:48, x-man wrote:

I think youtube changed something in the player behavior recently, even now
for me http://code.google.com/p/youtube-cache/ is not working with squid
2.7.

Is it working for you now?

--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Youtube-storeurl-and-302-redirection-tp4541941p4577847.html
Sent from the Squid - Users mailing list archive at Nabble.com.

It was working last month, so I'm pretty sure they didn't change a thing.
I will try it again in the next day or so, just to make sure it still works.

Regards,
Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: [squid-users] Re: Youtube, storeurl and 302 redirection

2012-04-22 Thread Eliezer Croitoru

On 22/04/2012 11:48, x-man wrote:

I think youtube changed something in the player behavior recently, even now
for me http://code.google.com/p/youtube-cache/ is not working with squid
2.7.

Is it working for you now?

--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Youtube-storeurl-and-302-redirection-tp4541941p4577847.html
Sent from the Squid - Users mailing list archive at Nabble.com.
Yes, I have seen some changes in the behavior of the YT player, but I wasn't
sure they had any effect on the caching.
They did indeed change things so that the YT player won't load/download more
data of the video file, in order to preserve bandwidth.

This is causing the problem, and I will try to analyze it a bit.
If someone has done any research on it, I will be glad to hear about it.

Regards,
Eliezer


--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: [squid-users] Re: Youtube, storeurl and 302 redirection

2012-04-22 Thread Eliezer Croitoru

On 22/04/2012 15:25, Eliezer Croitoru wrote:

On 22/04/2012 11:48, x-man wrote:

I think youtube changed something in the player behavior recently,
even now
for me http://code.google.com/p/youtube-cache/ is not working with squid
2.7.

Is it working for you now?

--
View this message in context:
http://squid-web-proxy-cache.1019090.n4.nabble.com/Youtube-storeurl-and-302-redirection-tp4541941p4577847.html

Sent from the Squid - Users mailing list archive at Nabble.com.

Yes, I have seen some changes in the behavior of the YT player, but I wasn't
sure they had any effect on the caching.
They did indeed change things so that the YT player won't load/download more
data of the video file, in order to preserve bandwidth.
This is causing the problem, and I will try to analyze it a bit.
If someone has done any research on it, I will be glad to hear about it.

Regards,
Eliezer


Well, this issue was easily resolved by preventing a client abort from
interrupting the download, using the directive:

proxy_ignore_client_abort on;
in nginx.

Also, you must understand that the caching is being done by nginx
and not by Squid, so you must apply some cache-storage management
mechanism that can erase the cached YouTube files.
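A simple way to do that (the store path here is only an example, and the 30-day cutoff is arbitrary) is a periodic find job, run from cron, that removes files which have not been accessed for a while:

find /var/cache/nginx/youtube -type f -atime +30 -delete

nginx's proxy_store itself never expires or removes anything, so without something like this the store only grows.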


Regards,
Eliezer


--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: [squid-users] Re: Youtube, storeurl and 302 redirection

2012-04-22 Thread Eliezer Croitoru

On 22/04/2012 11:48, x-man wrote:

I think youtube changed something in the player behavior recently, even now
for me http://code.google.com/p/youtube-cache/ is not working with squid
2.7.

Is it working for you now?

--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Youtube-storeurl-and-302-redirection-tp4541941p4577847.html
Sent from the Squid - Users mailing list archive at Nabble.com.

OK, as of now the cache with nginx as a cache_peer works for the following sites:
youtube
imdb(480p+720p) (not the rtsp\low quality)
sourceforge downloads (all mirrors saved into one cache object)
vimeo
bliptv
facebook

(you must apply good restriction ACLs to prevent the wrong URLs from
getting into the nginx cache)
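As a sketch of the kind of restriction I mean (the peer port is an example, and the regex should list only the URLs you actually want nginx to store):

cache_peer 127.0.0.1 parent 8081 0 proxy-only no-query no-digest
acl nginx_cacheable url_regex -i (youtube|googlevideo)\.com/videoplayback
cache_peer_access 127.0.0.1 allow nginx_cacheable
cache_peer_access 127.0.0.1 deny all

Anything that does not match is handled by Squid as usual and never lands in the nginx store.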


I have seen that videocache (cachevideos) claims to cache some sites that
are using RTSP; unless they developed some special software, those
cannot be cached by a web server/HTTP.


It's very simple to use some Perl regex in nginx to make a file cacheable.

So break.com and Wrzuta.pl can easily be cached as well.


A sample pattern for a wrzuta.pl video:
http://c.wrzuta.pl/wv15404/e14f5b450003f3b24f9333d1/0

# the trailing 0 is the range; if I seek within the video it will look like this:

http://c.wrzuta.pl/wv15404/e14f5b450003f3b24f9333d1/4479241
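In a store_url helper, that pattern can be collapsed to one key per clip; a minimal Ruby sketch (the .SQUIDINTERNAL fake host is the same convention the youtube-cache project uses, and the regex only covers this one URL layout):

url = "http://c.wrzuta.pl/wv15404/e14f5b450003f3b24f9333d1/4479241"
if url =~ %r{^http://c\.wrzuta\.pl/(\w+)/(\w+)/[0-9]+$}
  # collapse every range request of the same clip onto one store key
  url = "http://c.wrzuta.pl.SQUIDINTERNAL/" + $1 + "_" + $2
end
puts url

so /0 and /4479241 both map to the same cached object.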

Regards,
Eliezer


--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: [squid-users] Re: DNS & Squid tree with parent - child

2012-04-23 Thread Eliezer Croitoru

On 23/04/2012 16:42, anita wrote:

Hi Amos,

Thanks for the reply.
I have another query now.
If the squid is configured in the transparent mode,
a. if a url say yahoo.com is requested through a browser like IE to squid,
will the IE itself initiate a DNS lookup before forwarding the request to
squid or will it simply forward the request to the squid and let the squid
do the DNS look up?

Hey anita,
The end client will do a dns lookup in this case.

Regards,
Eliezer


Thanks.
-Anita

--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/DNS-Squid-tree-with-parent-child-tp4573394p4580445.html
Sent from the Squid - Users mailing list archive at Nabble.com.



--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: [squid-users] slow internet browsing.

2012-04-23 Thread Eliezer Croitoru

On 23/04/2012 18:38, Muhammad Yousuf Khan wrote:

well i have been experiencing slow Internet browsing. not very slow
but comparatively slower then IPCOP firewall. i can not understand how
come i diagnose the issue.
i mean. i increase the RAM , i checked the DNS every thing is fine but
my browser stuck at "connecting" ones it start download it do it fast
but then stop for something then start. i am not getting the clear
picture. can anyone help

i am suing debian 6.0.4  with 2.7 stable squid.

Thanks,

MYK

What is your exact problem? Slow downloads?
What is your Squid setup: transparent, or a regular forward proxy?
What browser are you using?
Do you have some Squid logs, or the squid.conf?
What DNS server are you using?

Regards,
Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: [squid-users] slow internet browsing.

2012-04-24 Thread Eliezer Croitoru
ow localhost
http_access deny all
icp_access allow localnet
icp_access deny all
http_port 3128
hierarchy_stoplist cgi-bin ?
access_log /var/log/squid/access.log squid
refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern (Release|Package(.gz)*)$0   20% 2880
refresh_pattern .   0   20% 4320
acl shoutcast rep_header X-HTTP09-First-Line ^ICY\s[0-9]
upgrade_http0.9 deny shoutcast
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache
extension_methods REPORT MERGE MKACTIVITY CHECKOUT
hosts_file /etc/hosts
coredump_dir /var/spool/squid

##ykhan squid redirection to squidguard

#redirect_program /usr/bin/squidGuard
#url_rewrite_program /usr/bin/squidGuard
#url_rewrite_children 5


On Mon, Apr 23, 2012 at 8:42 PM, Eliezer Croitoru  wrote:

On 23/04/2012 18:38, Muhammad Yousuf Khan wrote:


well i have been experiencing slow Internet browsing. not very slow
but comparatively slower then IPCOP firewall. i can not understand how
come i diagnose the issue.
i mean. i increase the RAM , i checked the DNS every thing is fine but
my browser stuck at "connecting" ones it start download it do it fast
but then stop for something then start. i am not getting the clear
picture. can anyone help

i am suing debian 6.0.4  with 2.7 stable squid.

Thanks,

MYK


what is your exact problem? slow downloads?
what is your squid setup?transparent ?regular forward proxy?
what browser are you using?
do you have some squid logs? or squid.conf?
what dns server are you using?

Regards,
Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il



--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: [squid-users] slow internet browsing.

2012-04-24 Thread Eliezer Croitoru

On 24/04/2012 18:14, Muhammad Yousuf Khan wrote:

ok i trim down config file  to this as you suggested of blocking
whitelist to local net.. let see how things work tommorw. ill update.
but block list is like 10MB big do you think it could be the
problem.as every query has to be matched with 10 MB database.

?

In any case, a 10 MB dstdomain list is a very bad idea from what I know.
One thing about dstdomain is that Squid must validate the requested domain's DNS
records, so it will take more bandwidth on DNS queries.
If you still don't have a local caching-only DNS server, this is the time
to add it.

I think that 10 MB of domains can be optimized into some basic dstdomain
lists plus some blacklist dstdom_regex patterns.

I think that a DB application for this amount of destination domains would be
much more effective.

You can also use squidGuard for that.

If you can share some (about 1 MB) of the destination domains from the whole list, I might be
able to try to optimize it somehow.
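A rough sketch of the direction I mean (the file names are only examples, and 127.0.0.1 assumes a local caching resolver):

dns_nameservers 127.0.0.1
acl aci_block_doms dstdomain "/blocklist/aci_list/time_block_domains"
acl aci_block_regex dstdom_regex -i "/blocklist/aci_list/time_block_patterns"
http_access deny aci_block_doms aci_working_hours aci_general
http_access deny aci_block_regex aci_working_hours aci_general

The point is to keep the plain-domain list as small as possible and push whole families of hosts into a few regex lines.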



Regards,
Eliezer





#-Allow All ACL-
acl aci_lan src 10.51.100.0/24
acl aci_general src 10.51.100.0/24

#-Assurety  Whitelist---
acl aci_whitelist  dstdomain "/blocklist/aci_list/whitelist"
http_access allow aci_whitelist aci_general

#--TimeDomainBlock
acl aci_dest dstdomain "/blocklist/aci_list/time_block_domains"

#--General Timing Normal Days Working hours--
acl aci_working_hours time MTWH 10:04-13:04
acl aci_working_hours time MTWH 14:04-18:04
#--General Timing-Friday
acl aci_working_hours time F 10:04-13:04
acl aci_working_hours time F 15:04-18:04

http_access deny  aci_dest aci_working_hours aci_general


On Tue, Apr 24, 2012 at 1:11 PM, Eliezer Croitoru  wrote:

Are you talking about the delay-pool rules?
Also, if this proxy is open to the internet, I would limit access
to port 3128 to the LAN only.
Your http_access rules are allowing anyone to use the proxy for the
whitelist.

Regards,
Eliezer



On 24/04/2012 09:06, Muhammad Yousuf Khan wrote:


ok i just disabled all the rules and it works for me now ill test
which rule is making a problem and let you know also.

Thanks

On Mon, Apr 23, 2012 at 11:20 PM, Muhammad Yousuf Khan
  wrote:


here is the log for bbc.co.uk . first and last msg of log

so you can see the time delay.

335205033.183841 10.51.100.240 TCP_MISS/200 24506 GET
http://www.bbc.co.uk/ - DIRECT/212.58.244.66 text/html
1335205057.936328 10.51.100.240 TCP_REFRESH_HIT/304 435 GET
http://static.bbci.co.uk/wwhomepage-3.5/1.0.41/img/broadcast-sprite.png
- DIRECT/80.239.148.70 image/png


On Mon, Apr 23, 2012 at 11:12 PM, Muhammad Yousuf Khan
  wrote:


Here you go with my squid.conf

acl all src all
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl SSL_ports port 443  # https
acl SSL_ports port 563  # snews
acl SSL_ports port 873  # rsync
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl Safe_ports port 631 # cups
acl Safe_ports port 873 # rsync
acl Safe_ports port 901 # SWAT
acl purge method PURGE
acl CONNECT method CONNECT

# sqstat
acl manager proto cache_object
acl webserver src 10.51.100.206/255.255.255.255
http_access allow manager webserver
http_access deny manager



# Skype
acl numeric_IPs dstdom_regex

^(([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+)|(\[([0-9af]+)?:([0-9af:]+)?:([0-9af]+)?\])):443
acl Skype_UA browser ^skype
acl validUserAgent browser \S+

# for cheetah only

#acl usman src 10.51.100.107
#delay_pools 1
#delay_class 1 1
#delay_parameters 1 22000/22000
#delay_access 1 allow usman



#-Allow All ACL-
acl aci_lan src 10.51.100.0/24
acl aci_general src 10.51.100.0/24


#My ip
acl my_ip src 10.51.100.240
http_access allow my_ip



# Testing delay pool
delay_pools 1
delay_class 1 1
delay_parameters 1 22000/1024
delay_access 1 allow aci_general




#-Assurety  Whitelist---
acl aci_whitelist  dstdomain "/blocklist/aci_list/whitelist"
http_access allow aci_whitelist

#--Senior Allow Domainlist--
acl aci_seniors dstdomain "/blocklist/aci_list/whitelist_seniors"
#-#See
implimentation in ACI implimentation section

#Assurety

[squid-users] anyone knows some info about youtube "range" parameter?

2012-04-24 Thread Eliezer Croitoru
As some people have been asking me recently about YouTube caching, I checked
again and found that YouTube changed their video URIs and added an
argument called "range" that is managed by the YouTube player.
The original URL/URI doesn't include a range, but the YouTube player uses
this argument to save bandwidth.

I can implement the caching with ranges on nginx, but I don't yet know
how the range works.

It can be based on user bandwidth or on a "fixed" chunk size.

If someone is up to the mission of analyzing it a bit more, so that the
"range" cache can be implemented, I will be happy to get some
help with it.


Thanks,
Eliezer


--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: [squid-users] anyone knows some info about youtube "range" parameter?

2012-04-25 Thread Eliezer Croitoru

On 25/04/2012 06:02, Amos Jeffries wrote:

On 25/04/2012 6:02 a.m., Eliezer Croitoru wrote:

as for some people asking me recently about youtube cache i have
checked again and found that youtube changed their video uris and
added an argument called "range" that is managed by the youtube player.
the original url\uri dosnt include range but the youtube player is
using this argument to save bandwidth.

i can implement the cahing with ranges on nginx but i dont know yet
the way that range works.
it can be based on user bandwidth or "fixed" size of chunkes.

if someone up to the mission of analyzing it a bit more to understand
it so the "range" cache will be implemented i will be happy to get
some help with it.

Thanks,
Eliezer




I took a look at it a while back...

I got as far as determining that the "range" was roughly byte-ranges as
per the HTTP spec BUT (and this is a huge BUT). Each response was
prefixed with some form of file intro bytes. Meaning the rages were not
strictly sub-bytes of some original object. At this point there is no
way for Squid to correctly generate the intro bytes, or to merge/split
these "ranges" for servicing other clients.

When used the transfer is relatively efficient, so the impact of
bypassing the storeurl cache feature is not too bad. The other option is
to re-write the URL without range and simply reply with the whole video
regardless. It is a nasty mapping problem with bandwidth waste either way.


They have changed something in the last month or so.
They were using a "begin" argument,
and now they are using "range=13-X", where 13 is the start.
I was also thinking of rewriting the address, because that works perfectly
in my testing.

I will update more later.

Eliezer

That was a year or two ago, so it may be worth re-investigating.

Amos



--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: [squid-users] anyone knows some info about youtube "range" parameter?

2012-04-25 Thread Eliezer Croitoru
imes it worth it.


About the id/itag/range order in the URL: because I was using nginx,
I didn't have this problem at all.

nginx uses the info from the URL arguments simply and smoothly.
Generally, store_url_rewrite has much more potential to be cache-
effective than nginx proxy_store, as nginx's proxy_store is a permanent
store mechanism without any freshness/time-limit calculation.
As of now, nginx has the option to integrate with Perl, which can be used
for many things such as request manipulation.

Another option I was considering is ICAP, to rewrite the URL or do some
other things.

But since nginx has been fine until now, I kept working with it.

Regards,
Eliezer




Ghassan

On 4/25/12, Eliezer Croitoru  wrote:




--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: [squid-users] anyone knows some info about youtube "range" parameter?

2012-04-25 Thread Eliezer Croitoru

On 25/04/2012 20:48, Hasanen AL-Bana wrote:

wouldn't be better if we save the video chunks ? youtube is streaming
files with 1.7MB flv chunks, youtube flash player knows how to merge
them and play themso the range start and end will alaways be the
same for the same video as long as user doesn't fast forward it or do
something nasty...even in that case , squid will just cache that
chunk...that is possible by rewriting the STORE_URL and including the
range start&  end


As I already gave a detailed answer to Ghassan:
I think that caching the chunks, if possible, is a pretty good thing.
I tried it with nginx but haven't had the chance to try it with
store_url_rewrite.

I hope to try it sometime next week.
Regards,
Eliezer


On Wed, Apr 25, 2012 at 8:39 PM, Ghassan Gharabli
  wrote:


Hello,

As i remember I already discussed this subject before mentioning that
Youtube several months ago added a new variable/URI "RANGE". I tried
to deny all URLs that comes with "RANGE" to avoid presenting the error
at Youtube Player butb tried to investigate more and came with a
solution like that :



# youtube 360p itag=34 ,480p itag=35 [ITAG/ID/RANGE]
  if 
(m/^http:\/\/([0-9.]{4}|.*\.youtube\.com|.*\.googlevideo\.com|.*\.video\.google\.com)\/.*(itag=[0-9]*).*(id=[a-zA-Z0-9]*).*(range\=[0-9\-]*)/)
{
print $x . "http://video-srv.youtube.com.SQUIDINTERNAL/"; . $3 . "&" .
$4 . "\n";

# youtube 360p itag=34 ,480p itag=35 [ID/ITAG/RANGE]
} elsif 
(m/^http:\/\/([0-9.]{4}|.*\.youtube\.com|.*\.googlevideo\.com|.*\.video\.google\.com)\/.*(id=[a-zA-Z0-9]*).*(itag=[0-9]*).*(range\=[0-9\-]*)/)
{
print $x . "http://video-srv.youtube.com.SQUIDINTERNAL/"; . $2 . "&" .
$4 . "\n";

# youtube 360p itag=34 ,480p itag=35 [RANGE/ITAG/ID]
} elsif 
(m/^http:\/\/([0-9.]{4}|.*\.youtube\.com|.*\.googlevideo\.com|.*\.video\.google\.com)\/.*(range\=[0-9\-]*).*(itag=[0-9]*).*(id=[a-zA-Z0-9]*)/)
{
print $x . "http://video-srv.youtube.com.SQUIDINTERNAL/"; . $4 . "&" .
$2 . "\n";
--

I already discovered that by rewriting them and save them as
videplayback?id=000&range=00-00 would solve the problem but
the thing is the cache folder would be increased faster because we are
not only saving one file as we are saving multiple files for one ID!.

AS for me , it saves alot of bandwidth but bigger cache . If you check
and analyze it more then you will notice same ID or same videop while
watching the link changes for example :

It starts [ITAG/ID/RANGE] then changes to [ID/ITAG/RANGE] and finally
to [RANGE/ITAG/ID] so with my script you can capture the whole
places!.


Ghassan

On 4/25/12, Eliezer Croitoru  wrote:




--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: [squid-users] anyone knows some info about youtube "range" parameter?

2012-04-26 Thread Eliezer Croitoru

On 26/04/2012 11:15, johan firdianto wrote:

This range behaviour related with error occured in youtube flash player ?
i'm using url rewrite if any range parameter in uri videos, i stripp off.
i can save the whole file of video to directory, by parsing store.log
and retrieve the same video using curl/wget.
The problem, when the same video is requested, and my rewrite script
issues 302  to the file that directory located (under web directory
using nginx).
I think the worst way to redirect to another cache object is to use
a 302 for applications that have cross-domain restrictions.
Every way I have tried a 302 redirect caused a problem for this
kind of website/app.
The best way is a "transparent" redirection using
Squid's own capabilities.
I will try to build a store_url_rewriter with range support and see
whether I get the same bad results I was having with nginx.

I will try to use Ruby or Perl for it;
I have good experience with both, so I will try them both.

Regards,
Eliezer


error occured message appears in flash player.
I refresh many times, still error occured, unless i choose different
quality (example: 480p or 240p).
any suggest ?


On Thu, Apr 26, 2012 at 2:41 PM, Christian Loth
  wrote:



--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: [squid-users] Re: how the big guys are doing it - Caching dynamic content with squid

2012-04-26 Thread Eliezer Croitoru

The big guys such as BlueCoat, or other Squid users?
I think BlueCoat has some business relationship with YouTube, and they also
have a very powerful framework and a lot of resources.
Many other Squid users are using "cachevideo", which uses the
"url_rewrite" option of Squid.

ICAP can do many things, but the problem is that you must program
something on top of ICAP.

I have used the ICAP server "GreasySpoon".
It can run Java, Ruby and JavaScript on any request and response.
I have tried to implement dynamic-content caching combined with the ICAP
server, but I need more programming knowledge to do whatever I wanted, so I
stopped working on it.

If someone is up to the mission, I think a lot of people will like it.

Regards,
Eliezer


On 26/04/2012 20:53, Marcus Kool wrote:



On 04/26/2012 01:07 PM, x-man wrote:

Hi Marcus, thanks for reply.

I just came to know from you about the ICAP solution.

As far as I get it, it will adapt the content so the SQUID can cache it
itself, by making the dynamic stuff in appropriate way so that squid can
consume it as non dynamic? Is that right?


The best argument to use an ICAP-based solution is that it is
powerful. It will be able to undo all tricks that content providers
use to make content uncacheable.


I was thinking about a solution that will stay aside the squid, and will
deal with dynamic content, that's why I was thinking about connecting
this
with cache peer to a squid. The squid will deal with the static
content and
whatever is good for.

I'm also expecting someone from the squid team to also suggest some
way of
proper doing it



--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: [squid-users] anyone knows some info about youtube "range" parameter?

2012-04-26 Thread Eliezer Croitoru

On 25/04/2012 20:48, Hasanen AL-Bana wrote:

wouldn't be better if we save the video chunks ? youtube is streaming
files with 1.7MB flv chunks, youtube flash player knows how to merge
them and play themso the range start and end will alaways be the
same for the same video as long as user doesn't fast forward it or do
something nasty...even in that case , squid will just cache that
chunk...that is possible by rewriting the STORE_URL and including the
range start&  end

On Wed, Apr 25, 2012 at 8:39 PM, Ghassan Gharabli
  wrote:



I have written a small Ruby store_url_rewrite helper that works with the range
argument in the URL.

(it is at the bottom of this mail)

It's written in Ruby, and I took some of Andre's work at
http://youtube-cache.googlecode.com

It's not such a fancy script, and it is meant only for this specific YouTube
problem.

I know that YouTube didn't change this range behavior for the whole
globe, because as of now I'm working from a remote location that still has
no "range" at all in the URL.

So in the same country you can get two different URL patterns.

This script is not CPU friendly (it always does roughly the same number of regex
lookups), but it's not what will bring your server down!


This is only a prototype, and if anyone wants to add some more domains
and patterns, I will be more than glad to make this script better than
it is now.

This is one hell of a nasty regex script; I could have used the uri
and cgi libraries to make it more user friendly, but I chose
to just build the script skeleton and move on from there using the basic
methods and classes of Ruby.

The idea of this script is to extract each of the arguments, such as id,
itag and range, one by one, and not to use a single regex to extract them all,
because there are a couple of different URL structures being used by YouTube.

If someone can help me reorganize this script to make it more
flexible for other sites, with numbered cases per
site/domain/URL structure, I will be happy to get any help I can.


Planned to be added to this script for now:
SourceForge: catch all download mirrors as one object
IMDb HQ (480p and up) videos
Vimeo videos

And if more than one person wants them:
blip.tv
some Facebook videos
some other image-storage sites.

If you want me to add anything to my "try to cache" list, I will be happy
to hear from you by e-mail.


Regards,
Eliezer


##code start##
#!/usr/bin/ruby
require "syslog"

class SquidRequest
attr_accessor :url, :user
attr_reader :client_ip, :method

def method=(s)
@method = s.downcase
end

def client_ip=(s)
@client_ip = s.split('/').first
end
end

def read_requests
	# input format per line: URL  client_ip "/" fqdn  user  method [ kvpairs]

STDIN.each_line do |ln|
r = SquidRequest.new
r.url, r.client_ip, r.user, r.method, *dummy = 
ln.rstrip.split(' ')

(STDOUT << "#{yield r}\n").flush
end
end

def log(msg)
Syslog.log(Syslog::LOG_ERR, "%s", msg)
end

def main
Syslog.open('nginx.rb', Syslog::LOG_PID)
log("Started")

	read_requests do |r|
		# each argument is extracted with its own regex because YouTube uses
		# several URL layouts (the order of id, itag and range varies)
		idrx = /.*(id\=)([A-Za-z0-9]*).*/
		itagrx = /.*(itag\=)([0-9]*).*/
		rangerx = /.*(range\=)([0-9\-]*).*/

		if r.url =~ idrx && r.url =~ itagrx && r.url =~ rangerx
			# build one canonical store key out of the id, itag and range values
			newurl = "http://video-srv.youtube.com.SQUIDINTERNAL/id_" +
				r.url.match(idrx)[2] + "_itag_" + r.url.match(itagrx)[2] + "_range_" +
				r.url.match(rangerx)[2]
			log("YouTube Video [#{newurl}].")
			newurl
		else
			# not a ranged video URL: leave the store URL unchanged
			r.url
		end
	end
end

main
##code end#
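For completeness, hooking a helper like this into squid 2.7 goes through the storeurl_* directives; a minimal example (the path, child count and ACL name are only placeholders):

storeurl_rewrite_program /usr/local/bin/yt_range_store.rb
storeurl_rewrite_children 10
acl yt_store url_regex -i (youtube|googlevideo)\.com/videoplayback
storeurl_access allow yt_store
storeurl_access deny all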


--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: [squid-users] Cache that will not grow in size

2012-04-26 Thread Eliezer Croitoru

On 27/04/2012 08:37, Mark Engels wrote:

Hello everyone,

Ive been working on a squid cache appliance for a few weeks now (on and off) 
and things appear to be working. However I seem to have an issue where the 
cache size simply refuses to grow in size. My first attempt had the cache stall 
at 2.03gb and with this latest build im stalling at 803mb.

I haven’t a clue on where to go or what to look at for determining what could 
be wrong and im hopeing you could be of assistance ☺ also any tips for better 
performance or improved caching would be greatly appreciated. (Yes I have 
googled and I think ive applied what I could but it’s a little over my head a 
few weeks in and no deep linux experience)


Some facts:

Ive been determining the cache size with the following command, du –hs 
/var/spool/squid
Squid is running on a centOS6.2 machine
Squid is version 3.1.10
CentOS is running in a hyperV virtual machine with integration services 
installed
VM has 4gb ram and a 60gb HDD allocated
Squid is acting as a cache/error page handler box only. There is the main proxy 
sitting one step downstream with squid setup in a
“T” network (the main cache can skip squid and go direct to the net if squid 
falls over on me, hyperV issue)


Config file:

Acl downstream src 192.168.1.2/32
http_access allow downstream

cache_mgr protectedem...@moc.sa.edu.au

<  all the standard acl rules here>

http_access allow localnet
http_access allow localhost
http_access deny all

# Squid normally listens to port 3128
http_port 8080

# We recommend you to use at least the following line.
hierarchy_stoplist cgi-bin ?

# Uncomment and adjust the following to add a disk cache directory.
cache_dir ufs /var/spool/squid 3 16 256

# Leave coredumps in the first cache dir
coredump_dir /var/spool/squid

# Change maxinum object size
maxinum_object_size 4 GB

# Define max cache_mem
cache_mem 512 MB

#Lousy attempt at youtube caching
quick_abort_min -1 KB
acl youtube dstdomain .youtube.com
cache allow youtube

From the refresh patterns below, it seems you might not quite understand
the meaning of the pattern syntax and options.

The first thing I suggest is to look at:
http://www.squid-cache.org/Doc/config/refresh_pattern/
A more "readable" place is:
http://etutorials.org/Server+Administration/Squid.+The+definitive+guide/Chapter+7.+Disk+Cache+Basics/7.7+refresh_pattern/

Try to read it once or twice so you will know how to benefit from it.
also try to read some info about caching here:
http://www.mnot.net/cache_docs/
And a tool that will help you analyze pages for cacheability is REDbot:
http://redbot.org/
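You can do the same kind of check from the command line by looking at the caching headers directly; for example (any URL will do, the header list is just the usual suspects):

curl -sI http://djmaza.com/ | grep -i -E 'cache-control|expires|last-modified|etag'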

There is a maximum time that an object can stay in the cache, since this
is a cache server and not a hosting service:
the maximum is 365 days (a total of 525600 minutes), if I remember right, so
it's useless to use "99" as the maximum object-freshness time.
If you want to cache YouTube videos, you are missing a bit of background
on it, so just start with basic caching tweaks.
You should also check your users' browsing habits in order to gain
maximum cache efficiency.
Until you have solid caching goals, you won't need to aim so
high.
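For example, instead of an arbitrarily large number, cap the maximum at that one-year limit; taking one of your own patterns (the other values are kept as you had them):

refresh_pattern -i \.(gif|png|jpg|jpeg|ico|bmp)$ 40320 90% 525600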

One very good tool for analyzing your users' habits is "sarg".

If you need some help, I can assist you with it.
Just as an example, this site: http://djmaza.com/ is heaven for a caching
proxy server, but until you analyze it you won't know what to do with it.
You can see in this link:
http://redbot.org/?descend=True&uri=http://djmaza.com/

how the page is built.

Regards,
Eliezer


# Add any of your own refresh_pattern entries above these.
refresh_pattern -i \.flv$ 10080 90% 99 ignore-no-cache override-expire 
ignore-private
refresh_pattern -i \.(gif|png|jpg|jpeg|ico|bmp)$ 40320 90% 40330
refresh_pattern -i 
\.(iso|avi|wav|mp3|mp4|mpeg|swf|x-flv|mpg|wma|ogg|wmv|asx|asf|dmg|zip|exe|rar)$ 
40320 90% 40330
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern . 0 20% 4320


  Mark Engels l ICT Manager - Network l Mark Oliphant College (B-12) l 
i...@moc.sa.edu.au<mailto:info@>  www.moc.sa.edu.au<http://www.moc.sa.edu.au>  
l Ph: (08) 8209 1600 l Fax: (08) 8209 1650



--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: [squid-users] anyone knows some info about youtube "range" parameter?

2012-04-27 Thread Eliezer Croitoru

On 27/04/2012 09:52, Hasanen AL-Bana wrote:

On Fri, Apr 27, 2012 at 7:43 AM, Eliezer Croitoru  wrote:

On 25/04/2012 20:48, Hasanen AL-Bana wrote:


wouldn't be better if we save the video chunks ? youtube is streaming
files with 1.7MB flv chunks, youtube flash player knows how to merge
them and play themso the range start and end will alaways be the
same for the same video as long as user doesn't fast forward it or do
something nasty...even in that case , squid will just cache that
chunk...that is possible by rewriting the STORE_URL and including the
range start&end


On Wed, Apr 25, 2012 at 8:39 PM, Ghassan Gharabli
wrote:




i have written a small ruby store_url_rewrite that works with range argument
in the url.
(on the bottom of this mail)

it's written in ruby and i took some of andre work at
http://youtube-cache.googlecode.com

it's not such a fancy script and ment only for this specific youtube
problem.

i know that youtube didnt changed the this range behavior for the whole
globe cause as for now i'm working from a remote location that still has no
"range" at all in the url.
so in the same country you can get two different url patterns.

this script is not cpu friendly (uses more the same amount of regex lookups
always) but it's not what will bring your server down!!!


That is why I am going to write it in perl, in my server I might need
to run more than 40 instances on the script and perl is like the
fastest thing I have ever tested

I have tried a couple of languages to do almost the same thing.
What I want is for the code to be readable and efficient.
Until now I have used Ruby, Perl, Python and Java for this specific task;
Ruby was fast and usable, but Java was far superior to all the
others, combining most of what I needed.
Regex in Java was so different from Perl and the others that I used the
basic string classes of Java to implement these features.

I hope to see your Perl code.

By the way, 40 instances are not really needed for most of the servers I
have seen until now.

20 should be more than you need.

What is the "size" of this server? Requests per second? Bandwidth? CPU? RAM?
Cache space?


Regards,
Eliezer





this is only a prototype and if anyone wants to add some more domains and
patterns i will be more then glad to make this script better then it's now.

this is one hell of a regex nasty script and i could have used the uri and
cgi libs in order to make the script more user friendly but i choose to just
build the script skeleton and move on from there using the basic method and
classes of ruby.

the idea of this script is to extract each of the arguments such as id itag
and ragne one by one and to not use one regex to extract them all because
there are couple of url structures being used by youtube.

if someone can help me to reorganize this script to allow it to be more
flexible for other sites with numbered cases per site\domain\url_structure i
will be happy to get any help i can.

planned for now to be added into this scripts are:
source forge catch all download mirrors into one object
imdb HQ (480P and up) videos
vimeo videos

if more then just one man will want:
bliptv
some of facebook videos
some other images storage sites.

if you want me to add anything to my "try to cache" list i will be help to
hear from you on my e-mail.

Regards,
Eliezer


##code start##
#!/usr/bin/ruby
# store_url_rewrite helper for squid 2.7: normalizes youtube video chunk
# urls (id + itag + range) into a single internal key, so all requests for
# the same chunk map to the same cache object.
require "syslog"

class SquidRequest
	attr_accessor :url, :user
	attr_reader :client_ip, :method

	def method=(s)
		@method = s.downcase
	end

	def client_ip=(s)
		@client_ip = s.split('/').first
	end
end

def read_requests
	# input line format: URL client_ip/fqdn user method [kvpairs]
	STDIN.each_line do |ln|
		r = SquidRequest.new
		r.url, r.client_ip, r.user, r.method, *dummy = ln.rstrip.split(' ')
		(STDOUT << "#{yield r}\n").flush
	end
end

def log(msg)
	Syslog.log(Syslog::LOG_ERR, "%s", msg)
end

def main
	Syslog.open('nginx.rb', Syslog::LOG_PID)
	log("Started")

	read_requests do |r|
		# youtube ids may contain "-" and "_"; keeping them in the key avoids
		# two different videos colliding on the same stored object
		idrx = /.*(id\=)([A-Za-z0-9_\-]*).*/
		itagrx = /.*(itag\=)([0-9]*).*/
		rangerx = /.*(range\=)([0-9\-]*).*/

		if r.url =~ idrx && r.url =~ itagrx && r.url =~ rangerx
			newurl = "http://video-srv.youtube.com.SQUIDINTERNAL/id_" +
				r.url.match(idrx)[2] + "_itag_" + r.url.match(itagrx)[2] +
				"_range_" + r.url.match(rangerx)[2]
			log("YouTube Video [#{newurl}].")
			newurl
		else
			# not a youtube chunk url: return it unchanged
			r.url
		end
	end
end

main
##code end##



--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il



--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: [squid-users] anyone knows some info about youtube "range" parameter?

2012-04-28 Thread Eliezer Croitoru

On 27/04/2012 11:56, Hasanen AL-Bana wrote:

I get around 40,000 req/min, the server is Dell R510 with Xeon cpu and
48GB of RAM, all disks are SAS (1.2TB)
Reducing the number of url_rewriters cause squid to stop working and
cache.log says more url_rewriters are needed...ah I forgot to say that
I have many URL_REWRITERS beside my store_url rewriters.

i must say i'm impressed!
it's the second server of this size and quality of system i'm hearing about.

if you do have 40,000 req/min it makes more sense.
for this kind of system a "compiled" solution is much better in terms of
performance and memory footprint.

JAVA is one step above the interpreted scripts\programs.
my opinion is that in your case you should use something other than perl
as a url_rewriter / store_url_rewrite, if the system's options are fairly
static.


Regards,
Eliezer


On Fri, Apr 27, 2012 at 10:04 AM, Eliezer Croitoru  wrote:

On 27/04/2012 09:52, Hasanen AL-Bana wrote:


On Fri, Apr 27, 2012 at 7:43 AM, Eliezer Croitoru
  wrote:


On 25/04/2012 20:48, Hasanen AL-Bana wrote:



wouldn't be better if we save the video chunks ? youtube is streaming
files with 1.7MB flv chunks, youtube flash player knows how to merge
them and play themso the range start and end will alaways be the
same for the same video as long as user doesn't fast forward it or do
something nasty...even in that case , squid will just cache that
chunk...that is possible by rewriting the STORE_URL and including the
range start&  end


On Wed, Apr 25, 2012 at 8:39 PM, Ghassan Gharabli
  wrote:





i have written a small ruby store_url_rewrite that works with range
argument
in the url.
(on the bottom of this mail)

it's written in ruby and i took some of andre work at
http://youtube-cache.googlecode.com

it's not such a fancy script and ment only for this specific youtube
problem.

i know that youtube didnt changed the this range behavior for the whole
globe cause as for now i'm working from a remote location that still has
no
"range" at all in the url.
so in the same country you can get two different url patterns.

this script is not cpu friendly (uses more the same amount of regex
lookups
always) but it's not what will bring your server down!!!



That is why I am going to write it in perl, in my server I might need
to run more than 40 instances on the script and perl is like the
fastest thing I have ever tested


i have tried couple of languages to do almost the same thing.
what i do want is that the code will be readable and efficient.
i have used until now ruby perl python and JAVA for this specific task ruby
was fast and usable but JAVA was much more superior to all the others
combining most of what i needed.
regex on JAVA was so different then perl and the others so i used the basic
string classes of JAVA to implement these features.

i hope to see your perl code.

by the way 40 instances are not really needed for most of the servers i have
seen until now.
20 should be more then you need.

what is the "size" of this server? req per sec? bandwidth?cpu? ram? cache
space?

Regards,
Eliezer






this is only a prototype and if anyone wants to add some more domains and
patterns i will be more then glad to make this script better then it's
now.

this is one hell of a regex nasty script and i could have used the uri
and
cgi libs in order to make the script more user friendly but i choose to
just
build the script skeleton and move on from there using the basic method
and
classes of ruby.

the idea of this script is to extract each of the arguments such as id
itag
and ragne one by one and to not use one regex to extract them all because
there are couple of url structures being used by youtube.

if someone can help me to reorganize this script to allow it to be more
flexible for other sites with numbered cases per
site\domain\url_structure i
will be happy to get any help i can.

planned for now to be added into this scripts are:
source forge catch all download mirrors into one object
imdb HQ (480P and up) videos
vimeo videos

if more then just one man will want:
bliptv
some of facebook videos
some other images storage sites.

if you want me to add anything to my "try to cache" list i will be help
to
hear from you on my e-mail.

Regards,
Eliezer


##code start##
#!/usr/bin/ruby
require "syslog"

class SquidRequest
attr_accessor :url, :user
attr_reader :client_ip, :method

def method=(s)
@method = s.downcase
end

def client_ip=(s)
@client_ip = s.split('/').first
end
end

def read_requests
# URLclient_ip "/" fqdnusermethod [
kvpairs]
STDIN.each_line do |ln|
r = SquidRequest.new
r.url, r.client_ip, r.user, r.method, *dummy =
ln.rstrip.split(' ')
(STDOUT<<"#{yield r

Re: [squid-users] slow internet browsing.

2012-04-29 Thread Eliezer Croitoru

On 29/04/2012 08:49, Muhammad Yousuf Khan wrote:

IT seems that things are doing good with out huge domain list. so now
my next goal is squidguard.

but the problem with squid guard  was that i tried it configuring and
i saw many online manuals but it didnt activated so i just started
using domain list. however if thing doesnt work ill update the status.

Thanks you all for your kind help.

Thanks

On Fri, Apr 27, 2012 at 1:09 PM, Muhammad Yousuf Khan  wrote:


i have used squidguard from source and it seems to work very well.
it took me a while to understand and configure but it works perfectly.
have a look at:
http://www.visolve.com/squid/whitepapers/redirector.php#Configuring_Squid_for_squidGuard



--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: [squid-users] anyone knows some info about youtube "range" parameter?

2012-04-29 Thread Eliezer Croitoru

On 24/04/2012 21:02, Eliezer Croitoru wrote:

as for some people asking me recently about youtube cache i have checked
again and found that youtube changed their video uris and added an
argument called "range" that is managed by the youtube player.
the original url\uri dosnt include range but the youtube player is using
this argument to save bandwidth.

i can implement the cahing with ranges on nginx but i dont know yet the
way that range works.
it can be based on user bandwidth or "fixed" size of chunkes.

if someone up to the mission of analyzing it a bit more to understand it
so the "range" cache will be implemented i will be happy to get some
help with it.

Thanks,
Eliezer


as for now, "minimum_object_size 512 bytes" won't do the trick for the 302
redirection on squid 2.7, because the 302 response is about 963 bytes in size.

so i have used:
minimum_object_size 1024 bytes
just to make sure it will work.
also, this is a dedicated youtube video server, so it can live with this
limit.
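
just as a sketch of the relevant part of such a dedicated video box config
(the sizes are examples, not a recommendation):

# keep the ~963 byte 302 redirect bodies out of the cache
minimum_object_size 1024 bytes
# but let the large video chunks themselves be stored
maximum_object_size 100 MB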


Regards,
Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: [squid-users] anyone knows some info about youtube "range" parameter?

2012-04-29 Thread Eliezer Croitoru

On 30/04/2012 02:18, Ghassan Gharabli wrote:

Hello Eliezer,

Are you trying to save all video chunks into same parts or capture /
download the whole video object through CURL or whatever! but i dont
think it should work since it will occure an error with the new
Youtube Player.

What I have reached lately is saving same youtube video chunks  that
has  youtube 360p itag=34 ,480p itag=35 without saving its itag since
i want to save more bandwidth (thats why i only wrote scripts to you
as an example) which means if someone wants to watch 480p then he
would get the cached 360p contents thats why i didnt add the itag but
if he thought of watching 720p and above then another script would
catch it matching ITAG 37 , 22 ... I know that is not the best
solution but at least its working pretty well with no erros at all as
long as the client can always fast forward .

Im using Squid 2.7 Stable9 compiled on windows 64-bit with PERL x64.

Regarding the 302 Redirection  .. I have made sure to update the
source file client_side.c to fix the loop 302 Redirection but really I
dont have to worry now about anything so what is your target regarding
Youtube with argument Range and whats the problem till now ?

I have RAID with 5 HDD and the average HTTP Requests per minute :
2732.6 and because I want to save more bandwidth I try to analyze HTTP
Requests so i can always update my perl script to match most wanted
websites targetting Videos , Mp3 etc.

For a second I thought of maybe someone would help to compile an
intelligent external helper script that would capture  the whole
byte-range and I know it is really hard to do that since we are
dealing with byte-range.

I only have one question that is always teasing me .. what are the
comnparison between SQUID and BLUE COAT so is it because it is a
hardware perfromance or just it has more tricks to cache everything
and reach a maximum ratio ?



Ghassan
i have been messing with store_url_rewrite and url_rewrite for quite some
time, just for knowledge.


i have been researching every concept that exists in squid until now.
a while back (a year or more) i wrote a store_url_rewrite helper using java
and posted the code somewhere.
the reason i used java was that it's the fastest and simplest of all the
other languages i know (ruby, perl, python).

i was saving bandwidth using nginx because it was simple to set up.
i don't really like the idea of faking the quality for my users, and it can
also create a very problematic state where the user gets partial HQ content,
which will stop him from watching the video at all.


i don't really have any problems with the ranges.
i just noticed that there are providers in my country that are not using
the "range" parameter but the "begin" parameter...
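
if it helps, in my ruby helper the range regex can be loosened to catch both
forms; a small untested sketch:

# match either "range=" or "begin=" style chunk arguments
rangerx = /.*(range\=|begin\=)([0-9\-]*).*/

the captured value is still group [2], but the key should probably also
record which parameter matched so "range" and "begin" chunks don't collide.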


i will be very happy to get the client_side.c patch to fix the 302 loop.

the problem with an external helper is that it's not really needed.
if you need something like that you can use ICAP, but there is still a
lot of work to be done there, so for now it seems that store_url_rewrite is
the best option.


BLUECOAT has the option to relate objects using ETag and other object
parameters as well, which makes it a very robust caching system.


nginx does the job just fine for youtube videos, but it seems like some
headers are missing and should be added.


by the way,
What video\mp3 sites are you caching using your scripts?


Eliezer





On Mon, Apr 30, 2012 at 1:29 AM, Eliezer Croitoru  wrote:

On 24/04/2012 21:02, Eliezer Croitoru wrote:


as for some people asking me recently about youtube cache i have checked
again and found that youtube changed their video uris and added an
argument called "range" that is managed by the youtube player.
the original url\uri dosnt include range but the youtube player is using
this argument to save bandwidth.

i can implement the cahing with ranges on nginx but i dont know yet the
way that range works.
it can be based on user bandwidth or "fixed" size of chunkes.

if someone up to the mission of analyzing it a bit more to understand it
so the "range" cache will be implemented i will be happy to get some
help with it.

Thanks,
Eliezer



as for now the "minimum_object_size 512 bytes" wont do the trick for 302
redirection on squid2.7 because the 302 response is 963 big size.
so i have used:
minimum_object_size 1024 bytes
just to make sure it will work.
and also this is a youtube videos dedicated server so it's on with this
limit.

Regards,

Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il



--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: [squid-users] Transparent proxy and IP address rotation

2012-04-30 Thread Eliezer Croitoru

On 30/04/2012 23:44, Kirk Hoganson wrote:

I would like to configure our squid proxy (Version 3.0.STABLE19 on Linux
Ubuntu 10.04) to use a pool of addresses for outgoing connections. I
setup squid as a transparent proxy using "http_port 3128 transparent" in
the squid.conf, and then I setup an iptables to provide source nat
address rotation for the multiple interfaces the proxy has available.

The connections failed when attempting to source nat on the proxy. Would
this work if I were able to use tproxy instead of transparent on the
proxy server? Or is there another solution within squid that would allow
it to rotate through all available interfaces?

Thanks,
Kirk
if you just need a couple of outgoing addresses, and not the clients' IP
addresses, plain intercept is fine (not tproxy).

this kind of LB should be done using the OS routing system.
a pool of addresses can be tricky because it can mean anything from 2 to 200
IP addresses.
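
as a sketch of the in-squid alternative (addresses and subnets here are only
placeholders), tcp_outgoing_address can be tied to acls, so different client
groups leave on different addresses; it's a static split rather than a real
rotation, but it needs no NAT:

acl group1 src 10.0.1.0/24
acl group2 src 10.0.2.0/24
tcp_outgoing_address 192.0.2.10 group1
tcp_outgoing_address 192.0.2.11 group2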


i have written a good sample for a "multihoming" option that is similar to
this; it just needs to be tweaked a bit.

have a look at:
http://www.squid-cache.org/mail-archive/squid-dev/201204/0019.html

i do remember that something could also be done using iptables,
but i don't remember how it should be done.


what did you try to do with iptables?

i also found this nice iptables method sample:
http://www.pmoghadam.com/homepage/HTML/Round-robin-load-balancing-NAT.html
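
the simplest iptables variant i know of is letting SNAT pick from an address
range (addresses are placeholders; the kernel spreads connections over the
range, it is not a strict round robin):

iptables -t nat -A POSTROUTING -o eth0 -p tcp --dport 80 \
  -j SNAT --to-source 192.0.2.10-192.0.2.13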

Regards,
Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: [squid-users] A Web site view problem , diffrences through 2 squid

2012-04-30 Thread Eliezer Croitoru

On 30/04/2012 10:24, a bv wrote:

Hi,

There are 2 squids running behind 2 different firewalls and 2 diffrent
internet connections (same isp, different network ranges). Users
report that they get problems viewing the site.

When i look at that site through 2 proxies and with different browsers
i get different results (and its changing) , especially i get some
errors about the web site codes from ie but
  when i switch the proxy through ie i see diffent results. Sometimes i
get the error 400 and after i clear the cache of the browser and
request the site it turns back.  1 firewall has IPS running 1 not but
couldnt find anything at the ips logs either. The web sites owners
didnt answer to my questions yet. What do you recommend to anaylyze
and fix the issue? Other sites are viewed well through bot of them.


Regards

did you try to disable caching for this site?
if different browsers show different data, it can be because the site serves
browser-specific templates.
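
to rule the cache out, something like this in squid.conf should bypass it for
that one site (the domain is a placeholder):

acl brokensite dstdomain .example.com
cache deny brokensite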


Regards,
Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: [squid-users] Duplicate If-None-Match headers

2012-04-30 Thread Eliezer Croitoru

On 30/04/2012 14:32, Andy Taylor wrote:

Hi,

I'm having a number of problems with Squid at the moment with duplicate
Etags in the headers. I'm using Squid as an accelerator to forward
traffic to Apache, which serves up a Drupal installation.

After roughly 3 days, a number of pages on the site start to fail with
400 Bad Request errors; it starts with just a few and then slowly
spreads to more pages. I did a tcpdump of the requests coming from Squid
to Apache, and Apache is spitting out a 400 error because of the header
size. Hundreds of etags are appearing in the If-None-Match headers
field, which hits Apache's header size limit, causing the error. The
only way I've found to 'fix' this so far is to either:

1. Flush Squid cache entirely
2. Purge the affected pages

But then after a few days the problem comes back again. I've been using
Squid as an accelerator to Drupal installations for years and this
hasn't happened before. I'm using the following version of Squid:

Squid Cache: Version 2.6.STABLE21

which is the latest version available in the CentOS 5 repositories. The
only difference between this installation of Squid/Apache/Drupal and
others which have worked fine in the past is the version of Drupal -
Drupal 7. Supposedly Drupal 7 has significantly altered cache handling,
but I can't work out why this would cause this problem with Squid.

The only thing I can think of at the moment is something to do with
Squid's cache rotation (specifically the LRU functionality), so that
when Squid rotates its cache, something ends up corrupted or malformed.

Any help or suggestions would be much appreciated!

Thanks,

Andy Taylor
there have been many changes since squid 2.6, and for me squid
3119 and 3216 (the 3.1.x and 3.2.x branches) work fine with drupal, so as a
starter i suggest you try to compile a newer, supported version of squid.


Eliezer
--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: [squid-users] squid3.1.15 + publish.ovi.com

2012-05-01 Thread Eliezer Croitoru

On 01/05/2012 08:54, Gerson Barreiros wrote:

I'm using squid 3.1.15 (amos ppa) + ubuntu 10.04

And we can't open the 'register' link (
https://publish.ovi.com/register/country_and_account_type ) located
at https://publish.ovi.com/login

When we click 'register' the login page get refreshed.

Any ideas?

on 3119 it seems to work fine.
what about some logs?
or your squid.conf?

Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: [squid-users] anyone knows some info about youtube "range" parameter?

2012-05-01 Thread Eliezer Croitoru

well, a nice list.
the filesharing sites are not something i really want to cache,
because the effort of understanding most of those sites is bigger than the
benefit.


megavideo got shut down.

what i and others are very interested in is microsoft windows updates.
they are using 206 range requests, which as far as i understand makes them
impossible to cache with squid.


Eliezer

On 01/05/2012 13:41, Ghassan Gharabli wrote:

Cached Contents are so many 

-Youtube

-GoogleSyndication

-ADS and it will always give me a strong headache to follow every ADby
caching its content not to block it because blocking any ADS URL would
present a javascript error at the browser's side!.

-VideoZer

- android Google ( MARKET .. etc )

- Ziddu

- xHamster

- SoundCloud

- Some websites that have a cdn folder which generates a key or long
key in the middle of the URL which everytime you refresh the website
then it will generate a dynamic folder incase if it has an extension
jpg or any multimedia extension and sometimes you see URL which has an
end like that abc.jpg;blablabla or abc.jpg?blablabla so I try to
remove that CDN Folder and at the same time i also remove everything
which comes after the extension jpg if has ";" or "?" .

- wlxrs.com

- reverbnation , megavideo , xaxtube

- NOGOMI/ AHLASOOT .. for example to save bandwidth !!! . usually
before you download the MP3 file they allow you to listen online
through their web player which has the same file size and same
file_name but if you download the file then you get a different domain
name which made me think just to rewrite the URL which matches and
save the same URL into cache if a client wanted to listen or download
and also alot of websites has the same idea.

- vkontakte , depositfiles , eporner , 4shared , letitbit , sendspace
, filesonic , uploaded , uploading , turbobit, wupload , redtubefiles
, filehippo , oron , rapishare , tube8 , pornhub , xvideos , telstra ,
scribdassets , przeklej , hardsextube , fucktube , imageshack , beeg ,
yahoo videos , youjizz , gcdn

for example , look at this URL :
#http://b2htks6oia9cm5m4vthd6hhulo.gcdn.biz/d/r/*/FileName WIthout
Extension
(*) DYNAMIC CONTENT and just look into the sub domain how its like!

  and so many websites but unfotunately  EASY-SHARE is using POST
response and I cant cache it.

Lately I was monitoring NOKIA Phones and I added OVI STORE.

Do you have any website that its not cacheable or using CDN or
something because Im really interested to look at :) .



Ghassan

On Mon, Apr 30, 2012 at 2:53 AM, Eliezer Croitoru  wrote:

On 30/04/2012 02:18, Ghassan Gharabli wrote:


Hello Eliezer,

Are you trying to save all video chunks into same parts or capture /
download the whole video object through CURL or whatever! but i dont
think it should work since it will occure an error with the new
Youtube Player.

What I have reached lately is saving same youtube video chunks  that
has  youtube 360p itag=34 ,480p itag=35 without saving its itag since
i want to save more bandwidth (thats why i only wrote scripts to you
as an example) which means if someone wants to watch 480p then he
would get the cached 360p contents thats why i didnt add the itag but
if he thought of watching 720p and above then another script would
catch it matching ITAG 37 , 22 ... I know that is not the best
solution but at least its working pretty well with no erros at all as
long as the client can always fast forward .

Im using Squid 2.7 Stable9 compiled on windows 64-bit with PERL x64.

Regarding the 302 Redirection  .. I have made sure to update the
source file client_side.c to fix the loop 302 Redirection but really I
dont have to worry now about anything so what is your target regarding
Youtube with argument Range and whats the problem till now ?

I have RAID with 5 HDD and the average HTTP Requests per minute :
2732.6 and because I want to save more bandwidth I try to analyze HTTP
Requests so i can always update my perl script to match most wanted
websites targetting Videos , Mp3 etc.

For a second I thought of maybe someone would help to compile an
intelligent external helper script that would capture  the whole
byte-range and I know it is really hard to do that since we are
dealing with byte-range.

I only have one question that is always teasing me .. what are the
comnparison between SQUID and BLUE COAT so is it because it is a
hardware perfromance or just it has more tricks to cache everything
and reach a maximum ratio ?



Ghassan


i was messing with store_url_rewrite and url_rewrite quite some time just
for knowledge.

i was researching every concept exists with squid until now.
a while back (year or more)  i wrote store_url_rewrite using java and posted
the code somewhere.
the reason i was using java was because it's the fastest and simples from
all other languages i know (ruby perl python).
i was saving bandwidth using nginx because it was simple to setup.
i dont really like the idea 

Re: [squid-users] Re: anyone knows some info about youtube "range" parameter?

2012-05-01 Thread Eliezer Croitoru

On 01/05/2012 17:09, x-man wrote:

I like the option to use nginx as cache_peer, who is doing the youtube
handling and I'm keen on using it.

The only think I don't know in this case is how the nginx will mark the
traffic as CACHE HIT or CACHE MISS, because I want to have the CACHE HIT
traffic marked with DSCP so I can use the Zero penalty hit in the NAS and
give high speed to users for the cached videos?

Anyone has idea about that?

--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/anyone-knows-some-info-about-youtube-range-parameter-tp4584388p4600792.html
Sent from the Squid - Users mailing list archive at Nabble.com.
i do remember that nginx logs the file in its main access log when a hit
happens, but i'm not sure about it.
i have found that store_url_rewrite is much more effective than the nginx
cache with ranges, but i haven't had the time to analyze the reason yet.

by the way, you can use a squid 2.7 instance as a cache_peer instead of nginx.
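
roughly, such a parent setup looks like this (the port and domain list are
placeholders for your own setup):

cache_peer 127.0.0.1 parent 8081 0 proxy-only no-query no-digest
acl ytvideo dstdomain .youtube.com
cache_peer_access 127.0.0.1 allow ytvideo
cache_peer_access 127.0.0.1 deny all
never_direct allow ytvideo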

did you try my code (ruby)?
i will need to make some changes to make sure it also fits videos that don't
use the range parameter (there are a couple).


Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: [squid-users] Re: anyone knows some info about youtube "range" parameter?

2012-05-01 Thread Eliezer Croitoru

On 02/05/2012 01:47, Amos Jeffries wrote:

On 02.05.2012 10:12, Ghassan Gharabli wrote:

AS I understand that nginx removes Range header as well as several
other headers
before passing request to upstream to make sure full response will be
cached and other requests for the same uri will be served correctly.



stripping the range header -> wonderful benefit



Squid at least in versions 2.7 and up, refuses to cache 206 Partial
Content responses by default. It is possible to use a combination of
the range_offset_limit and quick_abort_min to force Squid to cache the
request, but Squid will still strip off the Range header and try to
request the entire object but It is still not a good solution.


stripping the range header -> not good

Which one is it?




Do you know anything about VARNISH which has experimental support for
range requests as you can enable it via the “http_range” runtime
parameter.

i do know that VARNISH supports range requests, but i have never tried
using it.

you can't even theoretically consider varnish for youtube caching.

well, microsoft updates are another story, so there it can be considered.
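
the usual workaround people point at is the range_offset_limit /
quick_abort_min combination mentioned in this thread; a rough sketch using
the 3.2 syntax (the domain list is trimmed and the size is just an example):

acl windowsupdate dstdomain .windowsupdate.com .update.microsoft.com
range_offset_limit none windowsupdate
quick_abort_min -1 KB
maximum_object_size 200 MB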


What I have already found is that Varnish will try to request the
entire file from the back-end server, and only if the entire file is
in the cache, will it successfully respond with a partial content
response and not contact the back-end server.

Does Squid has the ability to remove Range Header ?


Squid-3.2:

range_offset_limit none youtube

or for older Squid versions:

request_header_access Range deny youtube
in any case, on youtube the range is not sent using plain "headers" but as a
url argument.

so it won't help anyway.

Eliezer



Amos




--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: [squid-users] can't access cachemgr

2012-05-02 Thread Eliezer Croitoru

On 02/05/2012 17:37, Jeff MacDonald wrote:

Hi,

I've seen this similar issue for a lot of people around the web, and have tried 
my best to debug my access rules.

The error message I get is :

1335968823.335  8 127.0.0.1 TCP_DENIED/407 2201 GET 
cache_object://localhost/ j...@bignose.ca NONE/- text/html

I'm pretty sure I'm missing something miniscule, but need help finding it.

Here are my access rules in my squid.conf


try to move the manager access rules to the top and move the auth access rule
down below them:


http_access allow manager localhost
http_access allow manager example
http_access allow westhants

by the way, how are you trying to access the cache_object?
using squidclient?
i'm using the basic config files on opensuse 12.1 with squid 3.1.16 and
it works like that.

sample :
squidclient  cache_object://localhost/client_list

Eliezer



root@proxy:/etc/squid3# grep -e ^acl -e ^http_acc /etc/squid3/squid.conf
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl example src 192.168.11.16/32
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
acl westhants proxy_auth REQUIRED
http_access allow westhants
http_access allow manager localhost
http_access allow manager example
http_access deny all
acl westhants-network src 192.168.11.0/24
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
http_access allow westhants-network
http_access deny all

Thanks!

--
Jeff MacDonald
j...@terida.com
902 880 7375




--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: [squid-users] Tproxy 3.1 problem

2012-05-02 Thread Eliezer Croitoru

On 02/05/2012 19:08, Daniel Echizen wrote:
for how many clients are you having the problem?

what linux distribution are you using for the proxy? i remember having a
similar problem with tproxy (not tplink specific) on centos and fedora.


is there a specific reason for the " 2>&1" in the tproxy mark rule?
is port 5128 the port tproxy listens on?

are there any other routing tables on the machine?

have you tried connecting a machine directly to the squidbox switch and
using it as the default gateway?


Eliezer


Hi,
Im facing a weird problem with tproxy few weeks, the problem is, all
work fine except clients that is behind a tplink router and another
one that i dont remembe, but almost tplink wr541g routers, if i remove
iptables mangle redirect rule, client has traffic, enable not, dont
speak english very well, so i hope someone can understand and help
me.. this is a server with 1000+ clients, and im getting very
frustrated with this problem.

my config:

ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100

/sbin/iptables -v -t mangle -N DIVERT
/sbin/iptables -v -t mangle -A DIVERT -j MARK --set-mark 1
/sbin/iptables -v -t mangle -A DIVERT -j ACCEPT
/sbin/iptables -v -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
/sbin/iptables -v -t mangle -D PREROUTING -p tcp --dport 80 \
   -j TPROXY --tproxy-mark 0x1/0x1 --on-port 5128 2>&1

/usr/local/sbin/ebtables -t broute -A BROUTING -i eth5 -p ipv4
--ip-proto tcp --ip-dport 80 -j redirect --redirect-target DROP
/usr/local/sbin/ebtables -t broute -A BROUTING -i eth3 -p ipv4
--ip-proto tcp --ip-sport 80 -j redirect --redirect-target DROP

cd /proc/sys/net/bridge/
for i in *
do
echo 0>  $i
done
unset i

echo 0>  /proc/sys/net/ipv4/conf/lo/rp_filter
echo 0>  /proc/sys/net/ipv4/conf/all/rp_filter
echo 1>  /proc/sys/net/ipv4/ip_forward


i hav 2 interfaces in bridge, as i said.. all working fine.. except
with this tplink routers
also got log in iptable mangle, and then i can see traffic from the
client router, but traffic cant reach squid
, in access.log cant get anything
i use a mikrotik as pppoe-server, my network is:

router<->  squidbox<->  mikrotik<->  clients

With Squid inline on a bridge like this there should be *no* squid
related configuration outside the Squid box.

Is the tplink being used as "router" or "squidbox" in that diagram?

What kernel and iptables version is the squidbox? some of the older
2.6.3x kernels have bridge+tproxy problems.


Amos





I got some more info.. the conection from client tplink dont answer
syn, ack in tshark.. i can see syn ->  |  ack<- | syn, ack ->  , but
final ack from client dont..
i upgrated kernel to 3.3.4 and iptables to 1.4.13 .. all work fine
except the problem with tplink wireless router..



--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: [squid-users] Tproxy Syn/Ack Problem

2012-05-03 Thread Eliezer Croitoru
echo 1 >  /proc/sys/net/ipv4/ip_forward

hope someone help.. dont know how to track where syn/ack is dying or
getting drop



--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: [squid-users] Squid - Apache - Wordpress

2012-05-03 Thread Eliezer Croitoru

On 03/05/2012 12:33, Markus Lauterbach wrote:

Hi there,

I installed two squids in front of my wordpress installation (based on apache). 
Up to now, I can browse my wordpress but I still see many misses in the squid 
logs. Wordpress itself likes to use many cookies and I expect that this is a 
corrupting influence. By searching via google, I found examples, where people 
use a squid infornt of there wordpress installation. Somehow it seems to work. 
And I found several howtos to Setup Varnish in front of wordpress, where the 
cookie is stripped. 
http://ocaoimh.ie/2011/08/09/speed-up-wordpress-with-apache-and-varnish/

Does anyone know a hint for me, what I should take a look to. It is posible, to 
manage the caching behavoir with my squid.conf, or do all the information 
(weather to cache the element or not) has to be set in the header by the 
application (wordpress). I already tried to set the vary header for cookies, 
but this doesnt leed to a useable configuration. Now, I would unlikely change 
from squid to varnish.

Thanks in advance

Markus


most of wordpress's basic content is cacheable.
what you can't, or won't want to, cache are the dynamic content pages.
if you still want to cache those pages, change the urls of the dynamic pages
to ones without any "?" (question mark),
and use a domain-specific refresh_pattern without "ignore-reload", to
make sure that if a user does a "refresh" he still gets the dynamic page.
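
as a sketch (the blog domain is a placeholder and the times are only a
starting point), a domain-specific pattern for the static parts, with no
"ignore-reload" anywhere:

# static theme/upload files can live long in the cache
refresh_pattern -i ^http://blog\.example\.com/wp-content/ 1440 80% 10080
# everything else on the blog gets a short lifetime
refresh_pattern -i ^http://blog\.example\.com/ 30 20% 1440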


if you look closely, most of the page content is cached by the browser, so
after first loading the page you will mostly see misses in the log.


Regards,
Eliezer


--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: [squid-users] Re: external acl code examples

2012-05-03 Thread Eliezer Croitoru

On 02/05/2012 14:53, E.S. Rosenberg wrote:

2012/5/2 E.S. Rosenberg:

Hi,
I just thought I'd share the script I have for the squid side, maybe
someone finds it useful.
I wrote in PHP because I wanted to use prepared statements and am most
familiar with PDO.

Now my logs have usernames but squid does not allow me to make
proxy_auth acls since I have no auth mechanism configured (this
particular squid instance is a museum piece - 2.6, soon to be
replaced), if this issue also exists in squid 3.1 then how would I
control users based on a username returned through an external ACL?

Thanks,
Eli

I stuck the script on my server, that makes an easier read then from
inside a mail:
http://kotk.nl/verifyIP.phps

Hope that helps,
Eli


i saw your external_acl app and it seems very nice.
i wrote another one in ruby that looks almost like it (a mimic, for
practice).

i was wondering how you plan to implement the proxy_auth acls.
using AD? some other DB?
you mentioned something about the network infrastructure\CISCO, if i
remember right.


Regards,
Eliezer





2012/4/10 akadimi:

Hi Amos,

Could you give me more details on your new session helper as soon as it
becomes available.

Regards,

--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/external-acl-code-examples-tp4424505p4546016.html
Sent from the Squid - Users mailing list archive at Nabble.com.



--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: [squid-users] Re: FTP through squid

2012-05-03 Thread Eliezer Croitoru

as the error states:
 The requested URL could not be retrieved

it's not an acl problem; squid is allowing the request through.
the problem is something at the routing level.
can you ping the domain from the linux proxy server's shell?

ping ftp.free.fr


Regards,
Eliezer

On 03/05/2012 16:48, Hugo Deprez wrote:

Hello,

no one have an idea on this issue ?

Regards



On 2 May 2012 11:55, Hugo Deprez  wrote:

Dear community,

I am setting up a squid proxy but I am not able to allow access to ftp server.
I read many explanation on this but I'm a bit lost.

So here is my conf :


acl SSL_ports port 443 20 21
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl CONNECT method CONNECT

### IPOC ACL's
acl sub1 src 10.1.1.0/24
acl sub2 src 10.1.2.128/25
acl ftp proto FTP
http_access allow ftp
## Default access based on defined access lists
http_access allow manager localhost
http_access deny manager
# Deny requests to certain unsafe ports
http_access deny !Safe_ports
# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports
http_access allow sub1
http_access allow sub2
# Deny all
http_access deny all

## Squid's port
http_port 3128

## Default Squid

hierarchy_stoplist cgi-bin ?
refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern .   0   20% 4320

I can see the following log in the access.log :

  [02/May/2012:11:44:55 +0200] "GET ftp://ftp.free.fr/ HTTP/1.0" 504
3190 "-" "Mozilla/5.0 (Windows NT 5.1; rv:12.0) Gecko/20100101
Firefox/12.0" TCP_MISS:DIRECT

But I get a squid error message on firefox :
The requested URL could not be retrieved

What am I missing here ?

Regards,

Hugo



--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


[squid-users] i was wondering about refresh_pattern changes.

2012-05-03 Thread Eliezer Croitoru
do new refresh patterns have any effect on "in cache" (already cached)
objects, or do the refresh patterns only affect an object at the time it is
being cached?

and is there a way to change a cached object's min/max times, or to
manually extend its expiration time?


Thanks,
Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: [squid-users] anyone knows some info about youtube "range" parameter?

2012-05-05 Thread Eliezer Croitoru


it seems that if a server (apache, nginx) responds to a "range" request with
the full file, the player will accept it without any problem despite the
"range" request.

so if you already have a cache with a lot of files, you can still use it.

i also ran into this nice project:
http://code.google.com/p/yt-cache/
it's a fork of the project http://code.google.com/p/youtube-cache/

and it has many more options, such as graphs, a php management menu and some
other nice stuff.


the only problem is that, from my testing, it works well only on
debian\ubuntu.

i was testing it on gentoo and got some problems running it.

it's really a nice project and it implements some nice database features.

there was another page implementing a store_url_rewrite based
on the one in the squid wiki:

http://aacable.wordpress.com/2012/01/30/youtube-caching-problem-an-error-occured-please-try-again-later-solved/

i was thinking of adding a url_rewrite (not store_url_rewrite) helper that
will use a database to collect statistics, so that a very popular video can
be cached whole instead of in chunks, and also record a "last accessed" time
for each video so the statistics stay relevant.
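
as a first, untested sketch of that idea (it uses a simple PStore file
instead of a real database, the file path is a placeholder, and the id regex
is the same one from my helper):

##code start##
#!/usr/bin/ruby
# url_rewrite_program helper sketch: counts how often each youtube id is
# requested and remembers when it was last seen, without rewriting anything.
require "pstore"

STATS = PStore.new("/var/spool/squid/yt_stats.pstore")
IDRX  = /.*(id\=)([A-Za-z0-9_\-]*).*/

def record(id)
	STATS.transaction do
		entry = STATS[id] || { "hits" => 0 }
		entry["hits"] += 1
		entry["last_seen"] = Time.now
		STATS[id] = entry
	end
end

STDIN.each_line do |line|
	url = line.split(' ').first.to_s
	m = url.match(IDRX)
	record(m[2]) if m && !m[2].empty?
	# echo the url back unchanged; this helper only collects statistics
	STDOUT.puts url
	STDOUT.flush
end
##code end##

once the hit counts are there, the decision to cache a whole video instead of
chunks can be taken by the same helper or by a small cron job reading the
store.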


also, it seems that if you add some custom parameters to the uri\url, such as
the "redirect=1" found in some urls, it doesn't change anything for the yt
servers; they still serve the file that matches the basic parameters.


Will update
Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il

