[squid-users] Re: Help setting up a custom captive portal

2014-09-03 Thread babajaga
I suggest you first start with a simple solution, before turning to Andrew's.
Have a look at the contents of 
(squid2.7-sources)/helpers/external_acl/session

There you will find squid_session.c
with some documentation.

For other versions of squid sources, you will find something similar, too. 



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Help-setting-up-a-custom-captive-portal-tp4667499p4667503.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Help setting up a custom captive portal

2014-09-02 Thread babajaga
With squid, you can use the session_helper to create a simple captive
portal with splash page:
http://wiki.squid-cache.org/ConfigExamples/Portal/Splash

Not difficult to customize the external helper, as it is simple C.
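A minimal squid.conf sketch of that splash-page setup (helper path, TTLs and the splash URL are my assumptions; the wiki page has the authoritative version):

```
# the session helper tracks clients (here by source IP) in a session db
external_acl_type session ttl=300 negative_ttl=0 children=1 %SRC /usr/lib/squid/squid_session -t 7200
acl existing_users external session

# clients without a session are denied and sent to the splash page
http_access deny !existing_users
deny_info http://portal.example.net/splash.html existing_users
```

The -t 7200 would make a session expire two hours after the last activity.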


Another, more complicated solution, containing much more functionality
(like RADIUS accounting etc.):
http://coova.org/CoovaChilli
Quite a steep learning curve, though, as there is not much support around. However, in
case you can master it, it is very powerful and stable.



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Help-setting-up-a-custom-captive-portal-tp4667499p4667500.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: parent problem - TCP_MISS/403 from parent

2014-08-29 Thread babajaga
I suspect you might have a statement like never_direct /
always_direct in the squid.conf of the first squid, with some ACL which does
not match any more.
To get a clear picture, please publish both of the actual squid.conf files, anonymized.



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/parent-problem-TCP-MISS-403-from-parent-tp4667444p4667445.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: parent problem - TCP_MISS/403 from parent

2014-08-29 Thread babajaga
Yes.
You might also try in the inner squid.conf:
cache_peer 127.0.0.1 parent 8092 0 no-digest no-query no-netdb-exchange

assuming you only have one upstream proxy.
The outer squid.conf should have NO intercept/transparent in http_port.



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/parent-problem-TCP-MISS-403-from-parent-tp4667444p4667452.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: parent problem - TCP_MISS/403 from parent

2014-08-29 Thread babajaga
I remember a bug I found in my favourite squid2.7, also in a sandwiched
config with another proxy in between:
it was not possible to have both squids listen on 127.0.0.1:a and 127.0.0.1:b; I had to use
127.0.0.1:a and 127.0.0.2:b.

To be pragmatic: what is the purpose of having two squids directly coupled ?
Why not use just one ?



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/parent-problem-TCP-MISS-403-from-parent-tp4667444p4667458.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Squid not listening on any port

2014-08-27 Thread babajaga
As long as you do not use a parent proxy, there is no need for the pinger. And even in case
of a parent, the pinger is only nice to have.



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-not-listening-on-any-port-tp4667004p4667398.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] RE: Anybody using squid on openWRT ?

2014-08-26 Thread babajaga
@Leonardo: Thanks a lot. Your logs are much better than mine, although I am
closer to the site.
So I have to look somewhere else, like slow DNS resolution (I also use
Google's 8.8.8.8), or slow connection establishment, as I have now also seen very
long response times during initial page loads when trying to access other
sites. Like some limit on the number of connections somewhere, which then causes squid
to hang/loop until the connection is established. So squid would be the victim only.
BTW: These small boxes from open-mesh.com I am hacking are very neat for
small hotspots.

 



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Anybody-using-squid-on-openWRT-tp4667335p4667387.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Squid not listening on any port

2014-08-26 Thread babajaga
This is a bit strange:
2014/08/25 09:19:42| pinger: Initialising ICMP pinger ...
2014/08/25 09:19:42| pinger: ICMP socket opened.
2014/08/25 09:19:42| Pinger exiting.
2014/08/25 09:21:04| Current Directory is /root 

1) Pinger exiting. You might try to disable the pinger in squid.conf:
   pinger_enable off
2) Did you manually restart squid at 09:21:04 ?


Just for completeness: please publish your squid.conf, without comments,
anonymized.



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-not-listening-on-any-port-tp4667004p4667388.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Squid not listening on any port

2014-08-25 Thread babajaga
I would first eliminate the following warnings:
2014/08/25 09:21:04| Warning: empty ACL: acl blockfiles urlpath_regex -i
/etc/squid/local/bad/blockfiles
2014/08/25 09:21:04| WARNING: log name now starts with a module name. Use
'stdio:/var/log/squid/access.log'
2014/08/25 09:21:04| WARNING: log name now starts with a module name. Use
'stdio:/var/log/squid/store.log' 

and enable cache.log.
There might be some more info in it.



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-not-listening-on-any-port-tp4667004p4667375.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Filter squid cached files to multiple cache dirs

2014-08-23 Thread babajaga
Have a look at cache_dir in squid.conf. There are the options min-size
and max-size.
So you can specify size ranges for the objects cached in the different
cache_dirs.
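For example, something like this (a sketch; the sizes, paths and the aufs type are my assumptions):

```
# objects up to 64 KB go to the first dir,
# 64 KB .. 50 MB to the second (min-size/max-size are bytes)
cache_dir aufs /cache/small 1000 16 256 max-size=65536
cache_dir aufs /cache/large 20000 64 256 min-size=65537 max-size=52428800
```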




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Filter-squid-cached-files-to-multiple-cache-dirs-tp4667347p4667349.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Filter squid cached files to multiple cache dirs

2014-08-23 Thread babajaga
In the past, with older squids, there was a bug regarding a conflict between the
general parameter
maximum_object_size
and cache_dir max-size, concerning their order (maybe their values ?) in squid.conf.
I can't remember exactly; I think maximum_object_size has to come before cache_dir in
squid.conf, imposing the highest limit.
It should not contradict cache_dir max-size.
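So, to stay on the safe side, keep an ordering like this (a sketch; the values are my assumptions):

```
# global limit first, large enough to cover every cache_dir ...
maximum_object_size 50 MB
# ... then the cache_dir lines with their per-dir limits (in bytes)
cache_dir aufs /cache 10000 16 256 max-size=52428800
```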








--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Filter-squid-cached-files-to-multiple-cache-dirs-tp4667347p4667358.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Anybody using squid on openWRT ?

2014-08-22 Thread babajaga
Just trying to use the official package for openWRT, which is based on squid2.7
only.
Having detected some DNS issues: does anybody use squid on openWRT, and
which squid version ?



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Anybody-using-squid-on-openWRT-tp4667335.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Anybody using squid on openWRT ?

2014-08-22 Thread babajaga
Sounds good. I also do not like C++ :-)

squid2.7 from openWRT is running on my Open-Mesh box; besides the DNS issues I
have not found any problem. Only a bit slow.
The DNS issues are related to advert sites only, which is a bit strange. Looks
like some tricks regarding TTL/DNS-based load sharing, I guess.
So I just block the well-known ad sites, and a few more, and it works
(slowly) on an AR71xx CPU with 64MB RAM.
Maintaining the block list is a bit inconvenient, though.




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Anybody-using-squid-on-openWRT-tp4667335p4667337.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Anybody using squid on openWRT ?

2014-08-22 Thread babajaga
Interesting. Have you seen any DNS issues ?
For details, pls ref. here:
http://squid-web-proxy-cache.1019090.n4.nabble.com/Very-slow-site-via-squid-td4667243.html

Or, can you reproduce it here:
www.spiegel.de





--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Anybody-using-squid-on-openWRT-tp4667335p4667339.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] RE: Anybody using squid on openWRT ?

2014-08-22 Thread babajaga
@James:
For details of my problems, pls ref. here:
http://squid-web-proxy-cache.1019090.n4.nabble.com/Very-slow-site-via-squid-td4667243.html

Not sure that it is really squid. The effect is slow loading of objects from
ad servers.
As I have an open-mesh AP with 64MB RAM, my squid2.7 does memory-only caching,
plus some ACLs and forwarding of some traffic to another upstream proxy on the
web.
One very slow page is here:
www.spiegel.de
It calls
*.meetrics.de , which loads veeery slowly.
So, in case you can confirm/deny slow response times to this site, I need to
look somewhere else for the bug.
Which would be great help already.



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Anybody-using-squid-on-openWRT-tp4667335p4667341.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Very slow site via squid

2014-08-19 Thread babajaga
> The latest squid-3.x stable releases may be able to help with this.
Actually, I am trying to use the standard package of squid for openWRT,
which is squid2.7.
So I would need to build my own one.

> Also, in my experience the worst slow domains like this are usually
> advertising hosts. So blocking their transactions outright (and quickly)
> can boost page load time a huge amount.
Correct, it is an advert site. Using hosts I block it already; however, I
was wondering why this does not happen when running without squid. This
site is using varying names, like dcXX.s290.meetrics.net, with XX changing
on almost every browser session, which points to a DNS issue.
Could there be any interference between a TRANSPARENT squid and fast
DNS resolution ?
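In case it is squid's own DNS caching, a few squid.conf knobs might be worth a try (a sketch; the values are guesses, not a tested recommendation):

```
# resolvers to use instead of /etc/resolv.conf
dns_nameservers 8.8.8.8 8.8.4.4
# cap how long successful lookups are cached, so that
# short-TTL load-sharing names do not go stale
positive_dns_ttl 60 seconds
negative_dns_ttl 10 seconds
```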




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Very-slow-site-via-squid-tp4667243p4667263.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Very slow site via squid

2014-08-18 Thread babajaga
I have a squid 2.7 setup on openWRT, running on a 400Mhz/64MB embedded
system.
First of all, it is a bit slow in general (which is another issue), but one site is
especially slow when accessed via squid:

1408356096.498  25061 10.255.228.5 TCP_MISS/200 379 GET
http://dc73.s290.meetrics.net/bb-mx/submit? - DIRECT/78.46.90.182 image/gif
1408356103.801  46137 10.255.228.5 TCP_MISS/200 379 GET
http://dc73.s290.meetrics.net/bb-mx/submit? - DIRECT/78.46.90.182 image/gif

Digging deeper (squid.conf: debug_options ALL,9), I see this:
2014/08/18 11:17:26| commConnectStart: FD 198, dc44.s290.meetrics.net:80
2014/08/18 11:18:00| fwdConnectDone: FD 198:
'http://dc44.s290.meetrics.net/bb-mx/submit?//oxNGf

which should explain the slowness.

Example of http-headers:

Cache-Control: no-cache,no-store,must-revalidate
Content-Length: 43
Content-Type: image/gif
Date: Mon, 18 Aug 2014 10:04:52 GMT
Expires: Mon, 18 Aug 2014 10:04:51 GMT
Pragma: no-cache
Server: nginx
X-Cache: MISS from my-embedded-proxy
X-Cache-Lookup: MISS from my-embedded-proxy:3128
---
Accept: image/png,image/*;q=0.8,*/*;q=0.5
Accept-Encoding: gzip, deflate
Accept-Language: de,en-US;q=0.7,en;q=0.3
Connection: keep-alive
Cookie: id=721557E9-A0E0-C549-7D6A-B2D622DA4B1F
DNT: 1
Host: dc73.s290.meetrics.net
Referer: http://www.spiegel.de/
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:31.0) Gecko/20100101
Firefox/31.0

I can only suspect something special regarding their DNS.
Any other idea ?










--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Very-slow-site-via-squid-tp4667243.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: writing storeid.pl file

2014-08-13 Thread babajaga
Real, but obsolete example (squid2.7):

#!/usr/bin/perl
# storeurl rewriter: map the varying youtube CDN URLs onto one canonical key
$|=1;
while (<STDIN>) {
    chomp;
    @X = split;
    if ($X[0] =~ /(youtube|google).*videoplayback\?/) {
        @itag  = m/[?](itag=[0-9]*)/;
        @id    = m/[?](id=[^\s]*)/;
        @range = m/[?](range=[^\s]*)/;
        @begin = m/[?](begin=[^\s]*)/;
        print "http://video-srv.youtube.com.SQUIDINTERNAL/@id@itag@range@begin\n";
    } else {
        print $X[0] . "\n";
    }
}

Send me a beer :-)



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/writing-storeid-pl-file-tp4667206p4667208.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Announce: ETPLC projet check Treats on Squid logs.

2014-08-01 Thread babajaga
Hi, looks interesting.
Which of the 3 variants (.pl, .py2/3) do you think is the fastest one ?
I am willing to trade RAM usage for speed on my embedded system.



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Announce-ETPLC-projet-check-Treats-on-Squid-logs-tp4667111p4667114.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Announce: ETPLC projet check Treats on Squid logs.

2014-08-01 Thread babajaga
Please correct your log format specification:
http://etplc.org/squid.html
The actual version results in
FATAL: Can't parse configuration token: '%h %{Referer}h
%{Cookie}h'




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Announce-ETPLC-projet-check-Treats-on-Squid-logs-tp4667111p4667115.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] RE: YouTube Resolution Locker

2014-07-26 Thread babajaga
> it probably wouldn't work anyway, unless youtube really did use a
> consistent url domain name for their content delivery network.
Not correct. It is possible to cache youtube's content using StoreID.
Additionally, locking the resolution is fairly trivial, as the requested
youtube URL contains the requested resolution as one of the ... parameters. It just
needs modification.
I have run a free youtube proxy myself for years already, and also do the
resolution locking myself, down to low res, to avoid overload of the proxy.




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/YouTube-Resolution-Locker-tp4667042p4667064.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Never used Squid, need to access it

2014-07-25 Thread babajaga
> how to actually access the software itself.

Please be more specific. What do you want to know or achieve ?

(Usually the config files are found either in /etc OR in /usr/local/squid/etc.)
Search for squid.conf. That is the entry point for the features used.

Depending on whether squid has been installed from a binary package or
not, you might also find the sources.




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Never-used-Squid-need-to-access-it-tp4667025p4667026.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: 3.HEAD and delay pools

2014-07-25 Thread babajaga
What do you want to achieve ?
You might also refer to my responses here:
http://squid-web-proxy-cache.1019090.n4.nabble.com/Re-split-the-connexion-using-Squid-td4666739.html#a4666742



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/3-HEAD-and-delay-pools-tp4667023p4667027.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Three questions about Squid configuration

2014-07-16 Thread babajaga
> there is just one network in both the client and server side.
> On the client side, I just added the OUTPUT DNAT iptables rule to make it
> match the 3128 IP and port of the remote server.

Sorry, I am a bit confused.
Please read carefully:
#Example for squid and NAT on same machine: !!
iptables -t nat -A OUTPUT -p tcp --dport 80 -j DNAT --to-destination
SQUIDIP:3128

This also means that the client machine (running the browser, transparently)
and the squid machine are in the same net, and that squid then forwards the
request to the real destination/server.

According to your posts, squid and NAT seem NOT to be on the same machine.




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Three-questions-about-Squid-configuration-tp4666931p4666949.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Three questions about Squid configuration

2014-07-15 Thread babajaga
Regarding the first issue:
Have a look here for a correct solution:
http://wiki.squid-cache.org/ConfigExamples/Intercept/AtSource

#Example for squid and NAT on same machine:
iptables -t nat -A OUTPUT -p tcp --dport 80 -j DNAT --to-destination
SQUIDIP:3128
#Replace SQUIDIP with the public IP which squid may use for its listening
port and outbound connections.

You are redirecting port 8080 ... That means you have a proxy explicitly
set up in the browser.
Do not do this for a transparent squid. That is the purpose of the setup :-)






--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Three-questions-about-Squid-configuration-tp4666931p4666933.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Problem to set up multi-cpu multi-ports squid 3.3.12

2014-07-14 Thread babajaga
Besides SMP, there is still the old-fashioned option of multiple squid instances
in a sandwich config:
http://wiki.squid-cache.org/MultipleInstances

Besides the described port rotation, you can set up 3 squids, for example:
one frontend, just doing ACLs and request dispatching (carp), and 2
backends with real caching.
This variant has the advantage of avoiding double caching, which might happen
in the port-rotation alternative.
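A minimal sketch of such a frontend squid.conf (the ports and the no-caching choice are my assumptions):

```
# frontend: ACLs + CARP dispatching only, no cache of its own
http_port 3128
cache_peer 127.0.0.1 parent 4001 0 carp no-query no-digest
cache_peer 127.0.0.1 parent 4002 0 carp no-query no-digest
never_direct allow all
cache deny all
```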



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Problem-to-set-up-multi-cpu-multi-ports-squid-3-3-12-tp4666906p4666915.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: how to implement access control using connetcing hostname and port

2014-07-11 Thread babajaga
> It is not true that IOS and others do not support authentication.
> They do.
I think this is not the point. As the starter of the thread wrote:
"...makes it possible to proxy a lot of MOBILE APPS on ios devices and
android which don't support traditional proxy authentication."
Many APPs do not handle proxy auth.

But I would like to know: what is the reason for proxying the APPS ? And
would caching of their http data (if any !) really make sense ?



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/how-to-implement-access-control-using-connetcing-hostname-and-port-tp4666818p4666833.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Blocking spesific url

2014-07-11 Thread babajaga
Pls, publish your complete non-working squid.conf
OR
at least the part invoking your
/etc/squid3/adservers



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Blocking-spesific-url-tp4666791p4666836.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: how to implement access control using connetcing hostname and port

2014-07-11 Thread babajaga
> i get a new proxy address (eg,3121212.proxy.com) and a port number (in the
> range of 3). it's not the listening port.
It is not their listening port ? I doubt it; how else could you use it ?
I can think of some type of DNS rotation they use. When their proxy.com
at any time slot points to another of their IPs out of the pool reserved for
this domain, they modify their DNS A-record for the next time slot, to use
another IP.

And, when having a second pool of IPs, they might also rotate the
.proxy.com (CNAME) within their DNS record. Using some type of
redirection, they finally always point to the same physical proxy.

Because of the IP rotation, the GFW will have problems dynamically detecting
this service by means of traffic to the same IP. However, the vast amount of DNS
requests for proxy.com might be a hint, as the TTL must be just the (short)
time slot.

Intruders into the service would need to scan the correct IPs/ports during the
correct time slot, and then would have access only during this time slot. Even
this might be minimized by checking the intruder's IP characteristics, like
country. Or by integrating some type of port-scan detection, to block the
potential intruder.
So more or less safe, unless a lot of effort is invested in figuring out the
DNS tricks.

So there is no question that such a scheme can be done using squid,
because the real effort has to be invested in the DNS manipulation.


--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/how-to-implement-access-control-using-connetcing-hostname-and-port-tp4666818p4666842.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: how to implement access control using connetcing hostname and port

2014-07-11 Thread babajaga
In case the port-knocking supervisor keeps track of the knocking IP, then
finally the real proxy port is opened ONLY for this knocking IP.
So, unless you know how the port knocking is done correctly, you will not be
granted access to the real proxy port.
Practically secure, in case of:
- checking for port scanning; remember the scanner's IP
- detecting the port-knocking IP
- IF scanner's IP: deny access to any port
- otherwise: forward to the real proxy port

and DNS/port rotation used.

I like it :-)

Although, with quite some effort, you might be able to be the successful
intruder. (Or the GFW.)


--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/how-to-implement-access-control-using-connetcing-hostname-and-port-tp4666818p4666858.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Transparent proxying and forwarding loop detected

2014-07-10 Thread babajaga
Having a very similar config like you, up and running(squid+chilli), you
better ask in  a chilli forum OR in the chilli group of linkedin. (I am
there, too :-)
Cause there are several issues to be considered with our setup:
- Proper config of iptables, as chilli also modifies them. And for
transparent squid you need special rule(s).
- Do not mix up transparent and non-transparent in regards to squid/chilli:
Coova has a configuration option for the IP and port of an
optional proxy - all web traffic from wireless clients will be routed
through this. I've set it to 10.0.0.1:3128

So you set up chilli for NON-transparent proxy, most likely.
As in chillis config NOT to enable HS_POSTAUTH_PROXYPORT for TRANSPARENT !

http_port 10.0.0.1:3128 transparent #shouldn't it be intercept for squid 3.3
?


Again, switch to one of the chilli forums, pls, to be better served. 



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Transparent-proxying-and-forwarding-loop-detected-tp4666810p4666811.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Waiting for www...

2014-07-09 Thread babajaga
Have a look here for a correct solution:
http://wiki.squid-cache.org/ConfigExamples/Intercept/AtSource

(Example: Replace SQUIDIP with the public IP which squid may use for its
listening port and outbound connections. )

iptables -t nat -A OUTPUT -p tcp --dport 80 -j DNAT --to-destination
SQUIDIP:3129



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Waiting-for-www-tp4666774p4666779.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Handling client-side request floods

2014-07-08 Thread babajaga
Rate limiting using iptables
http://thelowedown.wordpress.com/2008/07/03/iptables-how-to-use-the-limits-module/
seems to be the simplest solution for an upper limit of requests/time.

Practically, you want the same as an administrator who wants to protect his
web server against a DoS attack by means of a flood of incoming
http requests. So you might also google for "apache request limit" or
similar.
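A sketch of such a rule pair for squid's own port (the numbers are arbitrary assumptions and need tuning):

```
# accept at most 50 new connections/second (burst 100) to port 3128 ...
iptables -A INPUT -p tcp --dport 3128 -m state --state NEW -m limit --limit 50/second --limit-burst 100 -j ACCEPT
# ... and drop the excess
iptables -A INPUT -p tcp --dport 3128 -m state --state NEW -j DROP
```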




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Handling-client-side-request-floods-tp4666726p4666736.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: split the connexion using Squid

2014-07-08 Thread babajaga
For a very first beginning, you might look into the delay_pools of squid, to
distribute and limit download speed, at least. Works only for proxied
traffic, of course, so torrents etc. are not throttled.
But easy to implement. 



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Re-split-the-connexion-using-Squid-tp4666739p4666742.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: split the connexion using Squid

2014-07-08 Thread babajaga
Not percentage-wise, only in absolute values.

I had problems myself to even vaguely understand the doc about
delay_pools; look into the documented squid.conf. So somebody else should
answer your detailed questions, if any.
However,
I use it to put an upper limit of 125 kB/s download speed on every user,
with this simple config (squid2.7):
.
delay_pools 1 #just one pool
delay_class 1 2 #class 2
delay_access 1 allow all #everybody will be throttled; you might set up
another pool allowing higher bandwidth
delay_parameters 1 -1/-1 125000/125000 #values are bytes/s: 125 kB/s per
client, no bursts. You might allow
#delay_parameters 1 -1/-1 125000/250000 #... a 250 kB bucket, for the
initial page load

which should be adequate for interactive browsing. As you have a 6MBit WAN,
this should also leave quite some spare bandwidth for non-proxied traffic,
as not all of your 30 users will hit the enter button
simultaneously to load another page.




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Re-split-the-connexion-using-Squid-tp4666739p4666745.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Antwort: [squid-users] Re: Antwort: [squid-users] Re: Antwort: [squid-users] Re: Antwort: Re: [squid-users] Question to cache_peer

2014-07-05 Thread babajaga
> So the behaviour you are seeing looks more
> like a bug in always_direct processing.
Which might be specific to the squid version OR the squid.conf in use.
I have several squids of different versions with cache_peer in production.
The config needs to be different:
2.7:
hierarchy_stoplist cgi-bin ?
always_direct deny fwd_youtube  #so it might depend upon usage/type of ACL
never_direct allow fwd_youtube
#needs both

3.3.11:
hierarchy_stoplist cgi-bin ?
never_direct allow all
is sufficient.

3.4.5:
hierarchy_stoplist cgi-bin ?
never_direct allow all
sufficient.

Willing to do some more research on this one, in case I get some
instructions on what to look at (special debug ?)






--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Question-to-cache-peer-tp416p464.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Antwort: Re: [squid-users] Question to cache_peer

2014-07-04 Thread babajaga
Hassan definitely is correct.
So maybe you should just use a working config before trying alternatives:


#ALL your ACLs first in squid.conf !
.
cache_peer xx.xx.xx.xx parent 6139 0 no-query no-digest no-netdb-exchange
never_direct allow all


If this does not work, please post your squid.conf again, as there were a few
other annoyances.
Any special messages in cache.log ?



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Question-to-cache-peer-tp416p441.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Antwort: [squid-users] Re: Antwort: Re: [squid-users] Question to cache_peer

2014-07-04 Thread babajaga
OK, then we will have a look at the ACL decisions (often a problem) and the
peer selection within squid, using

debug_options ALL,5 33,2 28,9 44,3

in squid.conf.

This will produce a detailed log about ACL processing and peer selection,
which is the most interesting part.
It will cause a lot of output to cache.log, so only use it for a short
period of time.

In cache.log then simply search for "peer_select" and have a look around
why the parent cache is not chosen.



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Question-to-cache-peer-tp416p444.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Antwort: [squid-users] Re: Antwort: [squid-users] Re: Antwort: Re: [squid-users] Question to cache_peer

2014-07-04 Thread babajaga
So squid is doing exactly what you are asking for:

cache_peer = local=0.0.0.0 remote=194.99.121.200:3128 flags=1

But probably this is not what you want, as it is the public IP on the web
the request is forwarded to.
So you most likely should use an internal/local IP of your peer here, OR
there is a problem with your routing.

BTW: babajaga is a Russian witch. Sort of.



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Question-to-cache-peer-tp416p446.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: what is cached

2014-07-03 Thread babajaga
It depends on.
Facebook
now uses https, which can not be cached. Valif for other sites, using https,
too.
Or, in other words, only http can be cached.

 and game sites where chat is available.
So facebook not (because of https), game sites may be, in case of using http
for chat.

 Are facebook posts cached or just images?
Neither or, as https is used.

 Is the chat on chat sites cached in any format that can be available. 
In case, of http, may be. Not all http can be ceache, either.

Is gaming chat cached and accessible if needed?
MIGHT be cache in case of http, which is unlikely.  Usually, games use
teamspeak, for example, which can not be cached.



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/what-is-cached-tp410p411.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: access denied

2014-07-03 Thread babajaga
Change
http_port 3129 transparent
to
http_port 3129 intercept


You did not get an error msg in cache.log ?

If this does not help, please publish
a) your browser proxy setup
b) your firewall rules




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/access-denied-tp419p428.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Probs with squid 3.4.4 and cache_peer parent

2014-07-01 Thread babajaga
Then let's try to get rid of the error messages in the squid log.

This is my standard command for a parent proxy all requests are forwarded to:
cache_peer xxx.xxx.xxx.xx parent 3128 0 no-query no-digest
no-netdb-exchange

This should get rid of the errors regarding the pinger. Correct ? Still
crashing ?



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Probs-with-squid-3-4-4-and-cache-peer-parent-tp4666557p4666573.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Probs with squid 3.4.4 and cache_peer parent

2014-07-01 Thread babajaga
Looks like your problem is caused by the failing pinger, which means
--enable-icmp is among the configure options your squid was built with. So
another possibility would be to remove this configure option. AFAIK, in your
situation the pinger would only be an advantage (or even necessary) if there
were alternative upstream proxies, to detect the closest one.
But as your squid has no choice, you should be able to disable it
completely.
Or give proper rights to the pinger binary, although that is redundant.
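
If rebuilding without --enable-icmp is inconvenient: recent squid versions
also have a pinger_enable directive, so (assuming a 3.x squid) the pinger can
be switched off in squid.conf instead. A minimal fragment:

```
# disable the ICMP pinger helper without rebuilding squid
pinger_enable off
```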



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Probs-with-squid-3-4-4-and-cache-peer-parent-tp4666557p4666580.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Probs with squid 3.4.4 and cache_peer parent

2014-06-30 Thread babajaga
Did you try without the antivirus? I am not that deep into the squid code, but
I would suspect a problem in the interface to Trend first, as squid is crashing
already during/immediately after startup.

BTW: What should happen here ?

maximum_object_size 1 KB
maximum_object_size 50 MB 

You can probably delete the first of them, in both squid.conf files.

Regards



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Probs-with-squid-3-4-4-and-cache-peer-parent-tp4666557p4666561.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Connection pinning in Squid 3.1

2014-06-30 Thread babajaga
Any reason not to build squid from the newest sources?
It will probably increase your chances of getting better support, as 3.1 is not
much newer than 2.7 :-)
(Still using the latest 2.7, with private mods, myself. Solid as a rock.)



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Connection-pinning-in-Squid-3-1-tp4666560p4666562.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: redirector/rewriter help

2014-06-13 Thread babajaga
StoreID should help you; may be together with a special helper. There are a
few examples in the wiki.
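
To make the idea concrete: a Store-ID helper reads one request per line from
Squid on stdin and answers with a normalized cache key, or ERR when it has
nothing to say. The sketch below is a minimal Python version; the CDN host and
the assumption that a stable id= query parameter identifies the object are
purely illustrative, not a recipe for any real site.

```python
import sys
from urllib.parse import urlsplit, parse_qs

def store_id(url):
    """Map URLs that differ only in volatile query parameters onto one
    cache key, keeping an assumed stable 'id' parameter. Returns None
    when the URL carries no such parameter, so the caller answers ERR."""
    parts = urlsplit(url)
    params = parse_qs(parts.query)
    if "id" in params:
        return "http://%s%s?id=%s" % (parts.netloc, parts.path, params["id"][0])
    return None

def main():
    # Without concurrency, Squid sends: URL [extras...] per line.
    for line in sys.stdin:
        fields = line.split()
        if not fields:
            continue
        key = store_id(fields[0])
        if key:
            sys.stdout.write("OK store-id=%s\n" % key)
        else:
            sys.stdout.write("ERR\n")
        sys.stdout.flush()

if __name__ == "__main__":
    main()
```

It would be wired up in squid.conf with a store_id_program line plus a
store_id_access rule (available since squid 3.4).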



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/redirector-rewriter-help-tp4666339p4666340.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: cache problem

2014-06-11 Thread babajaga
If I understand correctly, the 82 MB file is flushed from the cache after the
first stalled/partial download from the cache?
If so, it would be a good idea to post the HTTP headers of the cached file.
You might also add
 ignore-reload ignore-private negative-ttl=0

What is your
maximum_object_size_in_memory
in squid.conf ?

If the virus scanner is ClamAV, it _might_ cause problems with large
files. So you should try downloading another cached large file, just to
compare.

Regards :-)
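
For reference, those are refresh_pattern options in squid 2.7, so the
suggestion would translate into a line like the following (the .exe pattern
and the timing values are only an illustration):

```
refresh_pattern -i \.exe$ 1440 50% 10080 ignore-reload ignore-private negative-ttl=0
```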





--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/cache-problem-tp4666287p4666303.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: cache problem

2014-06-11 Thread babajaga

 What is your
 maximum_object_size_in_memory

The default value: 512 kB


Increase it above 82 MB. Squid 2.7 usually keeps in-transit objects in
memory, then in the memory cache, until they are swapped out later. So this
limit might inhibit the swap-out to disk, because the object is never fully
cached in memory first.

Regards from the Ruhr area




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/cache-problem-tp4666287p4666309.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: How to build ext_session_acl ?

2014-06-09 Thread babajaga
Thanks, that was the problem: dblib-dev was not installed.
Which leads me to a suggestion:
As it is general policy to include most features of squid in a plain
./configure, which also includes _all_
external auth helpers, configure should also check that _all_ dependencies
are satisfied.
Obviously ext_session_acl is silently not built when dblib-dev is
missing.
I consider this rather inconsistent.
Alternative: change the very generous policy of including almost everything
to the opposite, a
minimalistic default ./configure, and then do appropriate checking for
dependencies.
This might also help in tracing down bugs, as a squid exhibiting a bug would
have a minimal set of
code modules included.



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/How-to-build-ext-session-acl-tp4666258p4666265.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] How to build ext_session_acl ?

2014-06-08 Thread babajaga
Trying to build the external helper ext_session_acl.cc in
squid-3.4.5-20140603-r13143.
Even with the defaults a lot of helpers are built after ./configure, but this
one is not.
(I do not want ext_sql_session_acl, which is built successfully.)



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/How-to-build-ext-session-acl-tp4666258.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Why not cached ?

2014-05-27 Thread babajaga
I was wondering about very few HITs in this squid installation, and did some
checking:

access.log:
1401203150.334   1604 10.1.10.121 TCP_MISS/200 718707 GET
http://l5.yimg.com/av/moneyball/ads/0-1399331780-5313.jpg -
ORIGINAL_DST/66.196.65.174 image/jpeg
1401203186.100   1327 10.1.10.121 TCP_MISS/200 718707 GET
http://l5.yimg.com/av/moneyball/ads/0-1399331780-5313.jpg -
ORIGINAL_DST/66.196.65.174 image/jpeg

cache.log:
2014/05/27 14:52:12 kid1| Starting Squid Cache version 3.4.5-20140514-r13135
for i686-pc-linux-gnu...
2014/05/27 14:52:12 kid1| Process ID 7477
2014/05/27 14:52:12 kid1| Process Roles: worker
2014/05/27 14:52:12 kid1| With 1024 file descriptors available
2014/05/27 14:52:12 kid1| Initializing IP Cache...
2014/05/27 14:52:12 kid1| DNS Socket created at [::], FD 7
2014/05/27 14:52:12 kid1| DNS Socket created at 0.0.0.0, FD 8
2014/05/27 14:52:12 kid1| Adding nameserver 127.0.0.1 from /etc/resolv.conf
2014/05/27 14:52:12 kid1| Logfile: opening log
daemon:/tmp/var/log/squid/access.log
2014/05/27 14:52:12 kid1| Logfile Daemon: opening log
/tmp/var/log/squid/access.log
2014/05/27 14:52:12 kid1| Logfile: opening log
daemon:/tmp/var/log/squid/store.log
2014/05/27 14:52:12 kid1| Logfile Daemon: opening log
/tmp/var/log/squid/store.log
2014/05/27 14:52:12 kid1| Swap maxSize 0 + 2097152 KB, estimated 161319
objects
2014/05/27 14:52:12 kid1| Target number of buckets: 8065
2014/05/27 14:52:12 kid1| Using 8192 Store buckets
2014/05/27 14:52:12 kid1| Max Mem  size: 2097152 KB
2014/05/27 14:52:12 kid1| Max Swap size: 0 KB
2014/05/27 14:52:12 kid1| Using Least Load store dir selection
2014/05/27 14:52:12 kid1| Set Current Directory to /tmp
2014/05/27 14:52:12 kid1| Finished loading MIME types and icons.
2014/05/27 14:52:12 kid1| HTCP Disabled.
2014/05/27 14:52:12 kid1| Squid plugin modules loaded: 0
2014/05/27 14:52:12 kid1| Accepting HTTP Socket connections at
local=10.1.10.1:3129 remote=[::] FD 13 flags=9
2014/05/27 14:52:12 kid1| Accepting NAT intercepted HTTP Socket connections
at local=10.1.10.1:3128 remote=[::] FD 14 flags=41
2014/05/27 14:52:13 kid1| storeLateRelease: released 0 objects

squid.conf:
root@voyage:/usr/local/squid/etc# vi squid.conf
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network

acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http

acl SSL_ports port 443
acl CONNECT method CONNECT
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access deny manager
http_access allow localnet
http_access allow localhost
http_access deny all
http_port 10.1.10.1:3129
http_port 10.1.10.1:3128 intercept
cache_mem 2048 MB
memory_cache_mode always
access_log daemon:/tmp/var/log/squid/access.log squid
cache_store_log daemon:/tmp/var/log/squid/store.log squid
logfile_rotate 3
pid_filename /var/run/squid.pid
cache_log /tmp/var/log/squid/cache.log
coredump_dir /tmp
refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern .   0   20% 4320
shutdown_lifetime 10 seconds


This squid is running on a scaled-down Debian, with no HDD and a mobile
internet connection. So /tmp is in fact a RAM disk, and a good hit rate is
very welcome. The example above should be cacheable, should it not? squid was
accessed on port 3128, intercept.






--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Why-not-cached-tp4666117.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Why not cached ?

2014-05-27 Thread babajaga
Thanks, you are the man!
The problem was here in squid.conf:
maximum_object_size_in_memory

The default is 512 kB, which is too small.
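
With a memory-only cache (Max Swap size is 0 in the log above), the in-memory
limit is the only one that matters, so it has to exceed the largest object you
want a HIT on. A hedged squid.conf fragment (the 8 MB figure is arbitrary and
merely comfortably above the ~700 kB images in the log):

```
cache_mem 2048 MB
maximum_object_size_in_memory 8 MB
```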



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Why-not-cached-tp4666117p4666123.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Squid in a WiFi Captive portal scenario

2014-05-15 Thread babajaga
Not yet, but from what I have heard about this stuff, at least for Apple
devices, it looks like I will be forced to have a look at it soon.



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-in-a-WiFi-Captive-portal-scenario-tp4665950p4665978.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: How to throttle wireless clients MISS

2014-05-10 Thread babajaga
No need to guess when you can test :-)
I did; but you are never absolutely sure that you have covered all test cases
:-)

OK; at least my guess is confirmed.
Is there any other possible solution that would satisfy this reasonable idea?




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/How-to-throttle-wireless-clients-MISS-tp4665898p4665905.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Squid and Hangout (google) problem

2014-05-08 Thread babajaga
Sorry, what are
Google Hangouts videos?

Maybe you can provide a URL as an example to try/test?



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Re-Squid-and-Hangout-google-problem-tp4665683p4665877.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] How to throttle wireless clients MISS

2014-05-08 Thread babajaga
When I have squid installed on a system with a wireless upstream link, how do
I throttle downloads to clients on TCP_MISS only, so as not to saturate the
upstream link?
Currently I am using delay pools for the clients, but I guess these also
unnecessarily throttle TCP_HITs.



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/How-to-throttle-wireless-clients-MISS-tp4665898.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: squid caching dynamic content

2014-05-01 Thread babajaga
You should start here:
http://wiki.squid-cache.org/ConfigExamples/DynamicContent




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/squid-caching-dynamic-content-tp4665779p4665780.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: squid caching dynamic content

2014-05-01 Thread babajaga
youtube is another story... 
Yep. What is written in the wiki seems to be obsolete (once again): AFAICS,
the id for a video is no longer unique, which means it can no longer be used
as part of the Store-ID :-(

Obviously, the guys from youtube are reading here as well, and doing
everything reasonable for them to harm caching.



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/squid-caching-dynamic-content-tp4665779p4665783.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: squid caching dynamic content

2014-05-01 Thread babajaga
It is unique 
That is in the past. I checked my favourite video once more in detail today,
as I was wondering about the dropping hit rate.
It might be country-dependent, though.
Please verify:
Video from Dire straits: http://www.youtube.com/watch?v=8Pa9x9fZBtY
From my logs:
1398957788.924 40 88.78.165.175 TCP_MISS/200 55727 GET
http://r7---sn-a8au-nuae.googlevideo.com/videoplayback?c=webclen=10270797cpn=W_QOZclsT2zLDiglcver=as3dur=646.652expire=1398981332fexp=900225%2C912524%2C945043%2C910207%2C916611%2C937417%2C913434%2C936923%2C3300073%2C3300114%2C3300131%2C3300137%2C3300164%2C3310366%2C3310635%2C3310649gcr=usgir=yesid=o-ALN3b5RL_HFbn0uq6wVdN7ok381e5Klr5aXnm2-j_rVgip=209.239.112.105ipbits=0itag=140keepalive=yeskey=yt5lmt=1389150223974581ms=aumt=1398957076mv=mmws=yesrange=10215424-10455039ratebypass=yessignature=C94B087DF4A3CF0CCF06AFE4FDC7A729F5C6B606.BF87024E4845DCD025BF2AAB9E408F4824E32085source=youtubesparams=clen%2Cdur%2Cgcr%2Cgir%2Cid%2Cip%2Cipbits%2Citag%2Clmt%2Csource%2Cupn%2Cexpiresver=3upn=LnXkp11yenY
re27md0x54bl26 CARP/127.0.0.1 application/octet-stream

1398959279.449 41 88.78.165.175 TCP_MISS/200 55727 GET
http://r7---sn-a8au-nuae.googlevideo.com/videoplayback?c=webclen=10270797cpn=MCtgQsjQaEvCa6lMcver=as3dur=646.652expire=1398981332fexp=947338%2C916624%2C929313%2C902534%2C937417%2C913434%2C936923%2C902408%2C3300073%2C3300114%2C3300131%2C3300137%2C3300164%2C3310366%2C3310635%2C3310649gcr=usgir=yesid=o-AJTyH4Z0kYlbLupmzkm6UGeWxO2d2KyQNUiluKAS28Kuip=209.239.112.105ipbits=0itag=140keepalive=yeskey=yt5lmt=1389150223974581ms=aumt=1398958548mv=mmws=yesrange=10215424-10455039ratebypass=yessignature=77AB96CA03864D43DC0A917F6AA4D0EC4A0C739B.21F0DE6753599D7298439D8B2C0496574061A641source=youtubesparams=clen%2Cdur%2Cgcr%2Cgir%2Cid%2Cip%2Cipbits%2Citag%2Clmt%2Csource%2Cupn%2Cexpiresver=3upn=xv54PP7C3To
re27md0x54bl26 CARP/127.0.0.2 application/octet-stream



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/squid-caching-dynamic-content-tp4665779p4665787.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: squid caching dynamic content

2014-05-01 Thread babajaga
No. But what does that have to do with the varying id for the same video,
which makes the documented Store-ID algorithm obsolete?
youtube already used this varying id some time ago, for quite a while.

(BTW: About a year ago youtube also used real range requests, and changed
after a few months to their range spec in the URL. Never mentioned in the
wiki, AFAIK.)



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/squid-caching-dynamic-content-tp4665779p4665794.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: StoreID - Need help with Yandex videos

2014-04-29 Thread babajaga
You can only get serious help if you specify the tags in the URL that
uniquely identify the video.
Maybe it is token= AND range=.. But maybe it is
something else?



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/StoreID-Need-help-with-Yandex-videos-tp4665739p4665749.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] How to split ssl-bump ?

2014-04-21 Thread babajaga
Is there a chance to do the following with squid:

client - https://example.com - squid_A - http://example.com - Dansguardian -
squid_B - https://example.com

As DansGuardian works on HTTP, squid_A should do the conversion https
-> http (ssl-bump, 50%),
forward the traffic to DG, which then forwards to squid_B to do the
conversion http -> https again.

Is this possible to do with squid?



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/How-to-split-ssl-bump-tp4665649.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Force Video content caching?

2014-04-21 Thread babajaga
Have a look here for url_rewrite:
http://wiki.squid-cache.org/ConfigExamples/DynamicContent/Coordinator
http://wiki.squid-cache.org/ConfigExamples/DynamicContent/YouTube/Discussion




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Force-Video-content-caching-tp4665650p4665652.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: How to make Squid 3.3.8 a transparent proxy?

2014-04-19 Thread babajaga
The problem is here:
HIER_DIRECT/127.0.0.1 ...
Strangely enough, squid forwards the request to 127.0.0.1.

I am not sure whether you need both ports to be specified:
http_port 3129
http_port 3128 intercept

In your setup, you need special firewall rules to avoid a loop:
DG forwards to port 80, squid intercepts and forwards to port 80, which must
NOT be intercepted again (hopefully).
So you should post your firewall rules as well.

Otherwise:
I always did it the other way around:
client --- (transparent) squid --- DG --- web
because
1) the client does not need to specify the proxy explicitly (in your setup, a MUST)
2) there is no need to cache content that is later blocked by DG
3) I am not sure any more whether DG supports a parent proxy

Then my setup matched the rules in
http://wiki.squid-cache.org/ConfigExamples/Intercept/LinuxRedirect

Only the line
cache_peer 127.0.0.1 parent DG-port 0 no-query no-digest no-netdb-exchange
needs to be added to squid.conf
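
Put together, the squid-in-front setup sketched above would need roughly this
in squid.conf (the DansGuardian port 8080 is an assumption; never_direct
forces all requests through the peer):

```
# hand every request to DansGuardian running on this host
cache_peer 127.0.0.1 parent 8080 0 no-query no-digest no-netdb-exchange
never_direct allow all
```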






--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/How-to-make-Squid-3-3-8-a-transparent-proxy-tp4665624p4665633.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Cache Windows Updates ONLY

2014-04-10 Thread babajaga
Should I change the
cache allow mywindowsupdates
always_direct allow all
... to
cache allow mywindowsupdates
cache deny all 

To cache ONLY the Windows updates,

cache allow mywindowsupdates
cache deny all

would be correct.

#
#always_direct allow all #This is NOT related to caching.




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Re-Cache-Windows-Updates-ONLY-tp4665520p4665524.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Cache Windows Updates ONLY

2014-04-10 Thread babajaga
The server won't deliver the file unless the tokens are in place.
Whenever a file is fetched, it appears to be the same irrespective of
the tokens. I will carry out more research based on checksums of
multiple files to make sure. 
I very much doubt it to be the same ..., because that would not make sense.
youtube does something similar for its videos, and there the tokens
contain additional info like the resolution of the movie, as it is
distributed in different resolutions, depending on the actual connection
speed, for instance.

So the only reason to have random tokens in your case would be to confuse
the caches, which I doubt. OR it might signal some info regarding the size
of the range requests. Then it would be safe to ignore the tokens, as you
are considering, as the complete file will be cached within squid and the
different ranges serviced from there. (Note: this is something youtube did
some time ago.)
So you might test with different connection speeds, too.





--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Re-Cache-Windows-Updates-ONLY-tp4665520p4665525.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: How to change redirection path to forward to www.earth.com/moon insted of moon.earth.com ?

2014-04-10 Thread babajaga
Search for the comments on

url_rewrite_program

in squid.conf. 
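
For the example in the thread title (moon.earth.com -> www.earth.com/moon), a
url_rewrite_program helper is just a loop over stdin lines whose first field
is the URL. A minimal Python sketch, assuming the classic reply format where
the helper simply echoes back the (possibly rewritten) URL; the earth.com
names come from the thread and are illustrative:

```python
import sys
from urllib.parse import urlsplit

def rewrite(url, domain="earth.com"):
    """Rewrite http://SUB.domain/path to http://www.domain/SUB/path.
    Returns the URL unchanged when it does not match the pattern."""
    parts = urlsplit(url)
    host = parts.netloc
    suffix = "." + domain
    if host.endswith(suffix) and host != "www." + domain:
        sub = host[:-len(suffix)]          # the subdomain, e.g. "moon"
        return "http://www.%s/%s%s" % (domain, sub, parts.path)
    return url

def main():
    # url_rewrite_program protocol: the URL is the first field of each
    # input line; the helper writes the resulting URL back per line.
    for line in sys.stdin:
        fields = line.split()
        if fields:
            sys.stdout.write(rewrite(fields[0]) + "\n")
            sys.stdout.flush()

if __name__ == "__main__":
    main()
```

Newer squid versions also accept an "OK rewrite-url=..." reply format; the
plain-URL reply above is the older style described in the squid.conf comments.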



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/How-to-change-redirection-path-to-forward-to-www-earth-com-moon-insted-of-moon-earth-com-tp4665521p4665527.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Squid brought down by hundreds of HEAD request to itself

2014-04-09 Thread babajaga
Some type of loop, I suspect. As you probably have parent squids configured.
In case, you have, pls also post parents squid.conf
It (almost) always makes sense, to post the squid.conf here. Just guessing
around does not help a lot.



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-brought-down-by-hundreds-of-HEAD-request-to-itself-tp4665513p4665514.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Caching not working for Youtube videos

2014-04-08 Thread babajaga
Hi, you are a bit late in detecting this issue :-)
youtube changed this some months ago already. Right now I cannot do further
research, but also look here:
http://squid-web-proxy-cache.1019090.n4.nabble.com/Do-we-have-an-algorithm-to-define-the-cachabillity-of-an-object-by-the-request-and-response-td4665473.html



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Caching-not-working-for-Youtube-videos-tp4665486p4665488.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Do we have an algorithm to define the cachabillity of an object by the request and response?

2014-04-08 Thread babajaga
the only way is to force a fetch of the full object

I do not see how this will solve the random (?) range issue without a lot
of new, clever coding.
I cannot seriously test for random ranges at the moment, but will definitely
do so.
(NOTE: by range I refer to an explicit range=xxx-yyy somewhere within the
URL, NOT a Range request in the HTTP header, which youtube used and then
dumped quite some time ago.)



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Do-we-have-an-algorithm-to-define-the-cachabillity-of-an-object-by-the-request-and-response-tp4665473p4665489.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: WARNING: Forwarding loop detected for:

2014-04-08 Thread babajaga
Please post your squid.conf, without comments.
And which URL exactly results in the forwarding loop?



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/WARNING-Forwarding-loop-detected-for-tp4665487p4665491.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Do we have an algorithm to define the cachabillity of an object by the request and response?

2014-04-08 Thread babajaga
If the range is done properly with Range: header then the future random
ranges can be served as HIT on the cached object.
Yes. 
But that is NOT the actual state with youtube; only history, unfortunately.

Problem remains if anything in the URL changes and/or the range detail
is sent in the URL query-string values.
That IS the actual state.

And it looks like the range details, as you call them, are NOT repeatable
ANY MORE (which means they WERE), even if you request the same video twice
from the same client, one request right after the other.
That is what I meant by random range. I will check random ranges in more
detail some time in the future.




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Do-we-have-an-algorithm-to-define-the-cachabillity-of-an-object-by-the-request-and-response-tp4665473p4665497.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Do we have an algorithm to define the cachabillity of an object by the request and response?

2014-04-08 Thread babajaga

 stripping the range header 
How often do I have to say it: there is no Range header any more! There was
one a year ago, maybe.
Now the range is within the URL!

Real world example, brand new:

1396987801.026   1766 127.0.0.1 TCP_MISS/200 930166 GET
http://r3---sn-a8au-nuae.googlevideo.com/videoplayback?c=webclen=8890573cpn=YbdH9EPrD2WaihVOcver=as3dur=229.296expire=1397011183fexp=931327%2C909708%2C943404%2C913564%2C921727%2C916624%2C931014%2C936106%2C937417%2C913434%2C936916%2C934022%2C936923%2C333%2C3300108%2C3300132%2C3300137%2C3300164%2C3310366%2C3310622%2C3310649gcr=usgir=yesid=o-AJr0zxHxn0iVmV-Cln_bZf3PMd4um4Qt9Thok1FphZR0ip=199.217.116.158ipbits=0itag=134keepalive=yeskey=yt5lmt=1384344824807223ms=aumt=1396987642mv=umws=yesrange=2789376-3719167ratebypass=yes;
  
!RANGE IN URL !! 
signature=622A2F30C82E4D26D6C9A88C2D08CBD7737DBD83.EFEABA801CDE1A0AE37F5F55161DD8994EC17788source=youtubesparams=clen%2Cdur%2Cgcr%2Cgir%2Cid%2Cip%2Cipbits%2Citag%2Clmt%2Csource%2Cupn%2Cexpiresver=3upn=b0ZrCxMNX5k
- DIRECT/4.53.166.142 application/octet-stream
Accept:%20*/*%0D%0AAccept-Encoding:%20gzip,deflate%0D%0AAccept-Language:%20de-DE,de;q=0.8,en-US;q=0.6,en;q=0.4%0D%0ACache-Control:%20max-age=0%0D%0AHost:%20r3---sn-a8au-nuae.googlevideo.com%0D%0AReferer:%20http://www.youtube.com/watch?v=hSjIz8oQuko%0D%0AUser-Agent:%20Mozilla/5.0%20(Windows%20NT%206.3;%20WOW64)%20AppleWebKit/537.36%20(KHTML,%20like%20Gecko)%20Chrome/33.0.1750.154%20Safari/537.36%0D%0A
HTTP/1.0%20200%20OK%0D%0ALast-Modified:%20Wed,%2013%20Nov%202013%2012:13:44%20GMT%0D%0ADate:%20Tue,%2008%20Apr%202014%2020:09:57%20GMT%0D%0AExpires:%20Tue,%2008%20Apr%202014%2020:09:57%20GMT%0D%0ACache-Control:%20private,%20max-age=23086%0D%0AContent-Type:%20application/octet-stream%0D%0AAccept-Ranges:%20bytes%0D%0AContent-Length:%20929792%0D%0AAlternate-Protocol:%2080:quic%0D%0AX-Content-Type-Options:%20nosniff%0D%0AConnection:%20close%0D%0AX-UA-Compatible:%20IE=edge%0D%0A%0D

And since this range=2789376-3719167 is now more or less random, there is no
chance of caching.
(Unless you write very smart code to join/select the pieces, assuming no
checksum or similar nasty parameters in the URL, too.)

I hope it is now absolutely clear to you.



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Do-we-have-an-algorithm-to-define-the-cachabillity-of-an-object-by-the-request-and-response-tp4665473p4665500.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Do we have an algorithm to define the cachabillity of an object by the request and response?

2014-04-08 Thread babajaga
 Real world example, brand new:
Redirect to a url with no range at all.
It's one of google defaults as far as I can understand.

Sorry, I do not understand. Please be more specific.




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Do-we-have-an-algorithm-to-define-the-cachabillity-of-an-object-by-the-request-and-response-tp4665473p4665502.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Do we have an algorithm to define the cachabillity of an object by the request and response?

2014-04-07 Thread babajaga
Sorry, but what is vbr ?
The issue is that the player is using vbr 

I do not understand your question. First of all, the request usually is not
simply
r8---sn-nhpax-ua8e.googlevideo.com
but also contains additional info, like itag, id and, most importantly,
range=xxx-yyy.

I was always afraid that youtube/google might start to make xxx-yyy more or
less random, as this would definitely kill cacheability, unless somebody
writes some very smart code to join/extract the various parts.





--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Do-we-have-an-algorithm-to-define-the-cachabillity-of-an-object-by-the-request-and-response-tp4665473p4665474.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Do we have an algorithm to define the cachabillity of an object by the request and response?

2014-04-07 Thread babajaga
Thanks for the hint. I was already wondering about the unusually low byte hit
rate in my 2.7 setup.



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Do-we-have-an-algorithm-to-define-the-cachabillity-of-an-object-by-the-request-and-response-tp4665473p4665485.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Duration Access Limits

2014-04-05 Thread babajaga
You can use the simplest MT equipment, like this one:
http://routerboard.com/RB951-2n

For the very beginning, without using scripts to keep off unwelcome guests,
you might simply reduce the wireless TX power to a lower value, just to cover
your own area.

Or, as a better solution, using the MT user manager you might print very
small tickets containing a one-time user/password for login, to be handed out
free of charge to your guests only. Then nobody without a valid user/password
will be able to log into your own private hotspot.




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Duration-Access-Limits-tp4665424p4665447.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Duration Access Limits

2014-04-04 Thread babajaga
I could think of a custom external auth helper, checking the client IP,
maintaining its own DB of connect times, and allowing/disallowing
access through squid.
However, this helper would have to be provided by you.



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Duration-Access-Limits-tp4665424p4665435.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Duration Access Limits

2014-04-04 Thread babajaga
Unless I am getting it wrong ...are you telling me to find (or
propose) a solution to my problem?
I proposed a possible solution using squid; however, it must be implemented
(programmed) by yourself, as AFAIK nothing ready-made is available.

 Must it be external? Such
tend to be slow.
Not necessarily, as the result of the auth helper can be cached within
squid, so the number of accesses to the helper is reduced.

Is there another open source solution I can implement besides
tinkering with the existing squid installation? 
You might have a look at mikrotik.com. Their hotspot system within RouterOS
should be able to do what you want. Their hardware is really cheap, and not
so bad. As there is also a huge forum with scripts, you should find
something suitable on the spot.
BTW: You might use squid as an upstream caching proxy for your MT box, if
you want. Simple to implement.



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Duration-Access-Limits-tp4665424p4665438.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Non Transparent Proxy

2014-04-01 Thread babajaga
1.) Make your squid transparent.



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Cache-facebook-videos-dowloads-or-just-all-files-using-squid-on-qnap-tp4665390p4665413.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: How to authorize SMTP and POP3 on SQUID

2014-03-27 Thread babajaga
Although this is a squid forum, not one for email or firewalls:
Just completely remove the firewall (all ports on all interfaces open!).
If email is then usable, it really is a firewall problem.
Then
make sure your clients are allowed access to your mail server, and the mail
server can access the internet.
So, something like
#Allow access to mail server
iptables -A INPUT -p tcp --destination-port 25 -j ACCEPT
iptables -A INPUT -p tcp --destination-port 110 -j ACCEPT

should be in your firewall.
You might restrict it to a specific interface:
iptables -A INPUT -i eth1 -p tcp --destination-port 25 -j ACCEPT
iptables -A INPUT -i eth0 -p tcp --destination-port 110 -j ACCEPT
(eth1 local interface, eth0 public)



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/How-to-authorize-SMTP-and-POP3-on-SQUID-tp4665342p4665359.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: How to authorize SMTP and POP3 on SQUID

2014-03-26 Thread babajaga
Squid has nothing to do with SMTP or POP or IMAP etc. squid works on
different ports (look at http_port in squid.conf). 
Check your firewall settings to allow port 25/110 for email. Or check
postfix etc.



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/How-to-authorize-SMTP-and-POP3-on-SQUID-tp4665342p4665343.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: need help with ubuntu upgrade procedure

2014-03-24 Thread babajaga
I don't know about installing/upgrading the squid package on Ubuntu; I have
always installed my squid on Ubuntu from source.
As you already have a running version, you only need to back up squid.conf to
another location to be used with the new squid. Do a squid -v to note the
actual configure options, to be used for the new squid as well. And copy
/etc/init.d/squid to a safe location, also to be used for your new squid
later on.

You might then deinstall/delete your running squid, including cached files
(remove the service and delete the package).

Then
./configure #with old config-options
make
make install
After installing the new squid, copy the old squid.conf to /usr/local/squid/etc

Copy saved /etc/init.d/squid back to /etc/init.d/squid 

And re-install the squid service.
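The steps above, gathered into one command sketch (paths and the configure options are assumptions; substitute the output of squid -v and your own locations):

```
squid -v                                   # note the current configure options
cp /usr/local/squid/etc/squid.conf ~/squid.conf.bak
cp /etc/init.d/squid ~/squid.init.bak
# remove the old package/service, then in the new source tree:
./configure <options noted from squid -v>
make
make install
cp ~/squid.conf.bak /usr/local/squid/etc/squid.conf
cp ~/squid.init.bak /etc/init.d/squid
```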







--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/need-help-with-ubuntu-upgrade-procedure-tp4665324p4665326.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: cant run squid proxy servers : fail :(

2014-03-22 Thread babajaga
Insert into squid.conf:

acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
acl manager proto cache_object


In newer squid versions, these ACLs are pre-defined. So it looks like you
used a squid.conf from a newer version with a rather old squid (3.0). This is
not a good idea.





--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/cant-run-squid-proxy-servers-fail-tp4665315p4665316.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Inject some html with transparent squid

2014-03-19 Thread babajaga
Have a look at my posts in this thread:

http://squid-web-proxy-cache.1019090.n4.nabble.com/Question-in-adding-banner-for-ads-by-squid-td4664976.html



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Inject-some-html-with-transparent-squid-tp4665224p4665295.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: how i can replace website source code content !!

2014-03-18 Thread babajaga
To be inserted in squid.conf:
---
acl block dstdomain "block.lst"   # quoted path = read domains from this file
http_access deny block
# Either: create a file BLOCKED in squid's error message directory,
# e.g. /usr/local/squid/share/errors/en
deny_info BLOCKED block
# or (alternative, but it needs an http server):
#deny_info http://my.domain.com/my_block_page.html block

-

Edit file block.lst:
.twitter.com
.facebook.com



However, it will still be possible to use https to access facebook etc., so
you might consider forbidding https completely.






--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/how-i-can-replace-website-source-code-content-tp4665213p4665282.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: how i can replace website source code content !!

2014-03-16 Thread babajaga
Replacing web site source content can be done with content adaptation
techniques, using eCAP etc.
However, for your purpose this seems far too complicated. (BTW: I have
a working solution for this, the purpose of which is to inject ads, to
finance open hotspots.)
However, in case you have some smart algorithm to analyze web site content
on the fly, to check whether it contains content to be blocked (porn,
gambling etc.) or not, then maybe I can help you :-)

Usually, you would have some form of blacklist of sites, so one ACL using
the blacklist and squid.conf's deny_info directive will do it for you
nicely.



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/how-i-can-replace-website-source-code-content-tp4665213p4665220.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Automatic StoreID ?

2014-03-15 Thread babajaga
This is how Rock store does it, essentially: Rock store index does not
store the real location of the object on disk but computes it based on
the hash value.
Sorry, then I misunderstood something when reading some rock code a while
ago.
To me, in essence, it looked like, for caching an object, rock picks
one (or multiple, for large-rock) of the available slots for storage, and
keeps the hash-to-slot mapping in the memory table. So, on restart, squid has
to scan all slots from disk to rebuild the table.
Which means the mapping URL-hash -> slot_# is _not_ fixed (predictable).


 Positive consequence: No rebuild of the in-memory-table necessary, as there
 is none. Avoids the time-consuming rebuild of rock-storage-table from disk.
If you do not build the index,
you have to do a disk I/O to fetch the first slot of the candidate
object on _every_ request. 
Not necessarily a disk I/O, but an I/O. Still, underlying
OS buffering/blocking is happening.
Besides, for a HIT you have to do the I/O anyway.
So, the amount of unnecessary disk I/Os would be (squid MISSes minus
disk blocks already residing in OS buffers).
Which leads to a good compromise: Direct hashing would allow the slow
population of the optional translation table.






--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Automatic-StoreID-tp4665140p4665204.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Automatic StoreID ?

2014-03-14 Thread babajaga
Actually, two commercial vendors - PeerApp and ThunderCache - claim
their products doesn't use urls to identify the objects, thus they
don't have to maintain StoreID-like de-duplication database manually.

Any ideas how do they do it? 

Instead of first mapping the URL to a memory-resident table keeping
pointers (file-id, bucket no.) to the real location of the object on disk, a
hash value derived from the URL could directly be used to designate the
storage location on disk, avoiding the translation table squid uses.
This is the principle of every hashed table in a fast database system.
Drawback is, you have to deal with collisions on the disk and overflows:
hashes for different URLs point to the same storage location on disk. Different
solutions to this problem are available, though (chaining, sequential storage,
secondary storage area etc.). And you have to manage the variable-sized
buckets, i.e. the storage locations the hashing points to.

Positive consequence: No rebuild of the in-memory table necessary, as there
is none. Avoids the time-consuming rebuild of the rock storage table from disk.

I can imagine that, for historical reasons (much simpler to
implement), squid uses the translation table instead of direct hashing,
whereas ThunderCache etc. can rely on some low-level DB system having
direct hashing ready to be used.
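A toy sketch of the scheme described above, assuming linear probing for collisions (all names are illustrative; this is not squid, PeerApp or ThunderCache code):

```python
# Direct hashing: the URL's hash alone designates the disk slot; no
# in-memory index table exists. Collisions are resolved by linear probing.
import hashlib

NUM_SLOTS = 8  # tiny on purpose, so collisions actually happen

def slot_for(url: str, probe: int = 0) -> int:
    h = int(hashlib.md5(url.encode()).hexdigest(), 16)
    return (h + probe) % NUM_SLOTS

class DirectHashedStore:
    def __init__(self):
        # stands in for fixed-size buckets on disk
        self.slots = [None] * NUM_SLOTS

    def put(self, url, obj):
        for probe in range(NUM_SLOTS):  # linear probing on collision
            i = slot_for(url, probe)
            if self.slots[i] is None or self.slots[i][0] == url:
                self.slots[i] = (url, obj)
                return i
        raise RuntimeError("store full (overflow handling omitted)")

    def get(self, url):
        for probe in range(NUM_SLOTS):
            i = slot_for(url, probe)
            entry = self.slots[i]
            if entry is None:
                return None          # empty slot ends the probe chain: MISS
            if entry[0] == url:
                return entry[1]      # URL stored with the object confirms identity
        return None

store = DirectHashedStore()
store.put("http://example.com/a", b"payload-a")
store.put("http://example.com/b", b"payload-b")
print(store.get("http://example.com/a"))  # b'payload-a'
```

In a real store each slot would be a fixed-size disk bucket read with one I/O; an empty slot ends the probe chain, which is the "disk I/O on every MISS" cost discussed above.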

 




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Automatic-StoreID-tp4665140p4665198.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: SquidGuard redirect to parent proxy (Off-Topic)

2014-03-13 Thread babajaga
You need to make sure that something like this is in your squid.conf:

acl local-server dstdomain .mydomain.com
acl blockeddomains dstdomain blockeddomains.lst  # file contains list of blocked domains
http_access deny blockeddomains
deny_info http://mydomain.com/blocked.html blockeddomains  # mydomain.com is hosted on localhost/same machine as squid
.

always_direct allow local-server  # To access mydomain.com NOT via parent proxy
never_direct allow all


MfG :-)



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/SquidGuard-redirect-to-parent-proxy-Off-Topic-tp4665178p4665187.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Problem with squid tcp_outgoing_address

2014-03-10 Thread babajaga
As I have a similar problem, I am just using this thread:
How to use tcp_outgoing_address for load balancing (round robin)?

My idea was to write an ACL helper doing the round-robin, which would be
very easy; but how to detect a failed WAN connection within the ACL helper?


(One local interface, 3 WAN interfaces to different ISPs, for redundancy and
balanced load sharing)
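A minimal sketch of such a round-robin helper (the tag names wan1..wan3, the tag= reply, and the squid.conf wiring are assumptions; it only does the easy part, and failed-link detection, the open question above, is not addressed):

```python
# Hypothetical round-robin external ACL helper: one reply line per request
# line, cycling a tag that squid.conf could match with a 'tag' type ACL.
import io
import itertools

TAGS = ["wan1", "wan2", "wan3"]  # one tag per WAN link (assumed names)

def run(stream_in, stream_out):
    rr = itertools.cycle(TAGS)
    for line in stream_in:
        if not line.strip():
            continue
        stream_out.write("OK tag=%s\n" % next(rr))  # external ACL helper reply
        stream_out.flush()                          # squid expects unbuffered replies

# Demo with in-memory streams; a real helper would use sys.stdin / sys.stdout:
out = io.StringIO()
run(io.StringIO("10.0.0.5 GET\n10.0.0.6 GET\n10.0.0.7 GET\n10.0.0.8 GET\n"), out)
print(out.getvalue(), end="")  # OK tag=wan1 / wan2 / wan3 / wan1, one per line
```

The idea would be to declare it via external_acl_type, then give tcp_outgoing_address one line per WAN address guarded by a tag ACL; verify that ACL wiring against your squid version before relying on it.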





--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Problem-with-squid-tcp-outgoing-address-tp4657445p4665113.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: HTTP/1.1 pipelining

2014-03-10 Thread babajaga
But mgr:client_list shows a different type of info, as far as I can see. It
shows current client connections, whereas mgr:pconn shows past connection
statistics (effectiveness).
My squid 2.7:

/usr/local/squid/etc#  ../sbin/squid27 -v
Squid Cache: Version 2.7.STABLE9-20110824

/usr/local/squid/etc# squidclient -p  -h 127.0.0.1 -U manager -W ? mgr:pconn
HTTP/1.1 200 OK
Date: Mon, 10 Mar 2014 08:06:33 GMT
Content-Type: text/plain
Expires: Mon, 10 Mar 2014 08:06:33 GMT
Connection: close

Client-side persistent connection counts:

req/
conn  count
----  -----
   0  15160
   1   9681
   2   2801
   3   1547
.
 203  2
 208  1
 216  1
 220  1
 231  1
 250  1

Server-side persistent connection counts:

req/
conn  count
----  -----
   1  95899
   2   2066
   3   1049

  83  1
  99  1
 104  1
 110  1
 112  1
 132  1


So the first part of the 2.7 pconn statistics is dropped in later
squid versions, although it is quite interesting, I think. Maybe I should
file a bug or feature request?




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/HTTP-1-1-pipelining-tp4658574p4665114.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: squid with muliwan

2014-03-10 Thread babajaga
Is it for load balancing or FailOver?
Load balancing, but taking failed connections into account, if possible. One
LINUX PC with 4 interfaces:

             |--- ISP-1
LAN --squid--|--- ISP-2
             |--- ISP-3



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/squid-with-muliwan-tp4662760p4665115.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: HTTP/1.1 pipelining

2014-03-09 Thread babajaga
Thanks for the clarification. Then about this one, please:

Trying squid 3.4.3, I get


squidclient -p nnn -U ? -W ??? mgr:pconn
HTTP/1.1 200 OK
Mime-Version: 1.0
Date: Fri, 07 Mar 2014 15:15:01 GMT
Content-Type: text/plain
Expires: Fri, 07 Mar 2014 15:15:01 GMT
Last-Modified: Fri, 07 Mar 2014 15:15:01 GMT
Connection: close


 Pool 0 Stats
server-side persistent connection counts:

req/
conn  count
----  -----
   1 43
   2  5
   4  3
   5  1
   6  2
   7  5
   8  2
   9  2
  10  1
  11  2
  12  1
  13  2
  14  2
  15  1
  17  1
  19  1
  25  1
  26  1
  27  1
  34  1
  36  1
  41  1
  60  1
  70  1
 110  1

 Pool 0 Hash Table
 item 0: 127.0.0.1:8887
 item 1: 127.0.0.1:8887
-

Does that mean absolutely no persistent connections/pipelining to the client
(FF, pipelining enabled; Chrome)?
From squid.conf:
pipeline_prefetch 3
client_persistent_connections on
http_port nnn tcpkeepalive=3,3,125


BUT:
With almost the same squid.conf (besides pipeline_prefetch=on) for my squid 2.7,
squidclient -p nnn -U ? -W ??? mgr:pconn
always shows me quite a few client-side persistent connections with request
counts up to about 50.

So either I am missing something in squid.conf, upgraded from 2.7 -> 3.4.3,
or it is a bug in squidclient, or a bug/change in behaviour between 2.7 and
3.4.3? (Note: Using 3.4.3, I can always see "Connection: keep-alive" in the
response header.)






--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/HTTP-1-1-pipelining-tp4658574p4665110.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: HTTP/1.1 pipelining

2014-03-07 Thread babajaga
 They still have to be read and processed in
order.
Squid reads requests out of the client connection one at a time and
processes them.  

Could this be clarified a bit more?
I mean, when squid has started to process the first request from the pipeline
(request forwarded to destination), will squid also start to process the
next request from the pipeline in parallel, or wait until the previous one
has completed?

Are there major differences in pipelining between squid 2.7 and the newest
versions?
Actually, I am located in a remote area; ping from my client to squid is
about 300-350 ms. Theoretically, pipelining should be of benefit here, as I
also suspect my wireless ISP limits the number of parallel connections.





--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/HTTP-1-1-pipelining-tp4658574p4665093.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: HTTP/1.1 pipelining

2014-03-07 Thread babajaga
Alex,

then the following in
http://www.squid-cache.org/Doc/config/pipeline_prefetch/
is misleading:

If set to N, Squid
will try to receive and process up to 1+N requests on the same
connection concurrently.

Note the "concurrently".
For older versions of squid, it is stated differently.



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/HTTP-1-1-pipelining-tp4658574p4665100.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: DANS guardian setup with squid

2014-02-23 Thread babajaga
Besides the drawback of DG (double processing of http), I like the advantage
of it being completely independent from squid, apart from the config as an
upstream/downstream proxy to squid (parent).
So it is very easy to use together with squid. In case of throughput
problems, it can simply be put onto another machine.


--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/DANS-guardian-setup-with-squid-tp4664989p4664994.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Question in adding banner for ads by squid ?!!

2014-02-23 Thread babajaga
 https://answers.launchpad.net/ecap/+faq/1793

very well describes a few of the obstacles, although they are solvable. I.e.
a good solution should not rely on MIME types, as the article correctly
states, but do an analysis of the data stream itself, to identify the HTML to
be modified.

Regarding legal issues: as I have a good working solution, at least in my
country (not the US) this issue is difficult to decide, according to the
expertise of a lawyer specializing in internet and copyright law, as there is
no court decision up to now. Of course, it would be a bad idea to inject an
ad for a law consulting company into another lawyer's web site :-)




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Question-in-adding-banner-for-ads-by-squid-tp4664976p4664995.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: difficulty with Squid Ovidsp.ovid.com

2014-02-23 Thread babajaga

The following error was encountered while trying to retrieve the URL:
http://ovidsp.ovid.com/autologin.html
Unable to determine IP address from host name ovidsp.ovid.com
The DNS server returned:
Timeout 

Looks like a DNS problem. I can access the URL from Thailand via my squid.
So on your side, squid cannot resolve ovidsp.ovid.com.
You might first try to
ping ovidsp.ovid.com
from the machine squid is installed on. If that does not work, squid has no
chance anyway, because the DNS server used has a problem.
If it works, then there is a problem in the communication between squid and
the DNS server. Posting squid.conf might give more insight.
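As a quick check sequence on the squid box (dig may need a dnsutils-type package; host names from the error page above):

```
ping -c 1 ovidsp.ovid.com    # does the OS resolver work at all?
dig ovidsp.ovid.com          # query the configured DNS server directly
```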



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/difficulty-with-Squid-Ovidsp-ovid-com-tp4664996p4664997.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Question in adding banner for ads by squid ?!!

2014-02-22 Thread babajaga
That is possible, although not with squid. I have a working solution for this
one, in production at a free hotspot at an airport, for example.
In case of interest, contact me. But this SW is NOT Open Source.



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Question-in-adding-banner-for-ads-by-squid-tp4664976p4664983.html
Sent from the Squid - Users mailing list archive at Nabble.com.


  1   2   3   >