I suggest you first start with a simple solution, before turning to Andrew's.
Have a look at the contents of
(squid2.7-sources)/helpers/external_acl/session
There you find squid_session.c
with some description.
For other versions of squid sources, you will find something similar, too.
--
With squid, you can use the session_helper to create a simple captive
portal with splash page:
http://wiki.squid-cache.org/ConfigExamples/Portal/Splash
It is not difficult to customize the external helper, as it is simple C.
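For reference, a minimal squid.conf wiring for the session helper, roughly following the wiki splash-page example (the helper path, timeouts and splash URL below are assumptions):

```
external_acl_type session ttl=300 negative_ttl=0 children=1 %SRC /usr/local/squid/libexec/squid_session -t 7200
acl session external session
http_access deny !session
deny_info http://your.server/splash.html session
```

The helper decides per client IP (%SRC) whether a session is active; denied clients are sent the splash page via deny_info.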
Another, more complicated solution, containing much more functionality
I suspect you might have a statement like never_direct / always_direct in the squid.conf of the first squid, with some ACL which does not match any more.
To get a clear picture, please publish both actual squid.conf files, anonymized.
--
View this message in context:
Yes.
You might also try, in the inner squid.conf:
cache_peer 127.0.0.1 parent 8092 0 no-digest no-query no-netdb-exchange
assuming you only have one upstream proxy.
The outer squid.conf should have NO intercept/transparent in http_port.
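Put together, a minimal sandwich sketch might look like this (the outer instance's port 8092 and the loopback addresses are assumptions):

```
# inner squid.conf: forward everything to the outer instance
cache_peer 127.0.0.1 parent 8092 0 no-digest no-query no-netdb-exchange
never_direct allow all

# outer squid.conf: plain http_port, no intercept/transparent
http_port 127.0.0.1:8092
```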
--
I remember a bug I detected in my favourite squid2.7, also in a sandwiched config with another proxy in between:
It was not possible to have both squids listen on 127.0.0.1:a and 127.0.0.1:b; I had to use 127.0.0.1:a and 127.0.0.2:b.
To be pragmatic: what's the purpose of having two squids directly coupled ?
Why not
As long as you do not use a parent proxy, there is no need for the pinger. And even in the case of a parent, the pinger is only nice to have.
--
View this message in context:
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-not-listening-on-any-port-tp4667004p4667398.html
Sent from the Squid - Users mailing list archive at Nabble.com.
@Leonardo: Thanx a lot. Your logs are much better than mine, although I am closer to the site.
So I have to look somewhere else, like slow DNS resolution (I also use Google's 8.8.8.8), or slow connection establishment, as I have now also seen very long response times during initial page loads when trying
This is a bit strange:
2014/08/25 09:19:42| pinger: Initialising ICMP pinger ...
2014/08/25 09:19:42| pinger: ICMP socket opened.
2014/08/25 09:19:42| Pinger exiting.
2014/08/25 09:21:04| Current Directory is /root
1) Pinger exiting. You might try to disable the pinger in squid.conf:
pinger_enable off
I would first eliminate the following warnings:
2014/08/25 09:21:04| Warning: empty ACL: acl blockfiles urlpath_regex -i
/etc/squid/local/bad/blockfiles
2014/08/25 09:21:04| WARNING: log name now starts with a module name. Use
'stdio:/var/log/squid/access.log'
2014/08/25 09:21:04| WARNING: log
Have a look at cache_dir in squid.conf. There are the options min-size and max-size.
So you can specify ranges for the size of objects cached in different cache_dirs.
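For example, a sketch splitting objects by size across two cache_dirs (the paths, sizes and the 1 MB boundary are assumptions):

```
# objects up to 1 MB go to the first dir, everything larger to the second
cache_dir aufs /cache/small 1000 16 256 max-size=1048576
cache_dir aufs /cache/large 20000 16 256 min-size=1048577
```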
--
In the past, with older squids, there was a bug regarding a conflict between the general parameter
maximum_object_size
and the cache_dir max-size option, regarding their sequence (or maybe their values ?).
I can't remember exactly; I think maximum_object_size has to come before cache_dir in squid.conf, imposing the highest
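As a defensive sketch, set the global limit before any cache_dir line (the values are assumptions):

```
# global limit first, so following cache_dir lines inherit it as their default max-size
maximum_object_size 512 MB
cache_dir aufs /cache 20000 16 256
```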
I am just trying to use the official package for openWRT, which is based on squid2.7 only.
Having detected some DNS issues: does anybody use squid on openWRT, and which squid version ?
--
Sounds good. I also do not like C++ :-)
squid2.7 from openWRT is running on my Open-Mesh; besides the DNS issues I have not found any problem. Only a bit slow.
The DNS issues are related to advert sites only, which is a bit strange. Looks like some tricks regarding TTL/DNS-based load sharing, I
Interesting. Have you seen any DNS issues ?
For details, pls ref. here:
http://squid-web-proxy-cache.1019090.n4.nabble.com/Very-slow-site-via-squid-td4667243.html
Or, can you reproduce it here:
www.spiegel.de
--
@James:
For details of my problems, pls ref. here:
http://squid-web-proxy-cache.1019090.n4.nabble.com/Very-slow-site-via-squid-td4667243.html
I'm not sure that it is really squid. The effect is slow loading of objects from ad servers.
As I have an open-mesh AP with 64MB RAM, my squid2.7 does memory-only
The latest squid-3.x stable releases may be able to help with this.
Actually, I am trying to use the standard package of squid for openWRT,
which is squid2.7.
So I would need to build my own one.
Also, in my experience, the worst slow domains like this are usually advertising hosts. So blocking
I have a squid 2.7 setup on openWRT, running on a 400Mhz/64MB embedded
system.
First of all, a bit slow (which is another issue), but one site is
especially slow, when accessed via squid:
1408356096.498 25061 10.255.228.5 TCP_MISS/200 379 GET
http://dc73.s290.meetrics.net/bb-mx/submit? -
Real, but obsolete example (squid2.7):
#!/usr/bin/perl
$|=1;
while (<>) {
    chomp;
    @X = split;
    if ($X[0] =~ /(youtube|google).*videoplayback\?/){
        @itag = m/[?](itag=[0-9]*)/;
        @id = m/[?](id=[^\s]*)/;
        @range = m/[?](range=[^\s]*)/;
        @begin =
Hi, looks interesting.
Which of the 3 variants (.pl, .py2/3) do you think is the fastest one ?
I am willing to trade RAM usage for speed on my embedded system.
--
Please correct your log format specification:
http://etplc.org/squid.html
The actual version results in
FATAL: Can't parse configuration token: '%h %{Referer}h %{Cookie}h'
--
it probably wouldn't work anyway, unless youtube really did use a
consistent url domain name for their content delivery network..
Not correct. It is possible to cache youtube's content using StoreID.
Additionally, handling the resolution is much more trivial, as the requested youtube-URL contains
how to actually access the software itself.
Please be more specific. What do you want to know or achieve ?
(Usually, the config files are found either in /etc OR in /usr/local/squid/etc).
Search for squid.conf. That's the entry point for the features used.
Depending on whether squid has been
What do you want to achieve ?
You might also refer to my responses here:
http://squid-web-proxy-cache.1019090.n4.nabble.com/Re-split-the-connexion-using-Squid-td4666739.html#a4666742
--
there is just one network in both the client and server
side.
On the client side,
I just added the OUTPUT DNAT iptables rule to make it match the 3128 IP
and port of the remote server.
Sorry, I am a bit confused.
Pls, read carefully:
#Example for squid and NAT on same machine:
Regarding first issue:
Have a look here for a correct solution:
http://wiki.squid-cache.org/ConfigExamples/Intercept/AtSource
#Example for squid and NAT on same machine:
iptables -t nat -A OUTPUT -p tcp --dport 80 -j DNAT --to-destination
SQUIDIP:3128
#Replace SQUIDIP with the public IP which
Besides SMP, there is still the old fashioned option of multiple instances
of squid, in a sandwich config.
http://wiki.squid-cache.org/MultipleInstances
Besides the described port rotation, you can set up 3 squids, for example:
one frontend, just doing ACLs and request dispatching (CARP), and 2
It is not true that IOS and others do not support authentication.
They do.
I think, this is not the point. As the starter of the thread wrote:
...makes it possible to proxy a lot of MOBILE APPS on ios devices and
android which don't support traditional proxy authentication.
Many APPs are not
Pls, publish your complete non-working squid.conf
OR
at least the part invoking your
/etc/squid3/adservers
--
i get a new proxy address (eg,3121212.proxy.com) and a port number(in the
range of 3). it's not the listening port.
It is not their listening port ? I doubt it; how else could you use it ?
I can think of some type of DNS rotation they use. When their proxy.com at any time slot points to
In case the port knocking supervisor keeps track of the knocking IP, the real proxy port is finally opened ONLY for this knocking IP.
So, unless you perform the port knocking correctly, you will not be granted access to the real proxy port.
Practically secure, in case
- check for
Having a very similar config to yours up and running (squid+chilli), you had better ask in a chilli forum OR in the chilli group on LinkedIn. (I am there, too :-)
Because there are several issues to be considered with such a setup:
- Proper config of iptables, as chilli also modifies them. And for
Have a look here for a correct solution:
http://wiki.squid-cache.org/ConfigExamples/Intercept/AtSource
(Example: Replace SQUIDIP with the public IP which squid may use for its
listening port and outbound connections. )
iptables -t nat -A OUTPUT -p tcp --dport 80 -j DNAT --to-destination
Rate limiting using iptables
http://thelowedown.wordpress.com/2008/07/03/iptables-how-to-use-the-limits-module/
seems to be the simplest solution for an upper limit of requests per time unit.
Practically, you want the same as an administrator who wants to protect his web server against a DoS attack by means
As a very first step, you might look into squid's delay_pools, to distribute and limit download speed, at least. This works only for proxied traffic, of course, so torrents etc. are not throttled.
But it is easy to implement.
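A minimal class-1 (aggregate) pool as a sketch; the ~125 kbit/s value (about 15625 bytes/s) is just an example:

```
delay_pools 1
delay_class 1 1
delay_access 1 allow all
# aggregate bucket: fill rate/bucket size in bytes per second (~125 kbit/s)
delay_parameters 1 15625/15625
```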
--
Not percentage-wise, only in absolute values.
I had problems myself understanding even the documentation about delay_pools; look into the documented squid.conf. So somebody else should answer your detailed questions, if any.
However, I use it to put an upper limit of 125 kbit/s download speed on
So the behaviour you are seeing looks more
like a bug in always_direct processing.
Which might be specific to the squid version OR the squid.conf in use.
I have several squids of different versions with cache_peer in production.
The config needs to be different:
2.7:
hierarchy_stoplist cgi-bin ?
Hassan is definitely correct.
So maybe you should just use a working config before trying alternatives:
#ALL your ACL's first in squid.conf !
.
cache_peer xx.xx.xx.xx parent 6139 0 no-query no-digest no-netdb-exchange
never_direct allow all
If this does not work, pls post your
OK, then we will have a look at the ACL-decisions (often a problem) and the
peer selection within squid, using
debug_options ALL,5 33,2 28,9 44,3
in squid.conf
This will produce a detailed log about ACL processing, and peer selection,
which is the most interesting.
It will cause a lot of
, OR
there is a problem with your routing.
BTW: babajaga is a Russian witch. Sort of.
--
It depends.
Facebook now uses https, which cannot be cached. Valid for other sites using https, too.
Or, in other words, only http can be cached.
and game sites where chat is available.
So facebook no (because of https); game sites maybe, in case http is used for chat.
Are facebook
change
http_port 3129 transparent
to
http_port 3129 intercept
You did not get an error msg in cache.log ?
If this does not help, pls publish
a) your browser proxy setup
b) your firewall rules
--
Then let's try to get rid of the error messages in squid.log.
This is my standard command for a parent proxy all requests are forwarded to:
cache_peer xxx.xxx.xxx.xx parent 3128 0 no-query no-digest no-netdb-exchange
This should get rid of the errors regarding pinger. Correct ? Still crashing ?
Looks like your problem is caused by the failing pinger. Which means there is an --enable-icmp in your config options when building squid. So another possibility would be to remove this config option. AFAIK, in your situation the pinger would only be an advantage (or even necessary) in case of
Did you try without the Antivirus ? I am not so into the squid code, but I would suspect a problem in the interface to Trend first, as squid is crashing already during/immediately after startup.
BTW: What should happen here ?
maximum_object_size 1 KB
maximum_object_size 50 MB
Probably, you can
Any reason not to build squid from newest sources ?
Will probably increase your chances of getting better support, as 2.1 is not
much newer than 2.7 :-)
(Still using latest 2.7, with private mods, myself. Solid as a rock.)
--
StoreID should help you; may be together with a special helper. There are a
few examples in the wiki.
--
In case I understand correctly, the 82MB file is flushed from the cache after the first stalled/partial download from the cache ?
If yes, it would be a good idea to post the http-headers of the cached file.
You might also add
ignore-reload ignore-private negative-ttl=0
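Those are refresh_pattern options in squid 2.7. A hedged sketch (the URL pattern and the times are assumptions, and forcing caching against Cache-Control should be used with care):

```
refresh_pattern -i example\.com/download 10080 90% 43200 ignore-reload ignore-private negative-ttl=0
```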
What is your
maximum_object_size_in_memory
What is your
maximum_object_size_in_memory
The default value: 512 kB
Increase it above 82 MB. Usually squid 2.7 keeps only in-transit objects in memory, then in cache_mem, until they are swapped out later on. So this might inhibit the swap-out to disk, because the object was not cached in memory before.
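A sketch of the relevant lines (values assumed, just above the 82 MB object):

```
maximum_object_size_in_memory 90 MB
maximum_object_size 128 MB
cache_mem 128 MB
```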
Kind regards
Thanx, that was the problem: libdb-dev was not installed.
Which leads me to a suggestion:
As it is general policy to include most features of squid when doing a plain ./configure, which also includes _all_ external auth helpers, configure should also check that _all_ dependencies are satisfied.
Trying to build external helper ext_session_acl.cc in
squid-3.4.5-20140603-r13143.
Even with defaults a lot of helpers are built after ./configure, but this
one not.
(I do not want ext_sql_session_acl, which is successfully built.)
--
I was wondering about very few HITs in this squid installation, and did some
checking:
access.log:
1401203150.334 1604 10.1.10.121 TCP_MISS/200 718707 GET
http://l5.yimg.com/av/moneyball/ads/0-1399331780-5313.jpg -
ORIGINAL_DST/66.196.65.174 image/jpeg
1401203186.100 1327 10.1.10.121
Thanx, you are the man !
Problem was here in squid.conf:
maximum_object_size_in_memory
The default is 512 kB, which is too small.
--
Not yet, but as I have heard about this stuff, at least for Apple, it looks like I will be forced to have a look at it soon.
--
No need to guess when you can test :-)
I did; but you are never absolutely sure that you covered all test cases :-)
OK; at least my guess is confirmed.
Any other possible solution to satisfy my reasonable idea ?
--
Sorry, what is
Google Hangouts video ?
Maybe you can provide a URL as an example to try/test ?
--
When I have squid installed on a system with a wireless upstream link, how can I throttle downloads to clients on TCP_MISS only, so as not to saturate the upstream link ?
Actually I am using delay pools for the clients, but these also unnecessarily throttle TCP_HITs, I guess.
--
You should start here:
http://wiki.squid-cache.org/ConfigExamples/DynamicContent
--
youtube is another story...
Yep. What's written in the wiki seems to be obsolete (once again): AFAICS, the id for a video is now no longer unique, which means it can no longer be used as part of the Store-ID :-(
Obviously, the guys from youtube are reading here as well. And doing everything
It is unique
That is in the past. I checked my favourite video once more in detail today, as I was wondering about the dropping hit rate.
Might be country dependent, though.
Pls, verify:
Video from Dire straits: http://www.youtube.com/watch?v=8Pa9x9fZBtY
From my logs:
1398957788.924 40 88.78.165.175
No. But what does it have to do with the varying id for the same video, which makes the documented Store-ID algorithm obsolete ?
yt also used this varying id some time ago already, for quite a while.
(BTW: About a year ago yt also used real range requests. They changed it after a few months to their range spec in the URL.
You can only get serious help if you specify the tags from the URL which uniquely identify the video.
Maybe it is token= AND range=. But maybe it is also something else ?
--
Is there a chance to do the following with squid:
client - https://example.com - squid_A - http://example.com - Dansguardian - squid_B - https://example
As dansguardian works on http, squid_A should do the conversion of https -> http (ssl-bump, 50%), forward the traffic to DG, which then forwards
Have a look here for url_rewrite:
http://wiki.squid-cache.org/ConfigExamples/DynamicContent/Coordinator
http://wiki.squid-cache.org/ConfigExamples/DynamicContent/YouTube/Discussion
--
Problem is here:
HIER_DIRECT/127.0.0.1 ...
Strangely enough, squid forwards the request to 127.0.0.1
I am not sure whether you need 2 ports to be specified:
http_port 3129
http_port 3128 intercept
In your setup, you need special firewall rules to avoid a loop:
DG forwards to port 80,
Should I change the
cache allow mywindowsupdates
always_direct allow all
... to
cache allow mywindowsupdates
cache deny all
To ONLY cache the windows updates,
cache allow mywindowsupdates
cache deny all
would be correct.
#
#always_direct allow all #This is NOT related to caching.
The server won't deliver the file unless the tokens are in place.
Whenever a file is fetched, it appears to be the same irrespective of
the tokens. I will carry out more research based on checksums of
multiple files to make sure.
I very much doubt they are the same ... . Because this would not make
Search for the comments to
url_rewrite_program
in squid.conf.
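As an illustration, a url_rewrite_program helper in the simple squid 2.7-style protocol reads one request line on stdin and answers with the replacement URL, or an empty line for "no change". This sketch is hypothetical; the domain names are made-up examples and the protocol differs in newer squids (3.4+ uses "OK rewrite-url=..."):

```python
#!/usr/bin/env python3
# Hypothetical url_rewrite_program helper (squid 2.7-style protocol):
# rewrites http://moon.earth.com/... to http://www.earth.com/moon/...
import sys

def rewrite(url):
    # Return the replacement URL, or "" to leave the request unchanged.
    prefix = "http://moon.earth.com/"
    if url.startswith(prefix):
        return "http://www.earth.com/moon/" + url[len(prefix):]
    return ""

def main():
    for line in sys.stdin:
        fields = line.split()
        if not fields:
            continue
        url = fields[0]              # first token of each request line is the URL
        sys.stdout.write(rewrite(url) + "\n")
        sys.stdout.flush()           # helpers must answer unbuffered

if __name__ == "__main__":
    main()
```

In squid.conf this would be wired up via url_rewrite_program /usr/local/bin/rewrite.py (path assumed).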
--
Some type of loop, I suspect, as you probably have parent squids configured.
In case you have, please also post the parents' squid.conf.
It (almost) always makes sense to post the squid.conf here. Just guessing around does not help a lot.
--
Hi, you are a bit late to detect this issue :-)
youtube changed this already some months ago. Actually I cannot do further research right now, but also look here:
the only way is to force a fetch of the full object
I do not see how this will solve the random (?) range issue without a lot of new, clever coding.
Actually, I cannot seriously test for random ranges, but will definitely do so.
(NOTE: With range I refer to an explicit range=xxx-yyy somewhere
Please post your squid.conf, without comments.
And which URL exactly results in the forward loop ?
--
If the range is done properly with Range: header then the future random
ranges can be served as HIT on the cached object.
Yes.
But that is NOT the actual state with youtube; only history, unfortunately.
Problem remains if anything in the URL changes and/or the range detail
is sent in the URL
stripping the range header
How often should I say it: there is no Range header any more ! There was one, a year ago, maybe.
Now the range is within the URL !
Real world example, brand new:
1396987801.026 1766 127.0.0.1 TCP_MISS/200 930166 GET
Real world example, brand new:
Redirect to a url with no range at all.
It's one of Google's defaults as far as I can understand.
Sorry, I do not understand. Please be more specific.
--
Sorry, but what is vbr ?
The issue is that the player is using vbr
I do not understand your question. First of all, the request usually is not simply
r8---sn-nhpax-ua8e.googlevideo.com
but also contains additional info, like itag, id and, most importantly, range=xxx-yyy.
I was always afraid that
Thanx for the hint. I was already wondering because of the unusually low byte hit rate in my 2.7 setup.
--
You can use the simplest MT equipment, like this one:
http://routerboard.com/RB951-2n
For the very beginning, without using a script to keep off unwelcome guests, you might simply reduce the wireless TX power to a lower value, just to cover your own area.
Or, as a better solution, use MT user
I could imagine a custom external auth helper, checking the IP, maintaining its own DB of connect times, and allowing/disallowing access to squid.
However, this helper has to be provided by you.
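Such a helper could look roughly like this sketch. Everything here is an assumption, not an existing Squid helper: the one-hour budget, the in-memory "DB" (a real helper would persist state and reset it daily), and the external_acl_type wiring:

```python
#!/usr/bin/env python3
# Hypothetical external ACL helper: grant each client IP a limited
# access-time budget, tracked in a simple in-memory dict.
import sys
import time

BUDGET = 3600           # allowed seconds per client IP (assumed value)
first_seen = {}         # ip -> timestamp of the first request seen

def decide(ip, now=None):
    # "OK" while the IP is inside its budget, "ERR" afterwards.
    now = time.time() if now is None else now
    start = first_seen.setdefault(ip, now)
    return "OK" if now - start < BUDGET else "ERR"

def main():
    # Squid sends one lookup per line, e.g. configured as:
    #   external_acl_type timed ttl=60 %SRC /usr/local/bin/time_budget.py
    for line in sys.stdin:
        fields = line.split()
        if not fields:
            continue
        sys.stdout.write(decide(fields[0]) + "\n")
        sys.stdout.flush()  # helpers must answer unbuffered

if __name__ == "__main__":
    main()
```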
--
Unless I am getting it wrong ...are you telling me to find (or
propose) a solution to my problem?
I proposed a possible solution using squid; however, it must be implemented (programmed) by yourself, as it is not available AFAIK.
Must it be external? Such
tend to be slow.
Not necessarily, as the result
1.) Make your squid transparent.
--
Although this is a squid forum, and not one for email or firewalls:
Just completely remove the firewall (all ports on all interfaces open !).
In case email is then usable, it really is a firewall problem.
Then
Make sure your clients are allowed access to your mail server, and the mail server can
Squid has nothing to do with SMTP or POP or IMAP etc. squid works on
different ports (look at http_port in squid.conf).
Check your firewall settings to allow port 25/110 for email. Or check
postfix etc.
--
Dunno about install/upgrade of the squid package on Ubuntu, but I have always installed my squid on Ubuntu from source.
As you have a running version already, you only need to back up squid.conf to another location, to be used with the new squid. Do a squid -v to note the actual configure options, to be used for the new squid
Insert into squid.conf:
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
acl manager proto cache_object
In newer squid versions, these ACLs are pre-defined. So it looks like you used a squid.conf from a new version with a rather old squid (3.0). This is not a good idea.
Have a look at my posts in this thread:
http://squid-web-proxy-cache.1019090.n4.nabble.com/Question-in-adding-banner-for-ads-by-squid-td4664976.html
--
To be inserted in squid.conf:
---
acl block dstdomain block.lst
http_access deny block
#Either
deny_info BLOCKED block # Create file BLOCKED in squid error message
directory, i.e. in
#/usr/local/squid/share/errors/en
#or
#deny_info http://my.domain.com/my_block_page.html block #alternative,
Replacing website src content can be done with content adaptation techniques, using eCAP etc.
However, for your purpose this seems to be far too complicated. (BTW: I have a working solution for this, the purpose of which is to inject ads, to finance open hotspots.)
However, in case you have some
This is how Rock store does it, essentially: Rock store index does not
store the real location of the object on disk but computes it based on
the hash value.
Sorry, then I misunderstood something when reading some rock code a while ago.
For me, in essence, it looked like, for caching an
Actually, two commercial vendors - PeerApp and ThunderCache - claim their products don't use urls to identify the objects, thus they don't have to maintain a StoreID-like de-duplication database manually.
Any ideas how they do it?
Instead of first mapping the URL to a memory-resident table,
You need to make sure, that something like this is in your squid.conf:
acl local-server dstdomain .mydomain.com
acl blockeddomains dstdomain blockeddomains.lst #file contains list of
blocked domains
http_access deny blockeddomains
deny_info http://mydomain.com/blocked.html blockeddomains
As I have a similar problem, I am just using this thread:
How to use tcp_outgoing_address for load balancing (round robin) ?
My idea was to write an ACL helper doing the round-robin, which would be very easy; but how to detect a failed WAN connection within the ACL helper ?
(One local interface, 3
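The round-robin part of such an ACL helper is indeed trivial; failover detection is the hard bit. A sketch of just the rotation, with everything below (tag names, wiring, addresses) being assumptions:

```python
#!/usr/bin/env python3
# Hypothetical round-robin helper: answers every external ACL lookup with
# "OK tag=wanN", cycling through the configured WAN links.
import sys
from itertools import cycle

WAN_TAGS = ["wan1", "wan2", "wan3"]   # one tag per WAN interface (assumed)
_rr = cycle(WAN_TAGS)

def next_tag():
    return "OK tag=" + next(_rr)

def main():
    for _ in sys.stdin:               # one lookup per input line
        sys.stdout.write(next_tag() + "\n")
        sys.stdout.flush()            # helpers must answer unbuffered

if __name__ == "__main__":
    main()
```

On the squid.conf side one might then match the returned tag and bind the outgoing address, e.g. acl wan1 tag wan1 plus tcp_outgoing_address 203.0.113.1 wan1 (sketch; external ACL result caching must be kept short for a real rotation, and failed-link detection would still need extra probing).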
But mgr:client_list shows a different type of info, as far as I can see. It shows current client connections, whereas mgr:pconn shows past connection statistics (effectiveness).
My squid2.7:
/usr/local/squid/etc# ../sbin/squid27 -v
Squid Cache: Version 2.7.STABLE9-20110824
/usr/local/squid/etc#
Is it for load balancing or FailOver?
Load balancing, but taking failed connections into account, if possible. One LINUX-PC with 4 interfaces:
            |--- ISP-1
LAN --squid-|--- ISP-2
            |--- ISP-3
--
Thanx for clarification. Then to this one, pls:
Trying squid 3.4.3, I get
squidclient -p nnn -U ? -W ??? mgr:pconn
HTTP/1.1 200 OK
Mime-Version: 1.0
Date: Fri, 07 Mar 2014 15:15:01 GMT
Content-Type: text/plain
Expires: Fri, 07 Mar 2014 15:15:01 GMT
Last-Modified: Fri, 07 Mar 2014
They still have to be read and processed in
order.
Squid reads requests out of the client connection one at a time and
processes them.
Could this be clarified a bit more ?
I mean, when squid has started to process the first request from the pipeline (request forwarded to the destination), will squid also
Alex,
then the following in
http://www.squid-cache.org/Doc/config/pipeline_prefetch/
is misleading:
If set to N, Squid
will try to receive and process up to 1+N requests on the same
connection concurrently.
Note the concurrently.
For older versions of squid, it is stated
Besides the drawback of DG (double processing of http), I like the advantage of it being completely independent from squid, apart from its config as an upstream/downstream proxy to squid (parent).
So it is very easy to use together with squid. In case of throughput problems, it can simply be put onto
https://answers.launchpad.net/ecap/+faq/1793
very well describes a few of the obstacles, although they are solvable. I.e., a good solution should not rely on MIME types, as the article correctly states, but should analyse the data stream itself, to identify the HTML to be modified.
Regarding legal
The following error was encountered while trying to retrieve the URL:
http://ovidsp.ovid.com/autologin.html
Unable to determine IP address from host name ovidsp.ovid.com
The DNS server returned:
Timeout
Looks like a DNS problem. I can access the URL from Thailand via my squid.
That is possible, although not with squid. I have a working solution for this one, in production at a free hotspot at an airport, for example.
In case of interest, contact me. But this SW is NOT Open Source.
--