Any special messages in cache.log ?
BTW:
Instead of
cache_dir ufs /var/spool/squid 1 16 256
I would prefer
cache_dir aufs /var/spool/squid 1 16 256
# Might need ./configure && make in case your squid is not built with aufs support.
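To check whether your binary already supports aufs, and to rebuild if not (a quick sketch; the configure flags are the standard ones, the install prefix is whatever you use):

squid -v | grep storeio      # aufs should be listed in --enable-storeio
./configure --enable-storeio=ufs,aufs --enable-async-io
make && make install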
A good start would be to read this one:
http://wiki.squid-cache.org/Features/Wccp2
client_side.cc okToAccept: WARNING! Your cache is running out of
filedescriptors
This will definitely slow down squid.
Two possibilities:
1) Rebuild squid:
./configure --with-maxfd=4096  # for example
2) Set it in squid.conf:
# TAG: max_filedescriptors
#   The maximum number of filedescriptors supported.
#
#   The default "0" means Squid inherits the current ulimit setting.
Assuming you are logged in as root, the simplest is to edit
/etc/init.d/squid and insert
ulimit -n 4096
before the actual start command for squid.
Then restart squid, and the new limit should be effective.
This is also a permanent solution, surviving a reboot.
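Illustrative placement (the start command shown is a placeholder for whatever your init script already uses):

# /etc/init.d/squid (excerpt)
ulimit -n 4096                  # raise the open-file limit for this shell
/usr/local/squid/sbin/squid     # existing start command, unchanged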
Better make it
http_access deny !SSL_ports
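For reference, SSL_ports is the ACL defined in the stock squid.conf:

acl SSL_ports port 443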
Having tried to decipher the principles of rock some time ago, my
impression at that time was that this long rebuild period is caused by the
design of rock: there must be a scan of the rock area to find all content,
and then squid's in-memory pointers must be initialized. 16GB of rock
storage will
Have a look at
iostat
It should help to see whether there is also a burst in disk I/O activity when
CPU peaks. That might indicate flushing of buffers.
Which filesystem do you use for the squid-disks ?
ext4 for example has quite a few options to reduce I/Os.
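One example of such an option, as a sketch (device and mountpoint are placeholders): mounting with noatime avoids an inode write for every file read:

/dev/sdb1  /var/spool/squid  ext4  noatime,nodiratime  0 2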
Thanx, I did not think about the simple solution :-)
However, this does not work for CONNECT. Not because of squid, but because
of a new standard set by the browser developers NOT to display a proxy's
custom error message in this special case.
Ref. here, for example:
Should be quite easy. The way to go is to define a parent proxy in your
private squid as default,
and then use always_direct/never_direct (squid.conf keywords) for the
exceptions.
So, something like
cache_peer 192.168.0.1 parent 6139 0 no-query no-digest no-netdb-exchange
#To define co's
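A minimal sketch of the exception handling (the LAN subnet is an assumption):

acl localdst dst 192.168.0.0/24
always_direct allow localdst    # fetch LAN destinations directly
never_direct allow all          # everything else must go via the parent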
Actually, I have a private proxy on the web, set up to be used only by me,
when on the road, in areas of slow wired/wireless connection. Part of it is
to remove ads when browsing. This is done using local DNS on the squid
machine, pointing well-known ad sites to 127.0.0.1. And a patched web
Good morning ! :-)
In the worst
case I would have to run a cron job updating the host entry every minute
or so - sounds horrible.
As far as I understand, you have two problems:
1) squid's strategy to find the best parent using DNS
2) the parent's dynamically changing IP
The first one should be solvable by a
According to their site
http://www.shallalist.de/
the list is updated once a day.
He already did it successfully. Please refer to his last posting.
Depending upon how flexible your cookie-rewriting must be, and which version
of squid you are using, small patches to the squid sources should do the job
nicely.
Being (still) a fan of squid2.7, I patched squid to insert a special header
line into the response to the browser.
Not a big deal.
Besides some sort of auth, I used another scheme on LINUX: set up a script
on the client to email any change of its dynamic IP to the proxy machine.
There, modify the firewall rule to allow access to the proxy for the new IP.
As dynamic IPs usually do not change more than a few times a day, this
should be sufficient.
how to ensure that my client does not use my proxy outside his office?
In case the client has a fixed IP, you could set up an ACL in your squid
only allowing access from this special IP.
Or, even better, you just define an iptables rule to allow only the
client's IP to access the squid port.
You will
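Both variants as a minimal sketch (the client IP 203.0.113.10 and port 3128 are assumptions):

# squid.conf
acl office src 203.0.113.10
http_access allow office
http_access deny all

# iptables alternative
iptables -A INPUT -p tcp --dport 3128 -s 203.0.113.10 -j ACCEPT
iptables -A INPUT -p tcp --dport 3128 -j DROP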
Hi, on the MikroTik forum I already gave you the advice to first set up squid
as a direct parent to the MT box (between Gateway and Mikrotik), to make it
easier for the beginning.
Did that work ?
I have used zypper only a few times, long ago. I am more used to yast on
SuSE 11.x. Do you also have yast available ?
If not, try to install the yast package. Then you should be able, first of
all, to remove the current squid.
If yast is not an option, you will need to use zypper --help or google for
the correct zypper command to remove
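Probably something along these lines (a sketch; the package name may differ on your release):

zypper se squid    # search for the installed squid package
zypper rm squid    # remove it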
IOs have a variable size, and for writing an object to a file with the aufs
store,
the OS writes metadata to the filesystem journal, updates the inode table,
and writes the data to a new file.
So with aufs, one logical 'write object to disk' costs 3 IOs.
I do not know the internals of rock fs but
Most likely you have to remove the definition of DG as cache peer in your
original squid.conf.
Something like
cache_peer 127.0.0.1 parent 8080
has to be removed.
follow_x_forwarded_for allow localhost
acl_uses_indirect_client on
delay_pool_uses_indirect_client on
log_uses_indirect_client on
... are all Squid config lines to integrate with DG. Try removing them,
or at least commenting out the first one.
no_cache deny no_cache_sites
http_access deny
So it looks like squid3.1.12 does not handle HEAD properly. Having a quick
glance at the old squid bugs, I did not see anything directly relevant to
this. Anyway, as it is an old version, no bug fixes will be done, I guess.
So I would suggest you upgrade at least to the very last 3.1.xx and check
Although being a fan of MS, I would assume a squid2.7/Windows-specific
problem.
Because I still have several squid2.7/ubuntu instances running, and cannot
remember such a problem for my Windows users.
But I am using persistent server connections with the very last squid2.7
Interesting question. Did you compare this behaviour to squid2.7 using
storeurl ?
I set up a squid2.7-HEAD5 with default squid.conf, except for
cache_dir none /tmp
Set up IE to use this proxy explicitly.
And then used IE to access the video on assmann.de.
About 5s to show up, no problem. But I still cannot find the HEAD request
in my logs. Strange.
My IE version is 11.0.96
Which
I have built another squid:
Squid Cache: Version 3.1.12
configure options: '--prefix=/usr/local/squid3112' '--disable-carp'
'--disable-htcp' '--disable-auth' '--disable-wccp' '--disable-wccpv2'
'--with-large-files' '--disable-external-acl-helpers' '--enable-storeio=ufs'
Hm, your issue seems to be similar to this one, IE-specific, too:
http://superuser.com/questions/560647/ie10-unexpectedly-sends-a-head-request-for-pdf-what-has-changed
It is necessary to change your server process to expect a HEAD request.
Both HEAD and GET requests with a user agent of contype should only return a
content type and not the data.
No, this has to be done on the server itself (assmann.de).
So it looks more like an IE-server issue, not really a
Comparing your info with my squid logs, the HEAD request is missing in my
squid log. So I will first do a second test, to make sure, and try to
disable the cache in my squid.
What I notice in your fiddler trace: there is another squid3.1.12 in between.
When you directly access
the video, without your squid,
I would not consider your squid to be the problem.
I have several squids (2.7 and 3.11) connected to various upstream proxies
(other squids, but also private, custom-written proxies), running on the same
or other machines, and cannot remember such a problem.
Maybe you could toggle persistent conns
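The squid.conf knobs for that, for reference (both directives exist in 2.x and 3.x):

client_persistent_connections off
server_persistent_connections off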
Hi,
I have 2.7.STABLE9-20110824 up and running in transparent mode, on SuSE
11.3.
No problem accessing
http://www1.wdr.de/mediathek/video/sendungen/lokalzeit/lokalzeit-aus-duesseldorf/videovodafonehauswirdwirtschaftsministerium100-videoplayer_size-L.html
using either FF or IE. I did check it.
I did the following:
cleared the cache in IE
stopped my squid
cleared squid's cache
started squid
Used IE to access www.assmann.de and then clicked the red camera in the
video.
The video (from WDR) showed up. No problem.
In access.log of squid I had:
1389366402.568    227 192.168.0.55 TCP_MISS/200 5346 GET
1) Which version of squid ?
2) Anything special in cache.log ?
3) You might also try to start squid directly and check the output:
cd /usr/local/squid/sbin
./squid -d 9
Best regards :-)
squid does not create the directory, so it must exist, and squid needs write
access to it.
Usually, squid runs as user nobody, which is the default in squid.conf.
So you need
chown nobody /var/log/squid
to give squid write access.
After some more testing, it looks like the DNS-hack could still be the best
solution. The URL-rewrite works for explicitly requesting google.com from
the browser and then entering the search string into google's page; but when
entering a search string into the search box of Firefox, for example,
Just to let you know:
Rewriting the URL google.com -> nosslsearch.google.com (using
url_rewrite_program) AND
url_rewrite_host_header off
works with my (beloved) squid27.
Thanx for your good idea !
More convenient compared to the DNS-hack.
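A minimal sketch of such a rewrite helper (hypothetical path; squid2.7's line-per-request helper protocol, where an empty reply line means "no change"):

#!/bin/sh
# nosslsearch.sh - rewrite www.google.com to nosslsearch.google.com
while read url rest; do
    case "$url" in
        *://www.google.com/*)
            echo "$url" | sed 's#://www.google.com/#://nosslsearch.google.com/#' ;;
        *)
            echo "" ;;    # empty line: leave the URL unchanged
    esac
done

Wired up in squid.conf:

url_rewrite_program /usr/local/bin/nosslsearch.sh
url_rewrite_host_header off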
Just out of curiosity I implemented the DNS-hack you mentioned some time
ago, and it worked.
And I succeeded in removing the warning, which pops up as well, by content
adaptation, to make it almost transparent.
However, your method seems to be more elegant, but, according to squid docs
for
I have a squid-A for my LAN, which forwards all http-traffic to a parent
squid-B using simple config:
..
cache_peer x.x.x.x parent 0 no-query no-digest no-netdb-exchange
..
never_direct allow all
But in case the parent (squid-B) is not reachable, the clients accessing
squid-A
What OS?
LINUX
What version of squid?
3.3.11
What is the level at which squid-B is not reachable? PING level? route
level? TCP level?
TCP level. This should cover all reasons, like
- server running squid-B is OFF
- squid-B is down, but server running, network connection UP
- comms/routing
http://dansguardian.org/
has a FREE version.
For filtering you can also use another proxy, chained to squid:
http://dansguardian.org/
cachemgr.cgi for squid 3.3.10:
Why are no client-side persistent connection counts displayed in the
Persistent Connection Utilization Histograms, although client persistent
conns are enabled in squid.conf ?
I only see server-side statistics:
Pool 0 Stats
server-side persistent connection counts:
I would guess a problem regarding proxying/forwarding of HTTPS on the CentOS
machine.
Similar problems with other HTTPS sites, e.g. https://example.com ?
I can only speak about a multi-2.7 config, running one 2.7-squid for
authentication and load distribution, and 2 more 2.7-squids as backends on a
4-core machine.
Using 2.7 because of some unique features and private patches, and because
I consider it rock-solid for my requirements.
This config
In principle, this should also be possible in versions other than 2.7.
As a start:
http://wiki.squid-cache.org/ConfigExamples/ExtremeCarpFrontend
http://wiki.squid-cache.org/ConfigExamples/SmpCarpCluster
Problem not to overlook: the possibility of double-caching the same content.
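An illustrative 2.7-style frontend snippet (ports and names invented); leaving the frontend itself without a real cache avoids the double caching:

cache_peer 127.0.0.1 parent 4001 0 carp no-query no-digest name=backend1
cache_peer 127.0.0.1 parent 4002 0 carp no-query no-digest name=backend2
cache_dir null /tmp    # frontend does not cache; requires the null store (--enable-storeio=...,null)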
YouTube are
constantly changing their site, both to improve their service and to
fight back against admins caching the content.
Maybe this is the reason that not much info about caching YT videos is
available on the web. The ones in the know might have the impression, in
case the info about
Sorry, I do not understand your problem. (Happy user of 2.7 myself. Terribly
stable).
In the 64 bit environment, the squid process (64 bit binary) ideally should
consume more memory when it is configured with large disks (10MB per 1GB).
I have 1TB configured for my cache_dir. I would assume that squid would
consume at least 10GB of memory. However it stays at 3.8GB consumption.
Depends
Then you will need to patch squid's sources even more. Let me know when done,
as I already did the first 50% :-)
For my 2.7 I edited errorpage.c:
    ERR_SQUID_SIGNATURE,
    /* RKA -
    "\n<BR clear=\"all\">\n"
    "<HR noshade size=\"1px\">\n"
    "<ADDRESS>\n"
    "Generated %T by %h (%s)\n"
    "</ADDRESS>\n"
    */
That removes part of squid's signature in the last line. I also do not
My suspicion is some problem here:
store_dir_select_algorithm least-load
# cache_dir aufs /mnt/cachedrive1 1342177 128 512
cache_dir aufs /mnt/cachedrive2 1426227 128 512
cache_dir aufs /mnt/cachedrive3 1426227 128 512
cache_dir aufs /mnt/cachedrive4 1426227 128 512
cache_dir aufs
Back to original squid.conf:
Instead of
follow_x_forwarded_for allow localhost
acl forwardTrafficSubnet1 src 172.21.120.0/24
cache_peer 172.21.120.24 parent 8881 0 proxy-only no-query
cache_peer_access 172.21.120.24 deny forwardTrafficSubnet1
never_direct deny forwardTrafficSubnet1
Sorry, Amos, not to waste too much time here on an off-topic issue, but
an interesting matter anyway:
I ACK your remarks regarding disk controller activity. But, AFAIK, squid
does NOT directly access the disk controller for raw disk I/O; the FS is
always in between instead. And that means that a
As I already guessed in my first reply, you are reaching the max limit of
cached objects in your cache_dir, as Amos explained. This renders part of
your disk space ineffective.
However, as an alternative to using rock, you can set up a second ufs/aufs
cache_dir.
(Especially in case you
Erm. On fast or high traffic proxies Squid uses the disk I/O capacity to
the limits of the hardware. If you place 2 UFS based cache_dir on one
physical disk spindle with lots of small objects they will fight for I/O
resources with the result of dramatic reduction in both performance and
disk
The relatively low byte-hitrate suggests that somewhere in your
squid.conf there is a limit on the maximum object size to be cached. It
might be a good idea to raise it to a larger value,
because it seems you still have a lot of disk space available for caching.
So you might post
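The relevant knob, as an example (the value is an assumption; defaults are much smaller):

maximum_object_size 512 MB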
You should install and use
http://wiki.squid-cache.org/Features/CacheManager
This gives you a lot of info regarding cache performance, like hit rate etc.
Having 556 GB of cache within one cache dir might already hit the upper
limit of max. number of cached objects, depending upon the avg size
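Rough arithmetic, assuming (if I recall correctly) the 2^24 (about 16.7 million) file-number limit of a ufs/aufs cache_dir: 556 GB / 16.7 million objects comes out at roughly 33 KB, so the disk only fills completely if the average cached object is at least that size.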
Send me an email in case you are willing to spend some $ for the solution,
which has been in production for a while already.
Not only medical or space. It is just a question of the shortest response
times, which depend upon the HW generation. When doing RT-systems on old
16/32-bit systems, these were in the range of (fractions of) microseconds,
because of instruction execution times, which should be shorter nowadays.
Besides, such
Squid is 100% one of the systems that tries and succeeds on these
specific tasks.
A bit too optimistic. I did a lot of assembler programming (incl. device
drivers for special HW) for RT-systems (16bit/32bit), using OSes which were
especially designed to handle RT tasks using HW/SW interrupts. I
Depends on what the definition of RT is...
RT should be something reliable for a human to use in realtime
What do you think?
You are talking about online systems with response times considered to be
immediate for human beings, which is in the range of ms.
Yes.
Assuming you have different cache_dir, squid.conf etc.
I have 8 squid2.7 instances running on one server.
However, I copied the squid2.7 binary to 8 different binaries. I do not know
whether this really is necessary or not.
I did not copy the helper binaries, like unlinkd etc.
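The usual alternative is one binary started several times with per-instance configs via -f (paths invented):

/usr/local/squid/sbin/squid -f /etc/squid/squidA.conf
/usr/local/squid/sbin/squid -f /etc/squid/squidB.conf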
No problem.
In fact, only the next lines in squid.conf (Proxy Local) are mandatory,
assuming the Proxy Distant has the IP
199.199.199.199 and is listening on port 8080:
#Next line defines Proxy Distant to be the parent proxy
cache_peer 199.199.199.199 parent 8080 0 no-query
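Plus, typically, a line to force all traffic through the parent (as in the never_direct examples elsewhere in this thread):

never_direct allow all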
Amos,
One subtle and noteworthy difference between Squid-2 and Squid-3 which is
highlighted by this feature is that refresh_pattern applies its regex
argument against the Store ID key and not the transaction URL. So using the
Store-ID feature to alter the value affects which refresh_pattern
We have these in the StoreID part of the docs since there is no chance
of doing a refresh_pattern on StoreID without understanding the concept.
That is NOT the point.
As far as I understand,
up to and including squid2.7 the refresh_pattern was based on the transaction
URL, whether
Sounds familiar to me on Suse.
Somehow I was not able to completely disable all IPv6 support on my
openSuSE either, and had effects similar to yours.
Try
./configure --disable-ipv6
to compile squid from source as a workaround.
3. Running 3 browser sessions on same video at the same time with
StoreID - 1 or 2 of those browsers sessions will halt with an error
occurred..., please try again later, while at least 1 will run to
completion with no problems.
Interesting.
I am using squid2.7 (patched), with Store-URL.
CARP is the type of hierarchy you want for scalability if SMP is
unavailable. It is two-layer and each layer is independently scalable.
It does
require specific hardware ..
Why specific HW (a multi-core CPU will seriously benefit from this setup,
although it is not a requirement) ?
(I am
FYI: On my ubuntu no problem with
Squid Cache: Version 2.7.STABLE9-20110824
configure options: '--enable-storeio=ufs,aufs,null' '--enable-async-io=64'
'--enable-carp' '--enable-delay-pools' '--enable-useragent-log'
'--enable-basic-auth-helpers=NCSA'
Generated from source
But that means your problem is more related to your debian system.
You should build your squid from source and check if the error still exists.
I have /lib/x86_64-linux-gnu/libc-2.15.so, without problems.
Good idea.
It should not be too complicated to modify storeUfsWrite/storeUfsRead,
for example, to include some type of compression.
However, the question is how effective it would be, as there are many graphic
file types that are not easily compressible. So it could result in a lot of
wasted CPU cycles.
A
First of all, the question was more how to do it, not why to do it.
Anyway, there can be two reasons:
- You might reduce stress on busy disks because of smaller transfers, and
more buffering before transfer.
- One step towards bandwidth saving towards the client.
Although, in general you are correct.
I would simply use different ACLs, blocking access for the different
categories, and then use the appropriate denied page for each.
AFAIK a denied page can only consist of plain HTML.
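A sketch of per-category deny pages via deny_info (ACL name, file, and template name invented):

acl adult dstdomain "/etc/squid/adult.domains"
http_access deny adult
deny_info ERR_ADULT adult    # ERR_ADULT is a custom template in squid's errors directory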
Syntactically no problem with
http_access allow all
acl naargoogle dstdomain youtube.com
http_access deny naargoogle
However, you probably want this one:
acl naargoogle dstdomain youtube.com
http_access deny naargoogle
http_access allow all
Otherwise the deny will never be hit: squid checks http_access rules top to
bottom and stops at the first match, so with allow all first, the deny line
is never reached.
I am buying a new router that has enough ROM and RAM to support openwrt +
squid,
Hi,
I am using one of these PCs as a router: http://www.pcengines.ch/
running this embedded LINUX: http://linux.voyage.hk/,
which is similar to ubuntu.
As the ALIX board has 256MB RAM + 4GB Compact Flash, I run squid on
OK. Then:
1) include
debug_options ALL,5 33,2 28,9
in squid27A.conf
2) Stop squid27A, remove all logfiles of squid27A, start squid27A
3) request a video from youtube
4) stop squid27A again
5) post here
- squid27A.conf
- output of squid27A -v
6) Give me a link, so I can download cache27A.log.zip and
Yep.
Something to start with might be
refresh_pattern (^http://testdomain\.com/cache-me/) 5 % 5 override-lastmod override-expire ignore-reload ignore-private ignore-no-cache negative-ttl=0
Hi,
just my two cents:
To my understanding, what is going on here: the original simple
tcp_outgoing_address yyy.yyy.yyy.239
forces squid to use this outgoing address for all connections,
overriding or taking precedence over the cache_peer condition.
It will just depend upon the sequence
What does squid -v say ?
Are you using LUSCA as squidA ?
This should explain the unexpected behaviour.
i put
debug_options ALL,5 33,2 28,9
in squid.conf(A)
but nothing new in cache.log and nothing forwarded to cache(B)
Because this can NOT be the case with a valid 2.7, for example. LUSCA sources
might have other debug sections.
Sorry, I can not help any further.
Amos,
although a bit off topic:
It does not work the way you seem to think. 2x 200GB cache_dir entries
have just as much space as 1x 400GB. Using two cache_dir allows Squid to
balance the I/O loading on the disks while simultaneously removing all
processing overheads from RAID.
Am I correct
Sorry, I was wrong.
This has been possible in squid.conf since squid 3.1; I did not recognize it:
cache_peer 192.168.158.105 parent 3128 no-tproxy
So you are using tproxy. Another piece which might go wrong when doing the
forwarding.
This should force all requests to be forwarded, but obviously
Likely cause:
ACL videocache_allow_dom
does not exist.
Why do you switch your name for posting ? Anything to hide ?
3. try:
Did you edit this one in squid.conf on cacheA ?
debug_options ALL,5 33,2 28,9
It will log the ACL evaluation and the forwarding decision in cache.log.
Stop squidA, delete cache.log, start squidA, go to youtube.com and request
one video.
Stop squidA again.
Did you look at it ?
Obviously, you are using a special squid, because this would not be
possible otherwise:
cache_peer 192.168.158.105 parent 3128 no-digest no-query no-tproxy
proxy-only name=video
(no-tproxy ??)
Interesting, I would assume all traffic to be forwarded to cache_peer
because of this
never_direct
cache_peer_domain cacheA !youtube.com
cache_peer_domain cacheB youtube.com
Frankly speaking, as you are willing to pay for a license of videocache,
you should also try to get help from over there on how to use their product.
As an alternative, you might ask for paid consultation regarding
It is always a good idea to post the full squid.conf
I am interested, so I sent you an email.
Please confirm.
YES: Very important:
READ IT ! :-)
I emailed you detailed info.
The symptom of the bug would be that the data of one video request receives
the storage key of the request for another video, and vice versa.
This can only occur when you have 2 or more users accessing yt at the same
time,
or one user watching yt in 2 browser windows.
Amos,
a bit off-topic, but anyway: How transparent is squid -k reconfigure
for the user ?
Does squid simply refuse new connections and wait until busy conns are
finished, then do the reconfig ?
When doing a single-user test of my simple method, I did not notice any
glitch during
Ah, thanx a lot.
I only looked into this one, mentioned on top:
this is my store-id
https://code.google.com/p/tempat-sampah/source/browse/store-id.pl
and therefore did not see or implement all the ideas from the notes in
https://code.google.com/p/tempat-sampah/source/browse/storeurl.pl
OK; mea culpa, so sorry.
Applause, it works. The HITs show up again in access.log.
You did a great job.
Sorry, but the code here
i have tested my store-id for youtube, fbcdn, ytimg and sourceforge.
overall HIT
this is my store-id
https://code.google.com/p/tempat-sampah/source/browse/store-id.pl
hope this store-id can help others
will not work correctly for youtube on busy systems. Needs
so i need squid or a helper to get it from the old log. not the whole log,
but some lines.
Thanx.
Clever :-)
I did not find a solution to a problem similar to yours for my proxy:
limiting the daily/monthly download volume for users identified with basic
auth.
So I wrote a simple external helper analyzing the access log. Every user who
exceeds his limit is entered into a simple textfile, which
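A minimal sketch of how such a helper could be wired up (ACL name, paths, and the over-quota file are all invented):

external_acl_type quotacheck %LOGIN /usr/local/bin/check_quota.sh
acl overquota external quotacheck
http_access deny overquota

#!/bin/sh
# check_quota.sh: the ACL matches (OK) when the user is listed as over quota
while read user; do
    if grep -qx "$user" /etc/squid/overquota.txt; then
        echo OK     # over quota -> ACL matches -> request denied
    else
        echo ERR    # under quota -> ACL does not match
    fi
done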
Looks like yt is reading here as well :-)
Because now the id of a specific video is not fixed any more.
For example:
http://www.youtube.com/watch?v=dphpDdfZUGw
I see different ids for this video, like
id=o-ALyS_m1Y2dQ-mQfHaKDyABBh4Jnk9xFMDkuHmO8nL5HI
id=o-ANfyeW9vHcb586VYbrcx4iAB6zn20fr9lSgiknEZ04ZO
Maybe you have an effect I already had myself and filed as a bug:
http://bugs.squid-cache.org/show_bug.cgi?id=3760
Fine, at least some progress.
Now you should upgrade your squid and verify that http forwarding still
works.
http://wiki.squid-cache.org/Features/BumpSslServerFirst
is based on squid 3.3,
but you have 3.1.
You should not use any wiki/info other than what was directly mentioned here
in the forum.
Because
There is a bug in new FF regarding kerberos auth. Maybe that matters:
https://bugzilla.mozilla.org/show_bug.cgi?id=857291
Regarding:
http://wiki.squid-cache.org/action/show/Features/Authentication#Can_I_use_different_authentication_mechanisms_together.3F
It states:
Due to a bug in common