And it works great! Thank you Amos for your patch.
In previous Squid 3.3.x, DIGEST was very buggy: crashes, 407s, banners,
but now it seems very stable. Perhaps there are some little bugs
like this one, but now it's usable.
Thanks for your work
Hi,
William, to be more clear, this patch is
Maybe you should use a tool that was created for the sole purpose of
filtering web sites, like e2guardian, squidguard, etc.
Fred
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
Ok, thanks,
Tested with both nonce_count and nonce_max_duration, no problem. Do
you know if it works with Squid 3.5?
No, sorry, I don't know, but if the patch can be applied I guess it
can work.
Except if there are some changes in DIGEST between 3.4 and 3.5.
Hello,
FYI, the mailing list CVS archive seems broken
http://lists.squid-cache.org/pipermail/squid-cvs/
Fred
Should be called auth/digest/Config.cc now. Contents should not be
much different, just the filename.
Amos
Hello,
Thanks Amos,
Ok, so I will provide a patch in the bug report this week
Fred
Patch for squid 3.5.0.3
| Tested with both nonce_count and nonce_max_duration, no problem. Do you know
if it works with squid 3.5 ?
Be careful, check_nonce_count is broken: you can see in your log that there are
many unexpected 407s. My advice is to set check_nonce_count to off
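A minimal sketch of that workaround, assuming Digest auth is already set up (the helper path and realm below are placeholders, not from the thread):

```
# Digest auth with nonce-count checking disabled, to avoid the spurious 407s
auth_param digest program /usr/lib/squid/digest_file_auth /etc/squid/digest_pw
auth_param digest realm Example
auth_param digest check_nonce_count off
```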
It's
I have some issues with squid 3.5.1: sometimes the browser loads the
page partially (for example: header/footer without styles or missing
images); other times the browser displays a cannot connect to the
proxy (proxy refused connection) page.
The problem seems to appear more often with https
Stefano, are you using an authentication helper? And which browser version?
Regards,
Fred
http://numsys.eu
http://e2guardian.org
2) Due to the above problem I configured access control via htpasswd
using basic_ncsa_auth.
In this case, after being asked for the credentials and entering them
correctly, squid gives me access to the internet.
Now the question is: can I have the credentials expire after a
certain time?
I
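That expiry can be sketched with `credentialsttl` (paths here are assumptions; note it controls how long Squid caches a validated login before re-asking the helper, not a hard browser logout):

```
auth_param basic program /usr/lib/squid/basic_ncsa_auth /etc/squid/htpasswd
auth_param basic credentialsttl 2 hours
acl authenticated proxy_auth REQUIRED
http_access allow authenticated
```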
@FredB:
I really don't know what an identification helper is (I'm not a squid
guru, please explain or drop a link).
I'm on firefox 31.4.0esr (slackware linux 13.1).
I mean authentication from Squid, a pop-up asking for an account (login and
password)
@Eliezer:
As FredB said, the issue comes
root @ proxyhost /var/core # ulimit
unlimited
This is not related to ulimit -c; please take a look at ulimit -a.
ulimit -c unlimited in the startup script only sets your core file size to
unlimited
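To illustrate the distinction (a sketch; the startup-script location is an assumption):

```shell
# `ulimit` with no option prints only one limit (file size), which is why
# the bare "unlimited" above was misleading; -a lists every limit:
ulimit -a
# In Squid's startup script, raise the core file size so a crash leaves a dump:
ulimit -c unlimited
```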
Wow,
3.4.x and 3.5 have incompatible cache formats???
No, I'm just thinking about cache corruption; this could explain why the
problem is still present when returning to 3.4.
It's just an idea.
Regards,
Fred
http://numsys.eu
When I set half_closed_clients to on, I get many more assertions:
2015/01/15 18:41:31 kid1| assertion failed: comm.cc:1823: isOpen(fd) &&
!commHasHalfClosedMonitor(fd)
There is a problem with one assertion (or both)
Bug: http://bugs.squid-cache.org/show_bug.cgi?id=4156
Without
With half_closed_clients at its default (i.e. off), the problem persists,
but I catch a slightly different assertion.
FYI, I noticed a change between 3.3.x and 3.4:
fd_table[io.conn->fd].flags.socket_eof = 1;
becomes
fd_table[io.conn->fd].flags.socket_eof = true;
But as I said, I haven't taken a deep look
Sorry, I forgot: in client_side.cc, just before commMarkHalfClosed(io.conn->fd);
It is, but it's the latest version available through apt-get on Debian
7 without adding backports, which I may end up doing anyway. However
I don't think that is my problem; I think I may have missed
something in my config and was wondering if anyone had seen this
before with the usernames
Just wait for the next 3.5.x release.
Or today's 3.5 snapshot, r13752, has the fixes in it.
Amos
Great, I am going to be able to justify my time to my manager :)
And step away from the aspirin bottle ...
Regards,
Fred
Sorry for the noob question: is there a special command to wipe the
cache?
Depends. If your cache is not big, an rm or mv is enough:
http://wiki.squid-cache.org/SquidFaq/OperatingSquid#I_want_to_restart_Squid_with_an_empty_cache
Don't forget squid -z afterwards
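As a command sketch of that FAQ procedure (the cache path and ownership are assumptions; match them to your cache_dir and cache_effective_user):

```
squid -k shutdown                          # stop Squid cleanly
mv /var/spool/squid /var/spool/squid.old   # or: rm -rf /var/spool/squid/*
mkdir /var/spool/squid
chown proxy: /var/spool/squid              # your cache_effective_user
squid -z                                   # rebuild the swap directories
squid                                      # start again
```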
As I wrote in the first message,
Stefano, can you please try grep assert /var/log/squid/cache.log after
the blank page
Regards,
Fred
http://numsys.eu
http://e2guardian.org
Here is the output of grep assert /var/log/squid/cache.log:
2015/02/11 08:33:49 kid1| assertion failed: store.cc:1887:
isEmpty()
2015/02/11 09:38:05 kid1| assertion failed: store.cc:1887:
isEmpty()
Ok, it's what I thought. Can you please remove/format your cache and retry.
This is a
Yes, TCP_SWAPFAIL_MISS is the point
10.253.33.61 - fred [09/Feb/2015:10:50:21 +0100] GET
http://www.google-analytics.com/analytics.js HTTP/1.0 200 11932
TCP_SWAPFAIL_MISS:HIER_DIRECT Mozilla/5.0 (Windows NT 6.1; WOW64; rv:35.0)
Gecko/20100101 Firefox/35.0
Amos, about our conversation, this is
Which just means that the credentials have expired or been discarded.
Yes I know, but in this case the NONCE was valid; I'm trying with a very
long TTL.
SWAP_FAIL - blank page - 407 - New nonce
I guess the blank page breaks something
Though whether it's Squid or the browser is
Is it happening on all websites? or a specific one?
I am using 3.4.11 for most of my daily uses now.
In order to reproduce it I will need the OS and version, and, as I
assume it is self-compiled, the squid -v details.
Eliezer
Hi,
I'm just testing and writing some code for
missing whitespace separator between those options.
'--enable-icap-client' '--enable-follow-x-forwarded-for'
'--enable-basic-auth-helpers=LDAP,digest'
'--enable-digest-auth-helpers=ldap,password'
Syntax: --enable-auth-TYPE=HELPER,LIST
If you don't use that syntax to explicitly limit
Hi,
I'm trying workers and rock store; did I miss something?
workers 2
cache_dir rock /tmp/squid1 13000 max-size=1024
cache_dir rock /tmp/squid2 13000 max-size=1024
Squid Cache: Version 3.5.2-20150304-r13770
Service Name: squid
configure options: '--prefix=/' '--includedir=${prefix}/include'
I would like to know whether a single squid server can handle 1Gbps
of traffic.
Consider I have a hardware configuration of 64 GB RAM, a 12-core
processor and a 10 Gb NIC. Is it possible?
Depends on what the users are doing; there is a big difference between
A) One user is downloading an
Hi David,
I quickly tested something.
With E2Guardian I changed my header:
Mozilla/5.0 (Windows NT 6.1; WOW64; rv:35.0) Gecko/20100101 Firefox/35.0
to
Mozilla/5.0 (Windows NT 6.1; WOW64; rv:35.0) Gecko/20100101
Now I can use http://www.google.fr and http://www.youtube.com without
It is not true.
Squid3 in debian is newer than 3.1 as far as I remember, and if it's
not in the main repos then use the backports for it.
No, 3.1.20 is the latest in the stable release (wheezy)
It is but it’s the latest version available through apt-get on
Debian 7 without adding backports
Hi,
Fred
Squid directory permission is 644 with nobody:root and same is for
mime.conf and squid.conf
Hi
What is your cache user name? nobody?
e.g. my configuration:
cache_effective_user squid in squid.conf
/etc/squid
drwxr-xr-x 2 squid root 4096 Jun  3 10:48 squid
Maybe
On 11/06/2015 4:47 p.m., yashvinder hooda wrote:
Squid log says Permission denied for the file /etc/squid/mime.conf
While the permissions on it are
-rwxrwxrwx 1 nobody root 11364 May 9 15:40 mime.conf
And the squid directory permissions ?
user nobody ? Can you try chown -Rf
Hello,
Is there a way to tag an ACL in access.log ?
acl test url_regex /tmp/myfile
logformat fred %a %[ui %[un [%tl] %rm %ru HTTP/%rv %Hs %st %Ss:%Sh
%{User-Agent}h
access_log daemon:/var/log/squid/access.log fred
If I put something at the end
logformat fred %a %[ui %[un [%tl] %rm %ru
Yes, but not to the same log file. Like this:
access_log daemon:/var/log/squid/access.log squid !test
access_log daemon:/var/log/squid/access_test.log fred test
Amos
At least with squidguard you can't check the content (cookies, keywords in
html, bad words, etc).
It's just a URL/domain filter, but it can also block some objects contained in
the request, e.g. http://foo.com/test.mp3; it can't deny certain kinds of
browsers or header information.
Maybe
Hi all,
I think I misunderstand something, but why isn't refresh_pattern useless?
I mean the objects are supposed to be delivered with instructions from the web
server: lifetime, creation time, etc.
I thought (and it seems I'm wrong?) that squid checks the HTTP headers when the
object seems
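For context, a sketch of the heuristic: refresh_pattern only applies when the response carries no explicit freshness information (no Expires, no Cache-Control max-age); the patterns below are illustrative, not a recommendation:

```
# refresh_pattern [-i] regex MIN-AGE(minutes) PERCENT MAX-AGE(minutes)
# PERCENT = fraction of the object's age-since-modification treated as fresh
refresh_pattern -i \.jpg$  1440  50%  10080
refresh_pattern .             0  20%   4320
```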
Thanks Amos, I will discuss this in more detail with the dev of SSLMITM in E2
Fred
I can't answer that with any certainty. Though I am doubtful it would
help, since your problem is quoted-string end detection that goes bad
even before URL parsing gets involved.
Amos
Ok thank you Amos, let me know if I can help with some tests
Fred
Well. Yes, 3.4 has a serious CVE that needs releasing, so it will be
a thing this weekend.
But no other bug fixes in the past few months qualify as security
issues. So yes, you need to be moving on to 3.5. Especially if you are
using the ssl-bump features.
Amos
So, no chance for
Not sure we'll have free time for testing the previous 3.4; we now
have dozens of boxes to manually upgrade to the 3.5.6...
yes, we do use the original squid 3.5.6 package, no build mix here.
Ok I will. It would be interesting to understand what happens and if
there is something
Fred.
It depends on your OS.
On your hardware.
On your OS configuration.
Tuning is a very complex problem, and tuning is EVIL.
Remember it.
Yuri, my tests are very, very basic
I think in this case this is not a
Very interesting. I would add that I wonder which storage scheme we should
use in 2015 to get better performance with recent hardware and high load (more
than 800 r/s)?
Are there any recent benchmarks somewhere?
In my case I'm using diskd with some system tuning, noatime, separate disks for
Just use fast separate physical devices on separate controllers - and
all will be ok without any delays.
Of course, with this kind of load and without separate disks, Squid dies after
some minutes :)
I'm using separate drives with a noatime file system and I never found a way to
(completely)
Just a little word about aufs, just for information, to avoid
squidaio_queue_request: WARNING - Queue congestion
squidaio_queue_request: WARNING - Queue congestion
squidaio_queue_request: WARNING - Queue congestion
squidaio_queue_request: WARNING - Queue congestion
I had to increase this value
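If the value in question is the async-I/O thread pool (my assumption here), it is fixed at build time; 32 is an arbitrary example:

```
./configure --with-aufs-threads=32 ...
```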
Fred and Fred;
Could you guys who have been seeing these warnings logged please
present a grep of those cache.log lines so I can get a better handle on
how many doublings your queues are actually requiring?
I count 5 and 6 warnings respectively in FredB's two earlier log
traces.
Argh! Now a crash
2015/07/20 11:06:36 kid1| WARNING: swapfile header inconsistent with available
data
2015/07/20 11:06:36 kid1| Could not parse headers from on disk object
2015/07/20 11:06:36 kid1| BUG 3279: HTTP reply without Date:
2015/07/20 11:06:36 kid1| StoreEntry->key:
'accept-encoding=identity,gzip,deflate'
2015/07/20 10:15:00 kid1| clientProcessHit: Vary object loop!
2015/07/20 10:20:49 kid1| clientIfRangeMatch: Weak ETags are not
allowed in If-Range: bbfe4fbed01:0 ? 537965ecbcc2d01:0
2015/07/20 10:22:50 kid1| urlParse: Illegal hostname '.xiti.com'
2015/07/20 11:06:36 kid1| WARNING: swapfile header inconsistent with
available data
2015/07/20 11:06:36 kid1| Could not parse headers from on disk object
2015/07/20 11:06:36 kid1| BUG 3279: HTTP reply without Date:
2015/07/20 11:06:36 kid1| StoreEntry->key:
F5761430F887925196458A4696151E9C
Fred,
I compared the 2 diskd.cc sources, squid 3.4.8 and 3.5.6, both
official: no diff.
So using the 3.4 diskd with 3.5 does not seem to be a good idea; the
result should be the same.
Fred
No crash for you ?
I confirm this discussion's results.
So, it seems we'll have to switch all boxes from diskd to aufs, but I
think we could survive.
Anyway, we liked diskd because we saw good stability, but the
HIT objects are really too slow; all my clients are complaining, which
is why we did many tests yesterday and we found
All,
We have switched some ISPs from DISKD to AUFS this morning; the
queue congestion appears at the beginning then disappears from the
cache.log. For how long, nobody knows...
Yes, me too, but after a while I had
2015/07/15 13:36:07 kid1| DiskThreadsDiskFile::openDone: (2) No such file
This test means nothing. Only very approximate overall IO performance
for the IO subsystem.
Not nothing, I don't agree: it's not precise enough to indicate where the
problem is, OK with that, but if you change only diskd to aufs you
At this moment your user got a partially loaded web page.
Yes, a bad experience for me. I guess I've reached some aufs limitations;
fortunately I have no problem with diskd, but I'd like to increase
performance.
I will (re)test rock
Sorry, I forgot a real-life test
time wget
http://ec.ccm2.net/www.commentcamarche.net/download/files/youtube_downloader_hd_setup-2.9.9.23.exe
-v
--2015-07-15 15:22:03--
http://ec.ccm2.net/www.commentcamarche.net/download/files/youtube_downloader_hd_setup-2.9.9.23.exe
Connecting to
Subject: Re: [squid-users] AUFS vs. DISKS
Hi Fred,
tests from my side:
DISKD with TCP_HIT objects: 564KB/s with wget, the same url you
tested.
AUFS with TCP_HIT objects: 47.8M/s, same wget, same squid, same url,
same everything.
Wget with AUFS:
Length: 10095849 (9.6M)
Did you check the TCP_HIT response times with the Diskd ?
Yes
192.x.x.x - fred [15/Jul/2015:14:30:27 +0200] GET
http://ec.ccm2.net/www.commentcamarche.net/download/files/youtube_downloader_hd_setup-2.9.9.23.exe
HTTP/1.0 200 10096376 TCP_HIT:HIER_NONE Wget/1.13.4 (linux-gnu)
192.x.x.x -
I agree, but what about the lifetime? I change my sata drives every two
years (max 3)
It depends on your squid settings (memory cache size, etc), your OS
(as expected), your fs.
My installation has worked 4 years 24x7 with the shipped HDD.
Yes, in my case it depends on the number of reads/writes per second; I know
that I often
Fred,
We have upgraded 4 big ISPs to the latest 3.5.6 with AUFS; the
feedback is very good. I can tell you clients see a big (positive) change here.
We use the same settings in the squid.conf but AUFS instead of DISKD;
the difference is crazy...
In the past we moved to diskd due to too
> squid.conf
> auth_param digest program /usr/bin/php /etc/squid3/check_user.php
> auth_param digest children 5
> auth_param digest realm MySquidProxy
> auth_param digest nonce_garbage_interval 5 minutes
> auth_param digest nonce_max_duration 2 hours
> auth_param digest nonce_max_count 50
This
Hello,
I'm trying the latest Jessie + Squid 3.5.10 and there is something wrong with
ldap
Of course the package libldap2-dev is present, and there is no problem with
the same options on Debian Wheezy
Did I miss something?
Fred
./configure '--build=x86_64-linux-gnu' '--enable-cache-digests'
>
> I have investigated further the problem that brings CPU usage of the
> Squid process over 100%!
> We have this situation: Dansguardian on port 8080 and Squid on port
> 3128.
>
And without DansGuardian, same problem ?
> cgi-bin/a2/out.cgi
Hmm, Avast somewhere? In your log do you have the
This is not the right place to speak about DansGuardian
> OK, but in squid log i saw only the IP of listen
> dansguardian
Take a look at forwardedfor = on (dg) and follow_x_forwarded_for (squid)
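A sketch of that pairing, assuming DansGuardian runs on the same host (the acl name is mine):

```
# dansguardian.conf: put the real client IP in X-Forwarded-For
forwardedfor = on

# squid.conf: trust that header when it comes from DansGuardian
acl dg src 127.0.0.1
follow_x_forwarded_for allow dg
log_uses_indirect_client on
```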
> First, is there a way for dansguardian to pass the username to
> squid ? Second, on https sites
If I understand
>
> If I try a false domain like test.google.com there is a response from
> my DNS server (fail), so ok
> But if I retry after a short time - maybe one minute - there is again
> a DNS request
>
test.google.com is not the best example: the domain google.com exists but not
the subdomain
So, same
Hello all,
For a specific need, I want to reduce DNS requests, so I'm trying
positive_dns_ttl 6 hours
and
negative_dns_ttl 4 hours
But there is something wrong.
If I try a false domain like test.google.com there is a response from my DNS
server (fail), so ok
But if I retry after a short
>
> From http://www.squid-cache.org/Doc/config/positive_dns_ttl/
>
> "Upper limit on how long Squid will cache positive DNS responses."
>
> Note: "Upper limit" - not "lower limit", or "forced value".
>
> So, if the DNS response gives you a TTL of 15 minutes, and you've
> specified an
> upper
Hello
With 3.5.10 I can't set a percentage value of more than 100%
Something like
refresh_pattern -i \.gif$ 1440 500% 262800
refresh_pattern -i \.ram 2880 1000% 262800
The % has to be reduced below 100% - Squid Terminated abnormally -
Is this a new limit or a bug?
Regards
Fred
> Config parser bug I think. That is one place where % is legitimately
> much higher than 100%.
>
> Amos
>
Hi
Should I open a bug?
Just FYI
With a high-load system (and exactly the same configuration, of course) the
load average is significantly reduced by the latest release in comparison
with the previous 3.5.x versions.
diskd, digest auth, basic auth, delay pools, some acls, 800 r/s, Debian wheezy
64 bits
Fred
>
> For facebook? they are/were pretty good for cacheability before the
> HTTPS fanatics got to them.
>
> Amos
>
HTTPS everywhere is the new mantra
Fred
Fred,
look ;)
http://i.imgur.com/UBu13g0.png
Store-ID rulez! :)
Yes, very interesting. Can you share your byte ratio please? I will take a
look at increasing my cache as I discussed with Amos, but I can't touch the SSL
part (no bump
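For reference, Store-ID is enabled with a rewriter helper; a minimal sketch using the helper shipped with 3.5 (the database path is an assumption):

```
store_id_program /usr/lib/squid/storeid_file_rewrite /etc/squid/storeid_db
store_id_children 5 startup=1
```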
I'm thinking about something like this
dangerous or annoying for users ?
The context is many simultaneous users (thousands) with very different kinds
of profiles
Regards
FredB
> The cases I have personally seen that you might run into serious
> trouble
> with are .tiff files. TIFF is a "high quality" format. At least it's
> very high in detail, and I've seen it used with only no-store protection
> to send medical, mapping and hi-res photographic data around by
> >
> > refresh_pattern -i \.(htm|html|xml|css)(\?.*)?$ 43200 1000% 43200
> > -> This is my previous rule "http"
>
> Yes.
>
> Oh, and there is the less common .chm could be in that set too.
>
Ok, added
One last point: there is a real difference between (\?.*)?$ and (?.*)?$ Here
More precisely,
I reduced the ttl of the first line
refresh_pattern -i \.(htm|html|xml|css)(\?.*)?$ 10080 100% 10080
#All File 30 days max
refresh_pattern -i
\.(3gp|7z|ace|asx|bin|deb|divx|dvr-ms|ram|rpm|exe|inc|cab|qt)(\?.*)?$ 43200
100% 43200 ignore-no-store reload-into-ims store-stale
> Hi Fred,
> By keeping objects 30 days max, does it mean you expect all
> windowsupdate objects to be upgraded within 30 days ?
>
> I'm still thinking we should have an option forcing some type of
> objects
> that could never be deleted... ;o)
>
> Bye Fred
>
>
Hi
Yes perhaps, actually it's
>
> Based on previous answers, diskd is for freebsd with 1 process only,
> while ufs/aufs work with many processes.
> Also, as you said, it seems the diskd process was modified in the
> latest builds...
>
I don't know about freebsd; diskd is a separate process with light
resource consumption
>
>
> It happens without disk caches too. Was anyone able to reproduce it?
>
>
>
Same messages here: some days many, some days not one. A message among others:
2015/09/23 13:50:33 kid1| WARNING: HTTP: Invalid Response: Bad header
encountered from
- Original Message -
> From: "Sebastián Goicochea" <se...@vianetcon.com.ar>
> To: squid-users@lists.squid-cache.org
> Sent: Wednesday, 23 September 2015 19:12:33
> Subject: Re: [squid-users] Lots of "Vary object loop!"
>
>
> Hi FredB,
>
>
> If you want to achieve highest performance it is best to resolve that
> process collision issue. The wrongly indexed entries will be causing
> others to get expired earlier and maybe reduce HIT rate on them.
>
> The (rather large amount of) extra work Squid is doing to cope with
> the
>
So stupid, just a problem with webnoid dstdomain - ".test.fr" was needed for
some requests -
acl all-of is a very great feature!
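For reference, all-of (Squid 3.4+) ANDs several ACLs into one; a minimal sketch with made-up names:

```
acl authUser proxy_auth REQUIRED
acl workHours time 09:00-18:00
acl allowedCombo all-of authUser workHours
http_access allow allowedCombo
```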
Hi all,
Just for information, mixed results were obtained.
The HIT ratio increases 30% to 40%, but the bandwidth saved is still the same,
+/- 20%.
And the load average and CPU usage are a little higher (the regexes in the
refresh_patterns, I suppose)
Fred
>
> Hi Fred,
>
> No error, no crash.
> Some warnings only:
> 2015/07/21 11:21:02 kid1| DiskThreadsDiskFile::openDone: (2) No such
> file or
> directory
> But we can live with these warnings, Squid will take care the missing
> objects...
>
> Bye Fred
>
>
FYI
Tried with squid 3.5.9 and no
Hello all,
I'm trying to use a server with 64 GB of RAM, but I'm facing a problem:
squid can't work with more than 50% of the memory.
After that the swap is totally full and the kswapd process goes mad...
I tried with vm.swappiness = 0 but same problem; perhaps a little better. I
also tried
Thanks for your answer
> What is cache_mem ?
> See also http://wiki.squid-cache.org/SquidFaq/SquidMemory
>
Currently 25 GB
I tried different values, but I guess it doesn't matter; the problem is that
squid is limited to only 50% of RAM
> > After that the swap is totally full and kswap process gone mad
>
> Yes I guess this is a good track for me (more or less 2 now ...)
> Maybe half_closed_clients would help but unfortunately it crashes squid,
> Bug 4156
>
> Fred
Maybe this is also related to the post "Excessive TCP memory usage" because
Yes, I guess this is a good track for me (more or less 2 now ...)
Maybe half_closed_clients would help, but unfortunately it crashes squid
(Bug 4156)
Fred
Maybe I'm wrong, but the server is also using a lot of memory for TCP
cat /proc/net/sockstat
sockets: used 13523
TCP: inuse 8612 orphan 49 tw 31196 alloc 8728 mem 18237
UDP: inuse 14 mem 6
UDPLITE: inuse 0
RAW: inuse 0
FRAG: inuse 0 memory 0
netstat -lataupen | wc -l
38780
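If kernel TCP memory really is the culprit, the usual knobs live in sysctl; a hedged sketch for /etc/sysctl.conf (the numbers are placeholders to tune per host, not recommendations):

```
# Global TCP memory thresholds in pages: min / pressure / max
net.ipv4.tcp_mem = 262144 393216 524288
# Shorten the FIN-WAIT-2 timeout so closed sockets are freed sooner
net.ipv4.tcp_fin_timeout = 15
```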
>
> You are mentioning ufdbGuard. Are its lists free for government use?
> If not, then I can not use it, since we have very strict purchasing
> requirements, even if it costs $1. And of course, I would have to go
> through evaluation, the usual learning curve etc.
>
> Don't get me wrong here,
Hi All,
With FF and Squid 3.5.10, do you notice the login prompt appearing twice,
with it working the second time?
Digest or Basic auth, no matter; I tried with www.google.com as start page.
The only way to avoid this is to save the account in the browser.
To reproduce, remove the saved password,
There was a known bug about delay pools and HTTPS, but as far as I know it's
fixed now.
Did you test with 3.5.x?
Fred
You can easily do this with an acl; delay_pools is a very powerful tool.
eg:
64k of bandwidth for each authenticated user, except for acl bp, and only
during the times in acl desk
acl my_ldap_auth proxy_auth REQUIRED
acl bp dstdom_regex "/etc/squid/limit"
acl desk time 09:00-12:00
acl
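The example above is cut off; a sketch of how the missing delay_pools lines might continue (the pool number, class 4, and the 8000 B/s ≈ 64 kbit/s figure are my assumptions):

```
delay_pools 1
delay_class 1 4
delay_access 1 allow my_ldap_auth !bp desk
delay_access 1 deny all
# class 4 buckets: aggregate / network / host / per-user; -1/-1 = unlimited
delay_parameters 1 -1/-1 -1/-1 -1/-1 8000/8000
```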
I guess you have an acl with proxy_auth ?
Something like acl ldapauth proxy_auth REQUIRED ?
So you can just add http_access allow ldapauth !pdfdoc and perhaps http_access
allow pdfdoc after
Fred
>
> As I'm currently updating too: is this a bug or have I only to clear
> the
> old cache directories to prevent these error messages?
>
As far as I know, no, I tried
Amos I don't know if this is related or not, but I have a lot of
2016/04/12 13:00:50| Could not parse headers from on disk object
2016/04/12 13:00:50| Could not parse headers from on disk object
2016/04/12 13:00:50| Could not parse headers from on disk object
2016/04/12 13:00:50| Could not parse
Oh sorry
Ok, it seems to work for me
>
> Attached is a patch which I think will fix 3.5.16 (should apply fine
> on
> 4.0.8 too) without needing the cache reset. Anyone able to test it
> please?
>
Resetting the cache is still needed, at least in my case
Fred
Hello
I migrated my Squid to the latest version 3.5.16 (from 3.5.10) and now I have
many, many "Vary object loop" messages.
What happened? I made no configuration changes.
After 1 hour:
Squid 3.5.16
grep "Vary" /var/log/squid/cache.log | wc -l
18176
Squid 3.5.10
grep "Vary" /var/log/squid/cache.log | wc