-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256
On 13.05.16 21:25, Heiler Bemerguy wrote:
>
>
> Ok but after enabling *collapsed_forwarding*, EVEN when disabling it
> again, the cache.log is full of *clientProcessHit: Vary object loop!*
>
Heh, I confirm. I see this when changing Squid's binaries to
Ok but after enabling *collapsed_forwarding*, EVEN when disabling it
again, the cache.log is full of *clientProcessHit: Vary object loop!*
What happened? Was my cache modified? collapsed is off now, and even
after restarting Squid I'm still getting a flood of these messages which
*were
On Sat, 2016-05-14 at 01:52 +1200, Amos Jeffries wrote:
> The default action should be to fetch each range request separately
> and in parallel, not caching the results.
>
> When the admin has set only the range offset & quick-abort to force
> full object retrieval, the behaviour Heiler mentions
On 14/05/2016 12:38 a.m., Garri Djavadyan wrote:
> On Fri, 2016-05-13 at 08:36 +1200, Amos Jeffries wrote:
>> Have you given collapsed_forwarding a try? It's supposed to prevent
>> all the duplicate requests making all those extra upstream
>> connections until at least the first one has
On Fri, 2016-05-13 at 08:36 +1200, Amos Jeffries wrote:
> Have you given collapsed_forwarding a try? It's supposed to prevent
> all the duplicate requests making all those extra upstream connections
> until at least the first one has finished getting the object.
Amos, I believe that the above
On Thu, 2016-05-12 at 14:02 -0300, Heiler Bemerguy wrote:
>
> Hi Garri,
> That bug report is mine.. lol
Hi Heiler,
Yes, I know it. I just tried to answer the following question.
> > > Is there a smart way to allow squid to download it from the
> > > beginning
> > > to the end (to actually
Do not worry about Vary; it's not a bug, it's the way the Vary handling
is set up. Still, it needs plenty of work. This is my guess after
looking at the code.
For range_offset_limit, use this test config. I've been using it for a
long time and it's wonderful:
collapsed_forwarding on
acl range_list_path urlpath_regex
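The urlpath_regex pattern above was truncated in the original message. A
minimal sketch of how such a test config might look in squid.conf; the
extensions in the regex are assumed examples, not from the original post:

```
# Collapse concurrent requests for the same URL into one upstream fetch
collapsed_forwarding on

# Hypothetical patterns: large installer/update files worth caching whole
acl range_list_path urlpath_regex -i \.(mar|iso|exe|msi)$

# For matching URLs, ignore the requested Range offset and fetch the
# whole object so it can be cached
range_offset_limit none range_list_path

# Never abort an upstream fetch once started, so the object completes
quick_abort_min -1 KB
```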
Hi guys
I just enabled "collapsed_forwarding" and noticed a lot of "Vary object
loop!" messages that weren't there before..
2016/05/12 19:17:22 kid3| varyEvaluateMatch: Oops. Not a Vary match on
second attempt,
On 05/12/2016 02:36 PM, Amos Jeffries wrote:
> Have you given collapsed_forwarding a try? It's supposed to prevent all
> the duplicate requests making all those extra upstream connections until
> at least the first one has finished getting the object.
For the record, collapsed forwarding collapses
Amos, you're a genius! I had completely forgotten about this setting,
silly me.
On 13.05.16 2:36, Amos Jeffries wrote:
> On 13/05/2016 7:17 a.m., Heiler Bemerguy wrote:
>>
>> I also don't care too much about duplicated cached files.. but
On 13/05/2016 7:17 a.m., Heiler Bemerguy wrote:
>
> I also don't care too much about duplicated cached files.. but trying to
> cache "ranged" requests is saturating my link, and in the end it seems
> it's not caching anything lol
>
> EVEN if I only allow range_offset to some urls or file
If all you have is a hammer, everything looks like a nail :)
On 13.05.16 2:08, Yuri Voinov wrote:
>
> In comparison, a cache of thousands of Linux distributions is, of
> course, a trifle, regardless of its purpose :)
>
> On 13.05.16 2:07, Yuri Voinov wrote:
In comparison, a cache of thousands of Linux distributions is, of
course, a trifle, regardless of its purpose :)
On 13.05.16 2:07, Yuri Voinov wrote:
>
> I recently expressed the idea of caching torrents using SQUID. :)
What an idea! I'm still
I recently expressed the idea of caching torrents using SQUID. :) What
an idea! I'm still impressed! :)
On 13.05.16 2:02, Yuri Voinov wrote:
>
> Updates, in conjunction with hundreds of OSes and distros, are better
> handled by a separate dedicated
Updates, in conjunction with hundreds of OSes and distros, are better
handled by a separate dedicated update server. IMHO.
On 13.05.16 1:56, Hans-Peter Jansen wrote:
> On Friday, 13 May 2016 01:09:39 Yuri Voinov wrote:
>> I suggest it is a very bad idea to
On Friday, 13 May 2016 01:09:39 Yuri Voinov wrote:
> I suggest it is a very bad idea to transform a caching proxy into an
> archive of Linux distros or anything else.
Yuri, if I wanted an archive, I would mirror everything and use local repos.
I went that route for a long time - it's a lot of work to
And I did not promise a silver bullet :) This is just a small
workaround, which does not work in all cases. :)
On 13.05.16 1:17, Heiler Bemerguy wrote:
>
>
> I also don't care too much about duplicated cached files.. but trying
to cache "ranged"
I suggest it is a very bad idea to transform a caching proxy into an
archive of Linux distros or anything else.
As Amos said, "Squid is a cache, not an archive".
On 13.05.16 0:57, Hans-Peter Jansen wrote:
> Hi Heiler,
>
> On Thursday, 12 May 2016
Hi Heiler,
On Thursday, 12 May 2016 13:28:00 Heiler Bemerguy wrote:
> Hi Pete, thanks for replying... let me see if I got it right..
>
> Will I need to specify every url/domain I want it to act on ? I want
> squid to do it for every range-request downloads that should/would be
> cached (based
IMO it is better to use
range_offset_limit none !dont_cache_url all
to improve selectivity between cached and non-cached URLs with ACLs.
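The dont_cache_url ACL referenced in that line is not defined anywhere
in the message. A hedged sketch of how it might be defined in
squid.conf; the regex pattern is an assumed example:

```
# Hypothetical ACL for URLs that should keep the normal range behaviour
acl dont_cache_url urlpath_regex -i \.(php|cgi)$

# Fetch full objects (no range offset limit) for everything except
# URLs matching dont_cache_url
range_offset_limit none !dont_cache_url all
```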
On 12.05.16 23:02, Heiler Bemerguy wrote:
>
>
> Hi Garri,
>
> That bug report is mine.. lol
>
> But I couldn't
Hi Garri,
That bug report is mine.. lol
But I couldn't keep testing it to confirm whether the problem was about
ABORTING downloads or just about downloading what's already being
downloaded...
When you use quick_abort_min -1, it seems to "fix" the caching issue
itself, but it won't prevent the
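As a sketch of the combination being discussed (directive names are real
Squid 3.x configuration; the pairing is an assumption drawn from this
thread, not from any one message):

```
# -1 disables quick abort: Squid keeps downloading an object to
# completion even after the client disconnects, so it can be cached
quick_abort_min -1 KB

# quick_abort alone does not stop parallel duplicate fetches of the
# same URL; collapsed_forwarding is what merges those into one
collapsed_forwarding on
```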
Hi Pete, thanks for replying... let me see if I got it right..
Will I need to specify every url/domain I want it to act on ? I want
squid to do it for every range-request downloads that should/would be
cached (based on other rules, pattern_refreshs etc)
It doesn't need to delay any
On Wednesday, 11 May 2016 21:37:17 Heiler Bemerguy wrote:
> Hey guys,
>
> First take a look at the log:
>
> root@proxy:/var/log/squid# tail -f access.log | grep
> http://download.cdn.mozilla.net/pub/firefox/releases/45.0.1/update/win32/pt-BR/firefox-45.0.1.complete.mar
> 1463011781.572 8776
On Wed, 2016-05-11 at 21:37 -0300, Heiler Bemerguy wrote:
>
> Hey guys,
> First take a look at the log:
> root@proxy:/var/log/squid# tail -f access.log | grep
> http://download.cdn.mozilla.net/pub/firefox/releases/45.0.1/update/win32/pt-BR/firefox-45.0.1.complete.mar
> 1463011781.572 8776