Re: [squid-dev] [PATCH] Purge cache entries in SMP-aware caches
On 23.07.2017 02:04, Alex Rousskov wrote:
> +if (!flags.cachable)
> +    EBIT_SET(e->flags, RELEASE_REQUEST);
>
> This release request feels out of place, and direct flag setting goes
> around the existing releaseRequest() API. Please check all callers --
> perhaps we do not need the above because all callers already do an
> equivalent action (e.g., makePrivate()) for "uncachable" requests?

I don't think these lines are "out of place": storeCreatePureEntry() just initializes the fields of the newly created StoreEntry (including StoreEntry::flags) with correct values. If we know for certain at this moment that 'flags' should have RELEASE_REQUEST set, why postpone this to the many callers, hoping that all of them will do that work correctly? There are lots of storeCreateEntry() calls, and it is hardly possible to verify that all of them end up calling releaseRequest() when flags.cachable is false.

BTW, at the time of StoreEntry initialization we do not need to do most of the work releaseRequest() does. E.g., there are no connected storages to disconnect from, no public keys to make private, etc. The only thing to do is set the RELEASE_REQUEST flag.

> +SWAPOUT_NONE, ///< the store entry has not been stored on a disk
>
> This definition seems to contradict the "else" usage in
> Rock::SwapDir::disconnect(). What does the SWAPOUT_NONE state correspond
> to in that "else" clause? A readable disk entry? It may be tempting to
> use SWAPOUT_DONE in that "else" clause inside disconnect(), but I
> suspect that another worker may still be writing that disk entry (i.e.,
> there is a SWAPOUT_WRITING in another worker). We may need to either
>
> * broaden the SWAPOUT_NONE definition to cover that "else" clause, or
> * call anchor.complete() and use SWAPOUT_DONE/SWAPOUT_WRITING in that
>   "else" clause, similar to what Rock::SwapDir::anchorEntry() does.
>
> Please investigate and suggest any necessary changes.

This patch introduces an invariant, StoreEntry::checkDisk(), for the disk fields. According to this rule, a disconnected/unlinked entry (swap_dirn < 0) must have SWAPOUT_NONE. So I assume we should correct the SWAPOUT_NONE definition, e.g.:

/// a store entry that is either unlinked from the disk Store or
/// has not been stored on a disk

Thanks,
Eduard.

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev
Re: [squid-dev] Build farm updates?
Yes, I agree. Eduard, let's align on this, possibly in real time (chat or Skype/WhatsApp/whatever).

Thanks!

On Sun, 23 Jul 2017 at 23:04, Alex Rousskov <rouss...@measurement-factory.com> wrote:
> On 07/23/2017 10:21 AM, Kinkie wrote:
>
>> is it worth investing time in freshening up the current build farm
>> setup, or is it preferable to abandon it altogether in favor of a
>> newly built one?
>
> I would be surprised if we should abandon the current build farm as a
> whole, but perhaps you have some reason to believe it should be done?
>
> Please coordinate with Eduard on this. My short-term expectation is that
> the old build farm nodes will remain 95+% the same while the Jenkins
> configuration will change to integrate with GitHub. I am worried that if
> both of you modify things independently, it would be difficult to bring
> everything back under one roof.
>
>> If the former, I can start investing some time in that, removing
>> obsolete nodes and adding newer ones, etc.
>
> Removing obsolete/broken nodes is fine, I guess. See above regarding
> adding new ones.
>
> Thank you,
>
> Alex.

-- 
@mobile
Re: [squid-dev] Build farm updates?
On 07/23/2017 10:21 AM, Kinkie wrote:
> is it worth investing time in freshening up the current build farm
> setup, or is it preferable to abandon it altogether in favor of a
> newly built one?

I would be surprised if we should abandon the current build farm as a whole, but perhaps you have some reason to believe it should be done?

Please coordinate with Eduard on this. My short-term expectation is that the old build farm nodes will remain 95+% the same while the Jenkins configuration will change to integrate with GitHub. I am worried that if both of you modify things independently, it would be difficult to bring everything back under one roof.

> If the former, I can start investing some time in that, removing
> obsolete nodes and adding newer ones, etc.

Removing obsolete/broken nodes is fine, I guess. See above regarding adding new ones.

Thank you,

Alex.
Re: [squid-dev] Build farm updates?
My vote is for the former.

> On 24/07/2017, at 4:21 AM, Kinkie wrote:
>
> Hi all,
> is it worth investing time in freshening up the current build farm
> setup, or is it preferable to abandon it altogether in favor of a
> newly built one?
> If the former, I can start investing some time in that, removing
> obsolete nodes and adding newer ones, etc.
>
> --
> Francesco
[squid-dev] Build farm updates?
Hi all,

Is it worth investing time in freshening up the current build farm setup, or is it preferable to abandon it altogether in favor of a newly built one?

If the former, I can start investing some time in that, removing obsolete nodes and adding newer ones, etc.

-- 
Francesco
Re: [squid-dev] What should we do about these *wrong* wiki articles?
Well, I need to re-read these pages... I do understand that there are times this is needed, and the examples give good reasons for why and when to use each option. After I re-read these wiki sections, I will think about it again and reply. Then, if we decide that a wiki page needs to be written, or that the page order or structure needs to be reworked, I will try to offer a better version.

Thanks,
Eliezer

Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il

-----Original Message-----
From: Amos Jeffries [mailto:squ...@treenet.co.nz]
Sent: Sunday, July 23, 2017 02:21
To: Eliezer Croitoru; squid-dev@lists.squid-cache.org
Subject: Re: [squid-dev] What should we do about these *wrong* wiki articles?

On 23/07/17 09:22, Eliezer Croitoru wrote:
> As I understood the article, the DNAT is from another box, i.e. "the
> router", to the squid box.
> If I understood it wrong and didn't read properly, I will re-read them
> and see where I am wrong.

See the Details section notes. You are right that the cross-machine DNAT use-case no longer exists. We keep both in the wiki because they still meet other use-cases:

* REDIRECT copes best for machines and black-box situations where one never knows in advance what network the box will be plugged into, such as products sold as plug-and-play proxy caches, or VM images that get run up by the dozen and automatically assigned IPs, where it minimizes config delays. However, it always NATs the dst-IP to the machine's primary IP, so it is limited to the ~64K receiving socket numbers that one IP can provide. It also spends some CPU cycles looking that IP up on each new TCP connection.

* DNAT copes best for high-performance and security installations where explicit speed or control of the packets outweighs the amount of effort needed to configure it properly. It does not do any primary-IP lookup, so it is slightly faster than REDIRECT, and multiple DNAT rules can be added, one for each IP the machine has, avoiding the ~64K limit. BUT it requires the admin to know in advance exactly what the IPs of the proxy will be. And the IP assignment, iptables rules and squid.conf settings are locked together: if any one changes, they all need to. Lots of work to reconfigure any of it, even if automated. But also lots of certainty about what the packets are doing, for the security paranoid.

Those properties are generic, not just in relation to Squid.

Amos
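For concreteness, the two interception styles Amos contrasts might look like the rules below. This is a hedged sketch, not taken from the wiki pages under discussion: the interface name (eth0), the proxy's local IP (192.0.2.10) and the intercept port (3129) are all assumptions that would differ per deployment.

```shell
# REDIRECT style: NATs the destination to the machine's primary IP.
# Portable (no need to know the box's address in advance), but limited
# to that one IP's ~64K receiving sockets and costs a primary-IP lookup
# per new TCP connection.
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
    -j REDIRECT --to-ports 3129

# DNAT style: explicit target IP. Slightly faster and one such rule can
# be added per local IP, but the IP assignment, these rules and the
# matching squid.conf http_port line must all be kept in sync.
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
    -j DNAT --to-destination 192.0.2.10:3129
```

Either way, the corresponding Squid listening port must be configured for interception; the trade-off between the two rules is exactly the portability-versus-control distinction described above.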