Re: [squid-dev] Proposal: switch to always-build for some currently optional features

2022-09-21 Thread Amos Jeffries

On 20/09/22 01:28, Francesco Chemolli wrote:

Hi all,
    there are a bunch of features that are currently gated at compile 
time; among others, I see:

- adaptation (icap, ecap)
- authentication
- ident
- delay pools
- cache digests
- htcp
- wccp
- unlinkd

I'd like to propose that we switch to always-build them.



If you mean switching their build to default-enabled, sure. But there 
are often good reasons why each specific item is disabled by default today:


 * performance expensive logic
   delay pools, cache digests, adaptation

 * unavailable dependencies
  adaptation, auth sub-components

 * rarely necessary
  unlinkd, wccp, delay pools, htcp

 * buggy
  delay pools, wccp

Those reasons are also why we cannot simply remove the ./configure 
options for them (yet).
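For concreteness, keeping the ./configure options while flipping defaults would mean builders explicitly opting in or out per feature. A hedged sketch follows; exact flag names vary between Squid versions, so confirm each one against `./configure --help` for your release:

```shell
# Flag names below are from memory and vary between Squid versions;
# always confirm against ./configure --help before relying on them.
./configure \
    --enable-icap-client \
    --enable-ecap \
    --enable-delay-pools \
    --enable-cache-digests \
    --enable-htcp \
    --enable-wccp
```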




We would gain:
- code clarity


This proposal only has a very minor gain for code clarity. The worst of 
that problem is all the #if/#ifdef blocks looking for OS hacks/workarounds, 
and the unnecessary custom re-implementations still hanging around.




- ease of development


I do not think there will be any change regarding ease. Just a different 
way of setting up the testing.




- test coverage


Disagree. The default/min/max build test "layers" already build as many 
of these as can be tested.


Plus all the reasons Alex already stated.


- feature uniformity across builds



I agree with most of Alex's points on these.

In addition, on the security side there are some passive defense 
benefits from feature obscurity and avoidance of a mono-culture for 
Squid installations.





We would lose:
- slightly longer build time



Longer build time may not be an issue for users not building Squid 
often. But it would be compounding the already tough build farm situation.




- larger binaries

The latter should not be an issue anymore: even the most embedded of 
embedded systems Squid is likely to be used on has plenty of storage and 
memory. And the former should not be too big a deal.




It has been 4-5 years since I had any direct customers needing embedded 
Squid. AIUI the needs there are for software updates on hardware that 
is difficult to reach (eg satellites or remote geographic outposts).




HTH
Amos
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] RFC submodule repositories

2022-08-01 Thread Amos Jeffries

On 1/08/22 03:09, Alex Rousskov wrote:

On 7/31/22 00:29, Amos Jeffries wrote:
When PR #937 merges we will have the ability to shuffle old helpers 
into a separate repository that users can import as a submodule to 
build with their Squid as-needed.


In my experience, git submodules are a powerful feature that is rather 
difficult to use correctly. Some of submodule aspects do not feel 
"natural" to many humans. I am pretty sure that proper introduction of 
submodules will require a lot of our time and nerve cells. What will we 
optimize by introducing such a significant complication as git submodules?





( So every change to Squid has to be justified as an optimization now. 
Right... )


We "optimize" future code maintenance workload by gaining the ability to 
drop outdated helpers which are still required by a small group of users 
but which the core dev team no longer wants to actively develop.


Case in point being CERN, who still require our bundled LanManager 
helper(s) for some internal needs. That requires a lot of deprecated and 
rarely touched code being maintained for only one (albeit important) 
use case.
 That code could all be shuffled to a separate repository outside the 
official Squid release, but maintained by developers that support CERN 
needs.





What (if any) updates do we need to make to Anubis and other 
infrastructure to support git submodules?


That answer probably depends, in part, on what support guarantees we are 
going to issue for those submodules and on what our submodule update 
policy will be. Integrating two or more git repositories together is a 
complicated issue...




IMO we should maintain at least one helper officially as a submodule to 
ensure the git submodule mechanisms remain viable for distributors as a 
modern replacement for the old squid-2 way of building their custom helpers.
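The helper-as-submodule workflow being proposed could look roughly like this (the repository URL and helper path are hypothetical placeholders, not real Squid Project repositories):

```shell
# Hypothetical example: import an externally maintained helper
# into a Squid source tree as a git submodule.
git submodule add https://example.org/squid-helpers/lanman.git helpers/lanman

# Distributors then fetch the helper along with Squid itself:
git clone --recurse-submodules https://github.com/squid-cache/squid.git
# or, inside an existing checkout:
git submodule update --init --recursive
```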



Cheers
Amos


[squid-dev] RFC submodule repositories

2022-07-30 Thread Amos Jeffries
When PR #937 merges we will have the ability to shuffle old helpers into 
a separate repository that users can import as a submodule to build with 
their Squid as-needed.



What (if any) updates do we need to make to Anubis and other 
infrastructure to support git submodules?



Amos



Re: [squid-dev] Errors while building 5.6 on Ubuntu 22.04

2022-07-18 Thread Amos Jeffries
FWIW, the backport of OpenSSL 3.0 support to v5 did not apply completely 
cleanly, so it has not made v5 yet. I hope to have some time to work on it 
later this week; if not, it might miss this point release.


The patch in Debian is an earlier version of what eventually merged. 
Functionally identical - just lacking the style polish. It can be used 
if you need it for v5 before the backport completes.



Cheers
Amos


Re: [squid-dev] PR backlog

2022-06-07 Thread Amos Jeffries

On 7/06/22 05:19, Alex Rousskov wrote:

On 6/6/22 03:34, Francesco Chemolli wrote:

    we have quite a big backlog of open PRs 
(https://github.com/squid-cache/squid/pulls?q=is%3Apr+is%3Aopen). 
How about doing a 15-day sprint and clearing it, or at least trimming 
it significantly?


I am happy to participate in any way you find useful! Please let me know 
how I can help.


FWIW, I know of a handful of PRs that need my attention (e.g., #1067, 
#980, #694, #755, #736). I am already working on some of them (and will 
resume working on others ASAP).





All my PRs are now blocked behind the lack of #694, which breaks tests on Debian.

Once that merges I will be able to work on most of the other PRs. Though 
some are still blocked behind #937, so I will be prioritizing that.


Amos


Re: [squid-dev] CVE-2019-12522

2022-03-03 Thread Amos Jeffries

On 4/03/22 00:39, Eliezer Croitoru wrote:

I'm still trying to understand why it's described as "exploitable" ???
It's like saying: The Linux Kernel should not be a kernel and init(or
equivalent) should not run with uid 0 or 1.
Why nobody complains about cockpit being a root process??



This explains the _type_ of problem.



Most Squid installations are automatically protected against it by at 
least one of the OS or compiler systems. But some can still be vulnerable, 
as shown by jeriko.one.


Amos


Re: [squid-dev] CVE-2019-12522

2022-03-02 Thread Amos Jeffries

On 2/03/22 05:35, Adam Majer wrote:

Hi all,

There apparently was a CVE assigned some time ago but I cannot seem to 
find it being addressed.


https://gitlab.com/jeriko.one/security/-/blob/master/squid/CVEs/CVE-2019-12522.txt 



The crux of the problem is that privileges are not dropped and could be 
re-acquired. There is even a warning against running squid as root but 
if root is one function call away, it seems it's the same.


Any thoughts on this?




To quote myself:

"
We do not have an ETA on this issue. Risk is relatively low and several
features of Squid require the capability this allows in order to
reconfigure. So we will not be implementing the quick fix of fully
dropping root.
"

If anyone wants to work on it you can seek out any/all calls to 
enter_suid and see if they can be removed yet. Some may be able to go 
immediately, and some may need replacing with modern libcap capabilities.



HTH
Amos


Re: [squid-dev] v5.4 backports

2022-01-24 Thread Amos Jeffries

On 21/01/22 08:20, Alex Rousskov wrote:

On 1/18/22 5:31 AM, Amos Jeffries wrote:

The following changes accepted into v6 are also eligible for v5 but have
issues preventing me from scheduling them.


This one has conflicts I need some assistance resolving, so I will not be
doing the backport myself. If you are interested please open a PR
against the v5 branch for the working backport before Feb 1st.

  * Bug #5090: Must(!request->pinnedConnection()) violation (#930)
   squid-6-15bde30c33e47a72650ef17766719a5fc7abee4c


The above fix relies on a Bug #5055 fix (i.e. master/v6 commit 2b6b1bc
mentioned below as a special case).



Okay. I will try again for a later release.



  * Bug #5055: FATAL FwdState::noteDestinationsEnd exception: opening (#877)
   squid-6-2b6b1bcb8650095c99a1916f5964305484af7ef0


Waiting for a base commit SHA from you.



and; Fix FATAL ServiceRep::putConnection exception: theBusyConns > 0 (#939)
   squid-6-a8ac892bab446ac11f816edec53306256bad4de7


This fixes a bug in the Bug #5055 fix AFAICT, so waiting on the
master/v6 commit 2b6b1bc backport (i.e. the previous bullet).



FYI; As of right now the v5 HEAD should be fine to base the PR on.

Amos


Re: [squid-dev] RFC: Adding a new line to a regex

2022-01-21 Thread Amos Jeffries

On 22/01/22 08:36, Alex Rousskov wrote:

TLDR: I am adding solution #6 into the mix based on Amos email (#5 was
taken by Eduard). Amos needs to clarify why he thinks that Squid master
branch cannot accept STL-based regexes "now". After that, we can decide
whether #6 remains a viable candidate. Details below.


On 1/21/22 12:42 PM, Amos Jeffries wrote:

On 20/01/22 10:32, Alex Rousskov wrote:

We have a use case where a regex in squid.conf should contain/match
a new line [...] This email discusses the problem and proposes how
to add a new line (and other special characters) to regexes found
in squid.conf and such.




With the current mix of squid.conf parsers this RFC seems irrelevant to me.


I do not understand the relationship between "the current mix of
squid.conf parsers" and this RFC relevance. This RFC is relevant because
it is about a practical solution to a real problem facing real Squid admins.



Sentence #2 of the RFC explicitly states that admin needs are not 
relevant: "I do not know whether there are similar use cases with the 
existing squid.conf regex directives".

The same sentence delimits RFC scope as: "adding a _new_ directive that 
will need such support."


That means the syntax defining how the regex pattern is configured does 
not yet exist. It is not necessary for the developer to design their 
_new_ UI syntax in a way that exposes admin to this problem in the first 
place. Simply design the





Whether Squid has one parser or ten, good ones or bad ones, is relevant
to how the solution is implemented/integrated with Squid, of course, but
that is already a part of the analysis on this thread.



Very relevant. RFC cites "squid.conf preprocessor and parameter parser 
use/strip all new lines" as a problem.


I point out that this behaviour depends on *which* config parser is 
chosen to be used by the (again, _new_) directive. It should be an 
implementation detail for the dev, not a design consideration for this RFC.






The developer designing a new directive also writes the parse_*()
function that processes the config file line. All they have to do is
avoid using the parser functions which implicitly do the problematic
behaviour.


Concerns regarding the overall quality of Squid configuration syntax and
upgrade paths expand the reach of this problem far beyond a single new
directive, but let's assume, for the sake of the argument, that all we
care about is a new parsing function. Now we need to decide what syntax
that parsing function will use. This RFC is about that decision.



Nod.

I must state that I do not see much in the way of squid.conf syntax 
discussion in the RFC text. It seems to focus a lot on syntax inside the 
regex pattern.


IMO regex is such a complicated situation that we should avoid having 
special things inside or on top of its syntax. That is a recipe for 
admin pain.



...

There was a plan from 2014 (re-attempted by Christos 2016) to migrate
Squid from the GNURegex dependency to more flexible C++11 regex library
which supports many regex languages. With that plan the UI would only
need an option flag or pattern prefix to specify which language a
pattern uses.


I agree that one of the solutions worth considering is to use a regex
library that supports different regex syntax. So here is the
corresponding entry for solution based on C++ STL regex:

6. Use STL regex features that support \n and similar escape sequences
Pros: Supports much more than just advanced escape sequences!
Pros: The new syntax is easy to document by referencing library docs.


Pro: we do not have to write any part of pattern matching ourselves. 
Simpler config parser.


Pro: we do not have to maintain custom code supporting special 
behaviours in regex pattern configuration.


Pro: we do not have to provide additional user support for non-standard 
squid.conf patterns.


Pro: we do not have to waste brain cycles designing how to integrate 
syntax into regex patterns cleanly.




Cons: Requires serious changes to the internal regex support in Squid.


IIRC, the changes are not as serious as they may seem. The largest part is 
the squid.conf parser alteration to accept the proposed flag/prefix and 
patterns cleanly. Beyond that it is just a switch of container, which is 
easy (not trivial, just easy).




Cons: Miserable STL regex performance in some environments[1,2]?


IMO this is balanced by Squid's existing regex library being well known 
to have similar performance issues.




Cons: Converting old regexes requires (complex) automation.


I disagree that this is a problem.

GNU regex is the predecessor syntax behind all modern regex variants. We 
can retain GNUregex as the default pattern language and require a language 
flag/prefix for patterns needing modern features.




Cons: Requires dropping GCC v4.8 support.
Cons: Amos thinks Squid cannot support STL regex until 2024.


I am honoured that you consider my opinion to be of such importance.

But, seriously, the technica

Re: [squid-dev] RFC: Adding a new line to a regex

2022-01-21 Thread Amos Jeffries

On 20/01/22 10:32, Alex Rousskov wrote:

Hello,

 We have a use case where a regex in squid.conf should contain/match
a new line (i.e. ASCII LF). I do not know whether there are similar use
cases with the existing squid.conf regex directives, but that is not
important because we are adding a _new_ directive that will need such
support. This email discusses the problem and proposes how to add a new
line (and other special characters) to regexes found in squid.conf and such.



With the current mix of squid.conf parsers this RFC seems irrelevant to me.

The developer designing a new directive also writes the parse_*() 
function that processes the config file line. All they have to do is 
avoid using the parser functions which implicitly do the problematic 
behaviour.
 The fact that there is logic imposing this problem at all is a bug to 
be resolved. But that is something for a different RFC.





Programming languages usually have standard mechanisms for adding
special characters to strings from which regexes are compiled. We all
know that "a\nb" uses LF byte in the C++ string literal. Other bytes can
be added as well: https://en.cppreference.com/w/cpp/language/escape



There was a plan from 2014 (re-attempted by Christos 2016) to migrate 
Squid from the GNURegex dependency to more flexible C++11 regex library 
which supports many regex languages. With that plan the UI would only 
need an option flag or pattern prefix to specify which language a 
pattern uses.


That plan was put on hold due to the feature-incomplete GCC 4.8 versions 
distributed by CentOS 7 and RHEL, which still need to build Squid.


One Core Developer (you, Alex) has repeatedly expressed a strong opinion 
vetoing the addition/removal of features in Squid-6 while the affected 
platforms are still supported by a small set of "officially supported" 
vendors. RHEL and CentOS are in that set.



When combined, those two design limitations mean the C++11 regex library 
cannot be implemented in a Squid released prior to June 2024.




IMO that plan is still a good one for the long term. However you design 
your new directive's UI, please make it compatible with that.




Unfortunately, squid.conf syntax lacks a similar general mechanism.


This is not a property of squid.conf design choices. It is an artifact 
of the GNURegex language.

Until Squid gets a major upgrade to support other regex languages, we 
are stuck with these pattern limitations.


 In

most cases, it is possible to work around that limitation by entering
special symbols directly. However, that trick causes various headaches
and does not work for new lines at all because squid.conf preprocessor
and parameter parser use/strip all new lines; the code compiling the
regular expression will simply not see any.

In POSIX regex(7), the two-character \n escape sequence is referring to
the ASCII character 'n', not the new line/LF character, so entering \n
(two characters) into a squid.conf regex value will not work if one
wants to match ASCII LF.

There are many options for adding this functionality to regexes used in
_new_ squid.conf contexts (i.e. contexts aware of this enhancement).
Here is a fairly representative sample:

1a. Recognize just \n escape sequence in squid.conf regexes
Pros: Simple.
Cons: Converting old regexes[1] requires careful checking[2].
Cons: Cannot detect typos in escape sequences. \r is accepted.
Cons: Cannot address other, similar use cases (e.g., ASCII CR).

1b. Recognize all C escape sequences in squid.conf regexes
Pros: Can detect typos -- unsupported escape sequences.
Cons: Poor readability: Double-escaping of all for-regex backslashes!
Cons: Converting old regexes requires non-trivial automation.



As you mention, these \-escapes are a feature of the POSIX Regular 
Expression language.


Taking this step we will no longer be able to honestly say that Squid 
only supports GNU "regex" patterns. Open the floodgates and you will find 
a mountain of admins wanting the other POSIX features for one reason or 
another.


We would be better off accepting the long-ago planned migration to C++11 
regex than taking more half-measures like implementing \-escape patterns 
ourselves.





2a. Recognize %byte{n} logformat-like sequence in squid.conf regexes
Pros: Simple.
Cons: Converting old regexes[1] requires careful checking[3].
Cons: Cannot detect typos in logformat-like sequences.
Cons: Does not support other advanced use cases (e.g., %tr).

2b. Recognize %byte{n} and logformat sequences in squid.conf regexes
Pros: Can detect typos -- unsupported logformat sequences.
Cons: The need to escape % in regexes will surprise admins.
Cons: Converting old regexes requires (simple) automation.


3. Use composition to combine regexes and some special strings:
regex1 + "\n" + regex2
or
regex1 + %byte{10} + regex2
Pros: Old regexes can be safely used without any conversions.
Cons: Requires new, complex composition 

Re: [squid-dev] RFC: Adding a new line to a regex

2022-01-21 Thread Amos Jeffries

On 21/01/22 07:27, Eduard Bagdasaryan wrote:
I would concur with Alex that (4) is preferable: it does not break old 
configurations, re-uses existing mechanisms, and allows applying it only 
when/where required. I have one more option for your consideration: 
escaping with a backtick (e.g., `n) instead of a backslash. This 
approach is used, e.g., in PowerShell.


5a. Recognize just `n escape sequence in squid.conf regexes.

5b. Recognize all '`'-based escape sequences in squid.conf regexes.

Pros:  Easier upgrade: backtick is rare in regular expressions (compared 
to '%' or '/'), probably there is no need to convert old regexes at all.

Pros:  Simplicity: no double-escaping is required (as in (1b)).
Cons: Though it should be straightforward to specify common escape 
sequences, such as `n, `r or `t, we still need to devise a way of 
providing an arbitrary character (i.e., its code) in this way.




You are mixing up different features offered by several of the *many* 
different languages people call "regex".


Squid regex patterns are written in the GNU Regular Expression language. 
None of those commonly expected things are features of that ancient 
language.



Amos


[squid-dev] v5.4 backports

2022-01-18 Thread Amos Jeffries
The following changes accepted into v6 are also eligible for v5 but have 
issues preventing me from scheduling them.



This one has conflicts I need some assistance resolving, so I will not be 
doing the backport myself. If you are interested please open a PR 
against the v5 branch for the working backport before Feb 1st.


 * Bug #5090: Must(!request->pinnedConnection()) violation (#930)
  squid-6-15bde30c33e47a72650ef17766719a5fc7abee4c


The following just need Bugzilla IDs. If you are interested in getting a 
backport please open the bug report with useful details (i.e. document the 
user-visible behaviour) and then ping me.


 * Properly track (and mark) truncated store entries (#909)
   squid-6-ba3fe8d9bc8d35c4b04cecf30cfc7288c57e685c

 * Fix reconfiguration leaking tls-cert=... memory (#911)
   squid-6-b05c195415169b684b6037f306feead45ee9de4e

 * Preserve configured order of intermediate CA certificate chain (#956)
   squid-6-166fb918211b76a0e79eb07967f4d092f74ea18d


This is going to be a special case. Alex: I responded to you a few hours 
ago in the squid-users thread "squid 5.3 frequent crash" with a plan 
that I am okay with for backporting.



 * Bug #5055: FATAL FwdState::noteDestinationsEnd exception: opening (#877)
  squid-6-2b6b1bcb8650095c99a1916f5964305484af7ef0

and; Fix FATAL ServiceRep::putConnection exception: theBusyConns > 0 (#939)
  squid-6-a8ac892bab446ac11f816edec53306256bad4de7


Cheers
Amos



Re: [squid-dev] What os/cpu platforms do we want to target as a project?

2021-12-26 Thread Amos Jeffries

On 27/12/21 10:11, Alex Rousskov wrote:

On 12/26/21 10:30 AM, Francesco Chemolli wrote:

On Sun, Dec 5, 2021 at 10:05 PM Alex Rousskov wrote:

If we manage to and agree on what platforms to "support" and on removing
code dedicated to unsupported platforms, great! If we fail, I would like
to propose an alternative scheme for the removal of platform-specific
(or any other) code from the official master branch:

A PR dedicated to code removal can be merged if it satisfies the
following criteria:

1. Two positive votes from core developers.
2. No negative votes.
3. Voting lasted for 30+ calendar days.
4. The removal PR is announced in a PR-dedicated squid-users post.
This announcement resets the 30+ day timer.



How about instead something along the lines of:
1. announce on squid-users about intent to remove support for a platform
2. wait for objections for 15 days
3. PR following standard merge procedure



My proposal is trying to solve the specific problem that recent PRs
(attempting to remove some code) have faced: The reviewer asked _why_ we
are removing code, and the author closed the PR instead of developing
consensus regarding the correct answer to that question[1]. My proposal
establishes a special track for code removal PRs so that they do not
have to answer the (otherwise standard) "why" question.



[1] is about removal of a trivial piece of code that does no harm being 
left in place.


As I stated in a followup, what I regard as reasonable justification *was* 
given up front. If that was not enough for agreement to merge, it was not 
worth jumping through hoops in the first place. Let alone creating a 
whole new bureaucratic policy and management process.


As I said in other discussions, we already have a deprecation+removal 
process that works fine and matches the industry standard practices when 
a code change is even suspected to matter to anyone at all. That process 
gives far more than just 30 days to inform any interested community 
members and does not limit the notification channels.


If anyone picks up the code removal in [1] again they should follow that 
process since that code is now obviously of some importance to reviewer 
Alex.



Amos


Re: [squid-dev] Squid does not accept WCCP of Cisco router since CVE 2021-28116

2021-12-06 Thread Amos Jeffries

On 6/12/21 12:11, Andrej Mikus wrote:

Hi,

I would like to find some information about wccp servers (routers,
firewalls, etc) that are officially supported and therefore tested for
compatibility. I thought there would be this kind of page published in
squid wiki but failed to locate one.

Since the recent update squid does not accept wccp packets sent by Cisco
IOS 15.8(3)M2 claiming there is duplicate security definition.

Is there any way to get in touch with the developer responsible for the
security patch and request his comments? I do not have access to other
Cisco hardware, and I would like to know if the update was confirmed
working for example against a CSR1000v.

I have first reported the issue to Ubuntu since I am running 18.04, but
today confirmed that recent versions of squid fail as well. Prior
creating a new entry at https://bugs.squid-cache.org/ I would appreciate
your guidance.

Regards
Andrej Mikus



Hi Andrej,

 Alex has summarized the state of things pretty accurately. Since the CVE 
is already public please feel free to open a bug report on our Bugzilla. 
That will help with getting the fix backported to official releases.


If you are able to do the testing I am happy to try and fix it for you.

Amos


Re: [squid-dev] RFC: Categorize level-0/1 messages

2021-12-05 Thread Amos Jeffries

On 21/10/21 16:16, Alex Rousskov wrote:

On 10/20/21 3:14 PM, Amos Jeffries wrote:

On 21/10/21 4:22 am, Alex Rousskov wrote:

To facilitate automatic monitoring of Squid cache.logs, I suggest to
adjust Squid code to divide all level-0/1 messages into two major
categories -- "problem messages" and "status messages"[0]:




We already have a published categorization design which (when/if used)
solves the problem(s) you are describing. Unfortunately that design has
not been followed by all authors and conversion of old code to it has
not been done.



Please focus your project on making Squid actually use the system of
debugs line labels. The labels are documented at:
   https://wiki.squid-cache.org/SquidFaq/SquidLogs#Squid_Error_Messages


AFAICT, the partial classification in that wiki table is an opinion on
how things could be designed, and that opinion does not reflect Project
consensus.


The wiki was written from observation of how the message labels are/were 
being used in the code. As such it reflects the de facto consensus of 
everyone ever authoring code that used one of the labels.



NP: The "core team" or "dev team" are not "The Project". There are a 
large number of developers contributing to each version of Squid whose 
only voice in any of the style/design decisions is the existing Squid code.




FWIW, I cannot use that wiki table for labeling messages, but
I do not want to hijack this RFC thread for that table review.



Your text below contradicts the "cannot" statement by describing how 
the two definitions fit together and offering to use the wiki table labels 
for the problem category.


I assume the below text is your definition of "cannot"? If not, then 
please explain why not.




Fortunately, there are many similarities between the wiki table and this
RFC that we can and should capitalize on instead:

* While the wiki table is silent about the majority of existing
cache.log messages, most of the messages it is silent about probably
belong to the "status messages" category proposed by this RFC.


Exactly so.


This
assumption gives a usable match between the wiki table and the RFC for
about half of the existing level-0/1 cache.log messages. Great!

* The wiki table talks about FATAL, ERROR, and WARNING messages. These
labels match the RFC "problem messages" category. This match covers all
of the remaining cache.log messages except for 10 debugs() detailed
below. Thus, so far, there is a usable match on nearly all current
level-0/1 messages. Excellent!


Thus my request that you use the wiki definitions to categorize the 
unlabeled and fix any detected labeling mistakes.




* The wiki table also uses three "SECURITY ..." labels. The RFC does not
recognize those labels as special. I find their definitions in the wiki
table unusable/impractical, and you naturally think otherwise, but the
situation is not as bad as it may seem at the first glance:

- "SECURITY ERROR" is used once to report a coding _bug_. That single
use case does not match the wiki table SECURITY ERROR description. We
should be able to rephrase that single message so that it does not
contradict the wiki table and the RFC.

- "SECURITY ALERT" is used 6 times. Most or all of those cases are a
poor match for the SECURITY ALERT description in the wiki table IMHO. I
hope we can find a way to rephrase those 6 cases to avoid conflicts.

- "SECURITY NOTICE" is used 3 times. Two of those use cases can be
simply removed by removing the long-deprecated and increasingly poorly
supported SslBump features. I do not see why we should keep the third
message/feature, but if it must be kept, we may be able to rephrase it.

If we cannot reach an agreement regarding these 10 special messages, we
can leave them as is for now, and come back to them when we find a way
to agree on how/whether to assign additional labels to some messages.



AFAICT, they were added as equivalent to ERROR/WARNING in CVE fixes, or 
to highlight a known security vulnerability being opened by admin settings.


I am okay with them remaining untouched by a PR submission cleaning 
level 0/1 messages. Though they are there to use if any author finds a 
message that suitably meets their definition.





Thus, there are no significant conflicts between the RFC and the table!
We strongly disagree how labels should be defined,


Recall that the wiki is describing the observed pattern of label usage 
by all Squid contributors. That means any significant conflict is 
between your choice of definition and "The Project" as a whole. Minor 
conflicts may be just differences in my wording and yours on the 
observed pattern.




but I do not think we
have to agree on those details to make progress here.


The options for any author are to comply with the existing 
consensus/pattern or to get agreement on changing the definitions.


Options like c

Re: [squid-dev] What os/cpu platforms do we want to target as a project?

2021-12-05 Thread Amos Jeffries

On 5/12/21 22:44, Francesco Chemolli wrote:

Hi all,
   continuing the conversation from
https://github.com/squid-cache/squid/pull/942#issuecomment-986055422
to a bigger forum

The discussion started out of a number of PRs meant to remove explicit
support for obsolete platforms such as OSF/1, NeXT or old versions of
Solaris.

In that thread I put forward a list of platforms that in my opinion we
should target as a project, and while of course we should not
explicitly prevent other platforms from being built on, we should also
not disperse our time supporting complexity for the sake of other
os/cpu combos.

The rationale is that we should focus our attention as a project where
the majority of our userbase is, where users mean "people who build
and run squid".



I do not accept that a few hacks for old OS are causing pain to core 
developers simply by bitrotting in untouched code.


Removing things because the majority of users are not using it would 
mean we become a Linux-only project and thus actively push people out of 
the community unnecessarily. There are active benefits gained in the 
form of better code, bug detection, and community inclusiveness from 
maintaining portable code.



FTR; The criteria informing my OS removal decisions so far have been:

 a) Squid-3 has support for any OS which has a chance at providing C++ 
tools, inherited from the Squid-2 support for any ancient OS that had 
someone willing to provide patches.


 b) Squid-4+ requires a C++11 toolchain. This eliminates a lot of very 
old OS code, especially for OS which have no vendor/distro and thus 
little chance of providing a modern toolchain.


 - At times the vendor/distro supporting that OS provides 
toolchains/support in ways that no longer need our hacks (eg PR 944: 
m88k supported as normal NetBSD/OpenBSD builds; PR 943 likewise as QNX 
or MacOS builds).


 - At times our hack is for an OS version whose vendor(s) provide 
(and/or require) admins to use a newer version of Squid. These are our 
usual distro hack removals.


 c) OS which are outdated (a newer version is supported by the vendors) 
need to retain a reasonable ability for any admin wanting to manually 
build their Squid to do so. If we know of any admin wanting the code to 
remain, it can stay unless we have a clear reason to remove it. (eg PR 942)



NP: now that Squid has formally dated releases I have a timeline for 
support forecasts. (eg PR 942 vendor LTS schedule vs Squid-6 release)





We have no way of knowing who is installing squid, we don't have
telemetry, so we will need to do a bit of guesswork, based on the
assumption that people who deploy squid are rational; as a consequence
of that, we can assume that they will go for the solution that gives
them most bang for buck ("buck" being money and time).

What are the main use cases for squid users? (again, guesswork, please
feel free to add more)
1. forward proxy in enterprise or ISP
2. reverse proxy in datacenter
3. forward proxy in small or embedded environments




IME, (1) and (2) have switched places in the past 5 years for actual 
usage. Though (1) people are more vocal with SSL-Bump problems.



 4. protocol translation gateways

Historically FTP and Gopher to HTTP; in recent years, IPv6/v4 
conversion. Plus the latest browsers dropping FTP support has brought 
FTP gateway translation back into popularity.



We should be in the thick of HTTP 1/2/3 conversion, but blockers on that 
work have completely eliminated Squid from that section of the market. 
People wanting that go to Haproxy instead.





What are the Os/CPU combos that we can expect to find in these environments?


Ubuntu is a large chunk of user base and that alone requires a large 
range of architectures (see https://buildd.debian.org/) to get through 
Debian. That includes BSD and microkernel builds, not just Linux.


So, IMO specific machine architectures are not much concern to "us". The 
toolchains and vendors take care of that part so long as we provide 
half-decent code.




x64 is likely to be found in all of them, and the OSes most likely to
be used are all sorts of Linux, FreeBSD, OpenBSD, Windows.
For the third use case (and possibly more and more of the second in
upcoming years), we can consider arm64, arm32 and possibly MIPS with
Linux. There might be a niche of NetBSD users here, hard to tell.

By now it makes no economic sense to run Squid on large Unix boxen.


Worse decisions have been sighted from management levels and not every 
admin has a choice. I don't think this is a strong enough argument for 
it to be relevant to this discussion.






What do we get as a project out of this?
- Mainly code simplification. Fewer portability subcases make for code
that is easier to understand and modify


When it comes to portability the outcome is rather ironic. The lack of 
special cases means less pressure to have a clear conceptual design, 
with abstraction/boundaries between actions (modularity). So a large 
codebase like Squid can become more complex rather than simpler.


It is more 

Re: [squid-dev] request for change handling hostStrictVerify

2021-11-01 Thread Amos Jeffries

On 1/11/21 20:59, kk wrote:


On Saturday, October 30, 2021 01:14 GMT, Alex Rousskov wrote:

On 10/29/21 8:37 PM, Amos Jeffries wrote:
> On 30/10/21 11:09, Alex Rousskov wrote:
>> On 10/26/21 5:46 PM, kk wrote:
>>
>>> - Squid enforces the Client to use SNI
>>> - Squid lookup IP for SNI (DNS resolution).
>>> - Squid forces the client to go to the resolved IP
>>



 >then malicious applets will escape browser IP-based protections.
The Browser should perform IP-based protection at the browser (client) 
level, and it should therefore not traverse Squid.


Your suggestion of making Squid "forces the client to go to the resolved 
IP" bypasses any protections the Browser might do.


This would make the CVE-2009-0801 situation happen all over again. Just 
with SNI as the bypass method instead of Host header.


Amos
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] request for change handling hostStrictVerify

2021-10-29 Thread Amos Jeffries

On 30/10/21 11:09, Alex Rousskov wrote:

On 10/26/21 5:46 PM, k...@sudo-i.net wrote:


- Squid enforces the Client to use SNI
- Squid lookup IP for SNI (DNS resolution).
- Squid forces the client to go to the resolved IP


AFAICT, the above strategy is in conflict with the "SECURITY NOTE"
paragraph in host_verify_strict documentation: If Squid strays from the
intended IP using client-supplied destination info, then malicious
applets will escape browser IP-based protections. Also, SNI obfuscation
or encryption may make this strategy ineffective or short-lived.

AFAICT, in the majority of deployments, the mismatch between the
intended IP address and the SNI/Host header can be correctly handled
automatically and without creating serious problems for the user. Squid
already does the right thing in some cases. Somebody should carefully
expand that coverage to intercepted traffic. Frankly, I am somewhat
surprised nobody has done that yet given the number of complaints!



IIRC the "right thing" as defined by TLS for SNI verification is that 
it be the same as the host/domain name from the wrapper protocol (i.e. 
the Host header / URL domain from HTTPS messages). Since Squid uses the 
SNI at step2 as the Host value, it already gets checked against the 
intercepted IP.


Amos


Re: [squid-dev] RFC: Categorize level-0/1 messages

2021-10-20 Thread Amos Jeffries

On 21/10/21 4:22 am, Alex Rousskov wrote:

Hello,

 Nobody likes to be awakened at night by an urgent call from NOC about
some boring Squid cache.log message the NOC folks have not seen before
(or to miss a critical message that was ignored by the monitoring system).
To facilitate automatic monitoring of Squid cache.logs, I suggest
adjusting Squid code to divide all level-0/1 messages into two major
categories -- "problem messages" and "status messages"[0]:



We already have a published categorization design which (when/if used) 
solves the problem(s) you are describing. Unfortunately that design has 
not been followed by all authors and conversion of old code to it has 
not been done.


Please focus your project on making Squid actually use the system of 
debugs line labels. The labels are documented at:

  


What we do not have in that design is clarity on which labels are shown 
at what level. IMO they should be:


 * DBG_CRITICAL(0) - admins *need* to know this even if they do not 
think they want to.

  - FATAL
  - SECURITY ALERT
  - ERROR which were mislabeled and should be FATAL

 * DBG_IMPORTANT(1) - some admins want to know these, though not mandatory.
  - ERROR
  - SECURITY ERROR
  - SECURITY WARNING

 * level-2 - status, troubleshooting etc.
  - WARNING admin cannot do anything about
  - SECURITY NOTICE (these are for troubleshooting advice)

 * level-3+ - other




There are also "squid -k parse" messages
that are easy to find automatically if somebody wants to classify them
properly.


Those are level 1-2 messages that become mandatory to display on 
startup/reconfigure.




I have one worry about you taking this on right now. PR 574 has not been 
resolved and merged yet, but many of the debugs() messages you are going 
to be touching in here should be converted to thrown exceptions - which 
ones and what exception type is used has some dependency on how that PR 
turns out.



Amos


Re: [squid-dev] Incoming breaking changes to OpenSSL API

2021-09-20 Thread Amos Jeffries

On 20/09/21 7:16 pm, Francesco Chemolli wrote:

Hi all,
   Fedora Rawhide has upgraded openssl to version 3, and the results
can be seen at
https://build.squid-cache.org/job/anybranch-arm64-matrix/COMPILER=gcc,OS=fedora-rawhide,label=arm64/10/console

For example:

In file included from ../../../../src/security/Session.h:15,
  from ../../../../src/security/forward.h:15,
  from ../../../../src/SquidConfig.h:26,
  from ../../../../src/mem/old_api.cc:24:
../../../../src/security/forward.h: In function ‘void
Security::DH_free_cpp(DH*)’:
../../../../src/security/LockingPointer.h:34:21: error: ‘void
DH_free(DH*)’ is deprecated: Since OpenSSL 3.0
[-Werror=deprecated-declarations]
34 | function(a); \






Amos


Re: [squid-dev] Squid 4.13: too much memory used for ACL url_regex when big list file used

2021-08-16 Thread Amos Jeffries

On 17/08/21 5:45 am, Meridoff wrote:

Hello, I have the simplest squid config with such an acl:

acl a1 url_regex "/tmp/urls.txt"

In /tmp/urls.txt there are about 220 000 URL regexps, most of them of 
this form (examples):

^(https?|ftp)://([a-z0-9.-]+\.)?nicebox\.pro(/.*)?$
OR
^(https?|ftp)://order-yudobashi-com\.363q1\.bar/

There is a lot of memory used by Squid with such a configuration: about 
2GB. Without this acl line: about 30MB used.


Are those numbers without traffic going through?
ie. just Squid loading the configuration then does nothing.



So approximately 10KB per regexp. Is that normal? I think it is too big.



regex rules get aggregated into longer compound lines, then compiled 
into a binary form for the regex library to handle. It is not easy to 
tell how the compiled form will compare in size to the original strings.
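As a quick way to sanity-check what such a pattern matches outside of Squid (illustrative only; grep -E speaks POSIX ERE, much like the regex library Squid links against):

```shell
# Test one pattern from the list above against a few sample URLs
pattern='^(https?|ftp)://([a-z0-9.-]+\.)?nicebox\.pro(/.*)?$'
printf '%s\n' 'https://www.nicebox.pro/x' 'https://nicebox.pro' 'https://evil.example/' \
  | grep -E "$pattern"
# only the first two URLs match
```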



I think that such big consumption is because of the regexps; for simple 
strings (for example, domains or IPs) consumption will be less.




Sure. "Simple strings" will be not much larger than the size of the file 
input - though there is some extra memory used for indexing etc.



How can I decrease memory consumption for such big regexp lists in 
ACLs? Maybe some fix in Squid, or some fix in the regexp lists?




First thing to do is figure out where the memory is being consumed.

This command:
 squidclient mgr:mem

Will produce a spreadsheet in TSV format of the memory allocations your 
Squid has. Look through that for the big memory spenders, with an eye on 
the regex one(s) you are suspicious of.
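Since the report is plain TSV, ordinary shell tools can rank the pools. A minimal sketch, using a fabricated report because the real `mgr:mem` column layout varies by Squid version:

```shell
# Illustrative only: a real report comes from `squidclient mgr:mem`.
# Fake a tiny TSV report here just to show the kind of sorting that
# surfaces the big memory spenders.
printf 'Pool\tObj Size\tAllocated (KB)\n'  > /tmp/mem.tsv
printf 'acl_regex\t40\t2048000\n'         >> /tmp/mem.tsv
printf 'Short Strings\t36\t1024\n'        >> /tmp/mem.tsv
printf 'cbdata\t80\t512\n'                >> /tmp/mem.tsv

# Sort by the allocated-KB column, biggest first (the header sorts last)
TAB=$(printf '\t')
sort -t"$TAB" -k3,3 -rn /tmp/mem.tsv | head -5
```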



Amos


Re: [squid-dev] Coding Style updates

2021-08-15 Thread Amos Jeffries

On 15/08/21 3:44 am, Alex Rousskov wrote:

On 8/12/21 8:31 PM, Amos Jeffries wrote:


I am aware that Factory ... prefers the one-line style.


Factory does not prefer the one-line style.



The existence of such a style requirement on Factory developers, and 
thus the need for Squid code to match it for ease of future bug fixing, 
was given to me as a reason for ICAP and eCAP feature code staying in 
the Factory-supplied one-line format despite the remainder of Squid code 
back then using two-line.



So, between your two responses I gather that there will be no push-back 
on a PR adding enforcement of two-line function/method definitions by 
astyle 3.1.







If we don't have agreement on a change I will
implement enforcement of the existing style policy.


I cannot find any existing/official rules regarding single- or
multi-line function definitions in [1]. Where are they currently stated?

[1] https://wiki.squid-cache.org/SquidCodingGuidelines



It appears to be one of the policy rules not copied over to that page 
from the Squid-2 page.


 "Follow the coding style of the rest of the code."

... the bulk of Squid code uses two-line. Only the ICAP,eCAP, SSL-Bump 
code received in large PRs from Factory or third-party imported 
libraries (also large imports) use one-line.



Amos


Re: [squid-dev] Coding Style updates

2021-08-12 Thread Amos Jeffries

On 13/08/21 4:28 am, Alex Rousskov wrote:

On 8/12/21 12:42 AM, Amos Jeffries wrote:


1) return type on separate line from function definition.

Current style requirement:

   template<...>
   void
   foo(...)
   {
     ...
   }

AFAIK, this is based on GNU project style preferences from the far past
when Squid had a tight relationship with GNU. Today we have a conflict
between Factory code coming in with the alternative same-line style
and/or various developers being confused by the mix of code they are
touching.


AFAIK, Factory developers try to follow the official style (while
keeping the old code consistent), just like all the other developers
(should) do. They make mistakes just like all other developers do. It is
unfortunate that you felt it is necessary to single out a group of
developers in a negative way (that is completely irrelevant to your
actual proposal).



I don't mean that as a bad thing. Just that I am aware that Factory is a 
large contributor who prefers the one-line style.






IMO; it is easier to work with the one-line style when command-line
editing, and irrelevant when using more advanced tools.


FWIW, I do not find it easier to use one-line style when using
command-line tools. In fact, the opposite is probably true in my experience.



Hmm. Okay.




As such I propose converting Squid to the same-line style:

   template<...>
   void foo(...)
   {
     ...
   }


Opinions?


One-line style increases horizontal wrapping, especially when the return
type is complex/long and there are additional markers like "static",
"inline", and "[[ noreturn ]]".

One line approach itself is inconsistent because template<> is on the
other line(s).

Searching for a function name with unknown-to-searcher or complex return
type becomes a bit more difficult when using "git log -L" and similar tools.

In many cases, function parameters should be on multiple lines as well.
Their alignment would look worse if the function name does not start the
line.

Most function definitions will have "auto" return type after we upgrade
to C++14. Whether that makes one-line style more or less attractive is
debatable, but it is an important factor in delaying related changes.

Eventually, we may be able to automatically remove explicit return types
using one of the clang-tidy tools, but such a tool does not exist yet
(modernize-use-trailing-return-type comes close but is not it).

In summary, I recommend delaying this decision until C++14. However, if
others insist on changing the format and changing it now, then I will
not block these changes, assuming newer astyle produces reasonable results.


Can new astyle support multiline formatting? If not, _that_ could be a
strong argument for changing the style.



It can support both now. If we don't have agreement on a change I will 
implement enforcement of the existing style policy.






2) braces on one-liner conditional blocks

Current code style is a bit confused. We have it as a style to use, with
exceptions for some old macros etc where the lack of braces causes
compile issues or bugs.

Personally I prefer the non-braced style. But it seems far safer to
automate the always-brace style and not have those special exceptions.

Opinions?


I also prefer the non-braced style for simple expressions.

Perhaps there is a way to automatically wrap complex ones, for some
reasonable definition of "complex"?

Alex.


[squid-dev] Coding Style updates

2021-08-11 Thread Amos Jeffries

Hi all,

 Now that we have astyle 3.1 for style enforcement we can take 
advantage of it to perform a few code style changes that older versions 
could not.


Before I do any work testing that they work, I'd like to review the 
relevant details of our style guidelines and see if we actually want to 
keep each requirement.



1) return type on separate line from function definition.

Current style requirement:

  template<...>
  void
  foo(...)
  {
...
  }

AFAIK, this is based on GNU project style preferences from the far past 
when Squid had a tight relationship with GNU. Today we have a conflict 
between Factory code coming in with the alternative same-line style 
and/or various developers being confused by the mix of code they are 
touching.


IMO; it is easier to work with the one-line style when command-line 
editing, and irrelevant when using more advanced tools.


As such I propose converting Squid to the same-line style:

  template<...>
  void foo(...)
  {
...
  }


Opinions?

Any reasons I missed for keeping this?



2) braces on one-liner conditional blocks

Current code style is a bit confused. We have it as a style to use, with 
exceptions for some old macros etc where the lack of braces causes 
compile issues or bugs.


Personally I prefer the non-braced style. But it seems far safer to 
automate the always-brace style and not have those special exceptions.


Opinions?





Amos
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] Compilling squid

2021-07-23 Thread Amos Jeffries

On 23/07/21 6:00 pm, phenom252525 wrote:
Hello again, I wrote to you not long ago. I have a question: since I am 
a novice Linux user, I would like to learn how to compile and install 
the latest Squid from source, for example Squid 4.15. I have Ubuntu 
Server 18.04.05 installed with the latest updates. The Squid in its 
repositories is 3.5.27. If you can tell us step by step how to



The Ubuntu instructions are at 
.



find out 
what dependencies are needed for the new version of squid,


Install the Ubuntu Squid integration scripts and packages:
  apt install squid

Install the normal Ubuntu Squid build dependencies:
  apt build-dep squid

For OpenSSL support install the libssl-dev library package as well. It 
is not one of the normal build-dep set.



Download the Squid sources you want to build.

Build those Squid sources the same as any other piece of auto-tools 
based software:

  ./configure
  make check && make install

 Use the Ubuntu-specific parameters for the ./configure command which 
are listed in the wiki page. Plus any others you want to use (eg 
--with-openssl).
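Putting those steps together, an illustrative session might look like this (the version number and the --with-openssl flag are examples only; take the real ./configure parameters from the wiki page):

```shell
sudo apt install squid          # Ubuntu integration scripts and packages
sudo apt build-dep squid        # normal Squid build dependencies
sudo apt install libssl-dev     # only needed for --with-openssl

tar xzf squid-4.15.tar.gz
cd squid-4.15
./configure --with-openssl      # plus the wiki's Ubuntu-specific parameters
make check && sudo make install
```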




how to find 
out if the package has been built successfully.


The build process will tell you if there were problems.


(You need the proxy 
server to be transparent to browsers, that is, so that squid would 
generate the certificate itself).


"transparent to browsers" is not possible when TLS is involved.

The whole point of TLS is that it alerts clients (Browser etc) if/when a 
proxy touches any part of the crypto. *Especially* the TLS certificates 
or transferred data.




I read many articles, tried to build a 
package on them, but the dependencies change and it is not clear where 
to look and what to look for. The official squid site is not very clear 
in the documentation. If you can help, I will be very grateful.


Your project is really very good, but it takes a titanic effort for a 
beginner to understand it.




The Squid-users mailing list is the place to go for help using Squid.

This mailing list is for discussions about the code and people wanting 
to do some coding on Squid.



Amos


Re: [squid-dev] Strategy about build farm nodes

2021-05-16 Thread Amos Jeffries

On 4/05/21 2:29 am, Alex Rousskov wrote:

On 5/3/21 12:41 AM, Francesco Chemolli wrote:

- we want our QA environment to match what users will use. For this
reason, it is not sensible that we just stop upgrading our QA nodes,


I see flaws in reasoning, but I do agree with the conclusion -- yes, we
should upgrade QA nodes. Nobody has proposed a ban on upgrades AFAICT!

The principles I have proposed allow upgrades that do not violate key
invariants. For example, if a proposed upgrade would break master, then
master has to be changed _before_ that upgrade actually happens, not
after. Upgrades must not break master.


So ... a node is added/upgraded. It runs and builds master fine. Then, 
once added to the matrices, some of the PRs start failing.


*THAT* is the situation I see happening recently. Master itself working 
fine and "huge amounts of pain, the sky is falling" complaints from a 
couple of people.


Sky is not falling. Master is no more nor less broken and buggy than it 
was before sysadmin touched Jenkins.


The PR itself is no more, nor less, "broken" than it would be if for 
example - it was only tested on Linux nodes and fails to compile on 
Windows. As the case for master *right now* happens to be.





What this means in terms of sysadmin steps for doing upgrades is up to
you. You are doing the hard work here, so you can optimize it the way
that works best for _you_. If really necessary, I would not even object
to trial upgrades (that may break master for an hour or two) as long as
you monitor the results and undo the breaking changes quickly and
proactively (without relying on my pleas to fix Jenkins to detect
breakages). I do not know what is feasible and what the best options
are, but, again, it is up to _you_ how to optimize this (while observing
the invariants).



Uhm. Respectfully, from my perspective the above paragraph conflicts 
directly with actions taken.


From what I can tell kinkie (as sysadmin) *has* been making a new node 
and testing it first. Not just against master but the main branches and 
most active PRs before adding it for the *post-merge* matrix testing 
snapshot production.


  But still threads like this one with complaints appear.



I understand there is some specific pain you have encountered to trigger 
the complaint. Can we get down to documenting as exactly as possible 
what the particular pain was?


 Much of the processes we are discussing are scripted automation, not 
human processing mistakes. Handling such pain points as bugs in the 
bugzilla "Project" section would be best. Re-designing the entire system 
policy just moves us all to another set of unknown bugs when the scripts 
are re-coded to meet that policy.






- I believe we should define four tiers of runtime environments, and
reflect these in our test setup:



  1. current and stable (e.g. ubuntu-latest-lts).
  2. current (e.g. fedora 34)
  3. bleeding edge
  4. everything else - this includes freebsd and openbsd


I doubt this classification is important to anybody _outside_ this
discussion, so I am OK with whatever classification you propose to
satisfy your internal needs.



IIRC this is the 5th iteration of ground-up redesign for this wheel.

Test designs that do not fit into our merge and release process sequence 
have proven time and again to be broken and painful to Alex when they 
operate as-designed. For the rest of us it is this constant re-build of 
automation which is the painful part.



A. dev pre-PR testing
   - random individual OS.
   - matrix of everything (anybranch-*-matrix)

B. PR submission testing
   - which OS for master (5-pr-test) ?
   - which OS for beta (5-pr-test) ?
   - which OS for stable (5-pr-test) ?

Are all of those sets the same identical OS+compilers? no.
Why are they forced to be the same matrix test?
  IIRC, policy forced on sysadmin with previous pain complaints.

Are we getting painful experiences from this?
  Yes. Lack of branch-specific testing before D on beta and stable 
causes those branches to break at the last minute before releases a lot 
more often than master, adding random days/weeks to each scheduled release.



C. merge testing
   - which OS for master (5-pr-auto) ?
   - which OS for beta (5-pr-auto) ?
   - which OS for stable (5-pr-auto) ?
 NP: maintainer does manual override on beta/stable merges.

Are all of those sets the same identical OS+compilers? no.
  Why are they forced to be the same matrix test? Anubis

Are we getting painful experiences from this? yes. see (B).


D. pre-release testing (snapshots + formal)
   - which OS for master (trunk-matrix) ?
   - which OS for beta (5-matrix) ?
   - which OS for stable (4-matrix) ?

Are all of those sets the same identical OS+compilers? no.
Are we forcing them to use the same matrix test? no.
Are we getting painful experiences from this? maybe.
  Most loud complaints have been about "breaking master" which is the 
most volatile branch testing on the most volatile OS.




FTR: the reason all those 

Re: [squid-dev] squid-5.0.5-20210223-r4af19cc24 difference in behaviors between openbsd and linux

2021-03-29 Thread Amos Jeffries

On 29/03/21 6:16 am, Eliezer Croitoru wrote:

Hey Robert,

I am not sure I understood what is the meaning of the description:
openbsd: Requiring client certificates.
linux: Not requiring any client certificates



@Eliezer:
  They are startup messages Squid prints in cache.log when a TLS server 
context is initialized.





-Original Message-
From: Robert Smith
Sent: Sunday, March 28, 2021 7:27 PM

Dear Squid-Dev list:

I could use some help on this one:


I have a build environment that is identical on linux, openbsd, and macosx

In this scenario, I am developing under:

Ubuntu 18.04 - All patches and updates applied as of 3/24
OpenBSD 6.8 - All patches and updates applied as of 3/24


I will note that I am really only using the libc from each system whereas every 
other component dependencies (which are not many! Good job squid team!) are a 
part of my build system.

When building squid with the exact same tool chain and library stack, with the 
same configure options, I am seeing a difference in behavior on the two 
platforms:

The difference is that after parsing the configuration file, the two systems 
differ in whether or not they will require client certificates:


openbsd: Requiring client certificates.

linux: Not requiring any client certificates



What the message means depends on whether the http(s)_port, a 
cache_peer, or the outgoing https:// context is being initialized, and 
on which options that directive was configured to use (including the 
default security settings).


Looking at your logs I see:


On OpenBSD Squid detects the presence of an IPv6 split-stack for 
networking. Which means Squid has to clone the internal representation 
of all your squid.conf *_port settings and setup separate contexts and 
state for IPv4 versions of them.
 There seems to be a bug in that cloning process which is turning on 
the TLS client certificates feature. Please report this to our bugzilla 
so it does not get forgotten until fixed.



On Linux Squid is detecting IPv6 disabled in the kernel networking 
setup. So it is disabling its own IPv6 support. That said Linux has a 
hybrid-stack networking so the cloning would not happen anyway. If IPv6 
were enabled here it would be somewhat more obvious that the IPv4 ports 
on OpenBSD are the odd ones.



For a workaround you may be able to set sslflags=DELAYED_AUTH on the 
http*_port lines and leave your ACLs as they are without anything 
requiring a client certificate.
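A sketch of that workaround (port options and cert path copied from the log below; whether this is appropriate depends on your full squid.conf):

```
# Illustrative sketch: DELAYED_AUTH asks Squid not to request a client
# certificate during the TLS handshake unless ACL processing needs one
https_port 3129 intercept ssl-bump \
    cert=/opt/osec/etc/ssl_cert/squid-ca-cert+key.pem \
    generate-host-certificates=on \
    dynamic_cert_mem_cache_size=16MB \
    sslflags=DELAYED_AUTH
```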






# openbsd

root@openbsd:~# /root/squid.init conftest



2021/03/28 10:47:31| Processing: http_port 3128 ssl-bump 
cert=/opt/osec/etc/ssl_cert/squid-ca-cert+key.pem generate-host-certificates=on 
dynamic_cert_mem_cache_size=16MB
2021/03/28 10:47:31| Processing: https_port 3129 intercept ssl-bump 
cert=/opt/osec/etc/ssl_cert/squid-ca-cert+key.pem generate-host-certificates=on 
dynamic_cert_mem_cache_size=16MB



2021/03/28 10:47:31| Processing: tls_outgoing_options 
cafile=/opt/osec/etc/pki/tls/certs/ca-bundle.crt




2021/03/28 10:47:31| Initializing https:// proxy context
2021/03/28 10:47:31| Requiring client certificates.




2021/03/28 10:47:31| Initializing http_port [::]:3128 TLS contexts
2021/03/28 10:47:31| Using certificate in 
/opt/osec/etc/ssl_cert/squid-ca-cert+key.pem
2021/03/28 10:47:31| Using certificate chain in 
/opt/osec/etc/ssl_cert/squid-ca-cert+key.pem
2021/03/28 10:47:31| Adding issuer CA: /C=US/ST=Kansas/L=Overland 
Park/O=Company, Inc./OU=Area 
77/CN=local.corp.dom/emailAddress=sslad...@company.com
2021/03/28 10:47:31| Using key in /opt/osec/etc/ssl_cert/squid-ca-cert+key.pem
2021/03/28 10:47:31| Not requiring any client certificates




2021/03/28 10:47:31| Initializing http_port 0.0.0.0:3128 TLS contexts
2021/03/28 10:47:31| Using certificate in 
/opt/osec/etc/ssl_cert/squid-ca-cert+key.pem
2021/03/28 10:47:31| Using certificate chain in 
/opt/osec/etc/ssl_cert/squid-ca-cert+key.pem
2021/03/28 10:47:31| Adding issuer CA: /C=US/ST=Kansas/L=Overland 
Park/O=Company, Inc./OU=Area 
77/CN=local.corp.dom/emailAddress=sslad...@company.com
2021/03/28 10:47:31| Using key in /opt/osec/etc/ssl_cert/squid-ca-cert+key.pem
2021/03/28 10:47:31| Requiring client certificates.




2021/03/28 10:47:31| Initializing https_port [::]:3129 TLS contexts
2021/03/28 10:47:31| Using certificate in 
/opt/osec/etc/ssl_cert/squid-ca-cert+key.pem
2021/03/28 10:47:31| Using certificate chain in 
/opt/osec/etc/ssl_cert/squid-ca-cert+key.pem
2021/03/28 10:47:31| Adding issuer CA: /C=US/ST=Kansas/L=Overland 
Park/O=Company, Inc./OU=Area 
77/CN=local.corp.dom/emailAddress=sslad...@company.com
2021/03/28 10:47:31| Using key in /opt/osec/etc/ssl_cert/squid-ca-cert+key.pem
2021/03/28 10:47:31| Not requiring any client certificates




2021/03/28 10:47:31| Initializing https_port 0.0.0.0:3129 TLS contexts
2021/03/28 10:47:31| Using certificate in 
/opt/osec/etc/ssl_cert/squid-ca-cert+key.pem
2021/03/28 10:47:31| Using certificate chain in 
/opt/osec/etc/ssl_cert/squid-ca-cert+key.pem
2021/03/28 10:47:31| 

Re: [squid-dev] Extremely questionable code in Basic authentication module

2021-03-29 Thread Amos Jeffries

On 25/03/21 10:18 am, Joshua Rogers wrote:

Hi there,

I was looking at the file src/auth/basic/UserRequest.cc, in 
function Auth::Basic::UserRequest::module_direction:



     case Auth::Ok:
         if (user()->expiretime + static_cast<Auth::Basic::Config *>(Auth::SchemeConfig::Find("basic"))->credentialsTTL <= squid_curtime)
             return Auth::CRED_LOOKUP;
         return Auth::CRED_VALID;

     case Auth::Failed:
         return Auth::CRED_VALID;

I was a bit alarmed that if an auth fails, it returns Auth::CRED_VALID.
Why is CRED_ERROR or CRED_CHALLENGE not used here?



CRED_VALID is because "Login Failed" is a valid state for a user's 
credentials to have. It is also a final state (thus no CRED_CHALLENGE). 
No error has occurred in Squid or the helper to reach that state (thus 
no CRED_ERROR).


These CRED_* values indicate what stage of processing the HTTP 
authentication sequence has reached.




In negotiate and NTLM code, there is a note:
"XXX: really? not VALID or CHALLENGE?" when CRED_ERROR is returned.



Those auth schemes are tied to the client's TCP connection and cannot be 
re-authenticated with different values on any pipelined messages. That 
causes some surprising/nasty effects on HTTP features including the auth 
internals.



Thankfully Squid doesn't really rely on this return value to determine 
whether a login is correct or not


Good. That return value is not saying anything about the user login 
success/failure. It is about the HTTP auth negotiation message(s) and 
any related helper processing all having reached a final decision.



as it 
calls authenticateUserAuthenticated() which eventually 
checks credentials() == Auth::Ok. It all seems like quite a round-about 
method, however.


According to 
 
each of these calls should return CRED_CHALLENGE.




The XXX is kind of outdated now. Part of the question's answer is known - 
CHALLENGE cannot be sent because Failed is a final state. But it is still 
unknown what the effects of CRED_VALID vs CRED_ERROR would be. I suspect 
it may be the re-authentication situation of NTLM/Negotiate causing 
complications.



Amos
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] Questionable default 'range_offset_limit ' option

2021-03-19 Thread Amos Jeffries

On 19/03/21 6:13 pm, Joshua Rogers wrote:

Hi there,


According to http://www.squid-cache.org/Doc/config/range_offset_limit/ 
, 
'range_offset_limit' is by default 'none'.




This directive is an access control like http_access, but instead of 
performing an allow/deny action it sets a limit (or not) on any matching 
transactions.


The 'none' value prevents this directive from setting a limit. For 
example, to apply a 5KB limit on Internet visitors, a 10KB limit on LAN 
clients, and no limit on localhost traffic, you would write:


  range_offset_limit none localhost
  range_offset_limit 10 KB localnet
  range_offset_limit 5 KB
(there is an implicit 'all' if you don't specify any ACLs to match)


So the default for this directive - if you don't configure any 
range_offset_limit lines at all - is not to set/force a limit.





However in HttpRequest.cc, it says it is by default 0:
rangeOffsetLimit = 0; // default value for rangeOffsetLimit



HttpRequest::rangeOffsetLimit is the limit actually being used on one 
specific transaction.


The default here is 0 bytes, meaning disabled: only the bytes requested 
by the client will be fetched. "range_offset_limit none" means that this 
non-limit will stay unchanged.




and then in HttpHdrRange.cc:
     if (limit == 0)
         /* 0 == disabled */
         return true;

     if (-1 == limit)
         /* 'none' == forced */
         return false;


So is 'none' -1, or 0 in this case?:)



"none" has different values depending on what type of thing it is the 
value of.



Amos
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] Squid Windows: System Requirements/Metrics

2021-03-06 Thread Amos Jeffries

Hi Anuj,

The details we have about Squid for Windows can all be found at 
.  You may want to 
also look at the Diladele website linked from that wiki page for any 
details they have found in relation to their Windows packages.



Amos
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] rfc1738.c

2020-10-29 Thread Amos Jeffries

On 30/10/20 12:17 am, Damian Wojslaw wrote:

Hello

I've been recently following the PR that addresses an issue with 
authentication in cachemgr.cc.
It was mentioned that rfc1738_do_escape could use changing so it doesn't 
return a static buffer.


The latest Squid has AnyP::Uri::Encode() which uses a caller-provided 
buffer. What needs doing now is finding old rfc1738*_escape() callers 
which can be replaced with the new API.


For example; code like urlCanonicalCleanWithoutRequest() where the data 
is stored in an SBuf, but converted to char* in order to use the rfc1738 
API.


Also an AnyP::Uri::Decode function needs adding so we can convert the 
rfc1738_*unescape() callers too.



I'm interested in working on it. I'm also interested in getting my hands 
in cachemgr.cc, as it was described as neglected in the PR. This last 
remark would give me hope that my speed of work wouldn't be too slow.
I'd probably require some mentoring in the process. If there are no 
NACKs, I'd like to start working on it.


The cachemgr.cc code is "neglected" primarily because it is deprecated. 
It has been replaced by reports served directly in HTTP by the proxy.



Amos
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] Jenkins situation

2020-08-09 Thread Amos Jeffries
On 5/08/20 1:26 pm, Amos Jeffries wrote:
> Hi all,
> 
> With the recent Jenkins randomly failing builds due to git pull / fetch
> failures I am having to selectively disable the PR Jenkins block on PR
> merging for some hrs.


Previous "normal" situation appears to be resuming. I am returning to
regular merge actions. Luckily only one 30min override was needed in
total and Anubis seems to have coped well enough.

Jenkins still has some issues doing git operations when the relevant
repository branch is being touched (eg more commits pushed by the time
Jenkins gets to testing). But those have been resolved with regular
Jenkins API kicks.

Amos
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


[squid-dev] Jenkins situation

2020-08-04 Thread Amos Jeffries
Hi all,

With the recent Jenkins randomly failing builds due to git pull / fetch
failures I am having to selectively disable the PR Jenkins block on PR
merging for some hrs.

Please do not mark any PRs with "M-cleared-for-merge" until further
notice. I will do this myself with coordination on Jenkins results. You
can use the "waiting-for-committer" instead if necessary.

Thank you all.

Amos
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] RFC: tls_key_log: report TLS pre-master secrets, other key material

2020-07-30 Thread Amos Jeffries
On 30/07/20 6:41 am, Alex Rousskov wrote:
> On 7/15/20 3:14 PM, Alex Rousskov wrote:
> 
>> I propose to add a new tls_key_log directive to record TLS
>> pre-master secret (and related encryption details) for to- and
>> from-Squid TLS connections. This very useful triage feature is common
>> for browsers and some networking tools. Wireshark supports it[1]. You
>> might know it as SSLKEYLOGFILE. It has been requested by several Squid
>> admins. A draft documentation of the proposed directive is at the end of
>> this email.
>>
>> [1] https://wiki.wireshark.org/TLS#Using_the_.28Pre.29-Master-Secret
>>
>> If you have any feature scope adjustments, implementation wishes, or
>> objections to this feature going in, please let me know!
> 
> 
> FYI: Factory is starting to implement this feature.
> 

Sorry I forgot to reply to this earlier.

Two design points:

1) It seems to me these bits are part of the handshake. So they would
come in either as members/args of the %handshake logformat macros (some
not yet implemented) or as secondary %handshake_foo macros in the style
the %cert_* macros use.


2) Please do use the logging logic implemented for access_log, just with
the next directive as list of log outputs to write at the appropriate
logging trigger time.

I accept the reasoning for not using the access_log directive. This will
need a new log directive with different trigger times for when output is
written there. However (most of) the implementation logic of access_log
should be usable for this new output.



>> We propose to structure this new directive so that it is easy to add
>> advanced access_log-like features later if needed (while reusing the
>> corresponding access_log code). For example, if users find that they
>> want to maintain multiple TLS key logs or augment log records with
>> connection details, we can add that support by borrowing access_log
>> options and code without backward compatibility concerns. The new
>> required "if" keyword in front of the ACL list allows for seamless
>> addition of new directive options in the future.
>>


Accepted, provided the directive *does* support access_log feature
addition. The plan below does not meet that criterion. Some changes
inline below to make it do so.


>> -- draft squid.conf directive documentation 
>>
>> tls_key_log
>>
>> Configures whether and where Squid records pre-master secret and
>> related encryption details for TLS connections accepted or established
>> by Squid. These connections include connections accepted at
>> https_port, TLS connections opened to origin servers/cache_peers/ICAP
>> services, and TLS tunnels bumped by Squid using the SslBump feature.
>> This log (a.k.a. SSLKEYLOGFILE) is meant for triage with traffic
>> inspection tools like Wireshark.
>>
>> tls_key_log  if ...
>>

Please allow extension points for options and modules:

  tls_key_log stdio: [options] if ...


The "stdio:" module name is to allow for sharing the access_log config
parser and future expansion to logging modules like daemon: which we
will doubtless be asked for later.



>> WARNING: This log allows anybody to decrypt the corresponding
>> encrypted TLS connections, both in-flight and postmortem.
>>
>> At most one log file is supported at this time. Repeated tls_key_log
>> directives are treated as fatal configuration errors. By default, no
>> log is created or updated.

With ACL support it seems reasonable to support multiple logs. We should
be able to re-use (with minor change to pass the list of log outputs to
the function) the logic access_log has for writing to a list of outputs.


>>
>> If the log file does not exist, Squid creates it. Otherwise, Squid
>> appends an existing log file.
>>
>> The directive is consulted whenever a TLS connection is accepted or
>> established by Squid. TLS connections that fail the handshake may be
>> logged if Squid got enough information to form a log record. A record
>> is logged only if all of the configured ACLs match.
>>
>> Squid does not buffer these log records -- the worker blocks until
>> each record is written. File system buffering may speed things up, but
>> consider placing this triage log in a memory-based partition.
>>
>> This log is rotated based on the logfile_rotate settings.
>>

Please don't use solely that directive. The new directive should have a
rotate=N option of its own, only using the global directive as a default
if that option is unset.
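
Putting these suggestions together, a configuration under the proposed
syntax might look like this (hypothetical - it illustrates the proposal
being discussed, not a shipped feature; the rotate= option and log path
are invented for the example):

```
# log TLS key material for non-localhost traffic, keeping 2 rotations
acl localhost src 127.0.0.1/32 ::1
tls_key_log stdio:/var/log/squid/tls_keys.log rotate=2 if !localhost
```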


Amos
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] TR: SQUID-4.12 build ACL_HELPER

2020-07-28 Thread Amos Jeffries
On 28/07/20 1:26 am, Ferdinand Michael wrote:
> Hello,
> 
>  
> 
> I have a problem with the compilation everything works except the
> ACL_helpers.
> 

I doubt that statement is correct. This line:

   checking for ldap.h... (cached) no

Says that a previous test for LDAP library dev files (eg by Basic auth
helper build) indicated LDAP is not available at all. That means your
Basic auth helper cannot be built either.


> 
> I don't understand why this does not want to build :
> 
> ./configure --prefix=/usr/local/squid  --with-default-user=proxy
> --with-openssl --enable-icmp --enable-basic-auth-helpers=LDAP
> 
>  
> 
> configure: external acl helper AD_group ... found but cannot be built
> 
> checking for ldap.h... (cached) no
> 
> checking for winldap.h... (cached) no
> 
> configure: external acl helper LDAP_group ... found but cannot be built
> 
> checking for w32api/windows.h... (cached) no
> 
> checking for windows.h... (cached) no
> 
>  
> 
> do you know the reason ?
> 


The required ldap.h (for non-Windows) or winldap.h (for Windows) header
file not existing on your build machine is the reason LDAP_group is not
building.

You omitted the checks leading up to the declaration that AD_group could
not be built. So no idea about that.


Amos
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] OpenSSL 3.0 support at last

2020-07-23 Thread Amos Jeffries
On 24/07/20 3:24 am, Christos Tsantilas wrote:
> On 23/7/20 7:08 π.μ., Amos Jeffries wrote:
>> Hi guys,
>>
>> OpenSSL 3.0 with their new GPL compatible license is becoming available
>> now in Debian and that means we can finally auto-enable all OpenSSL
>> features when building against that version.
>>
>> I am starting test build now to see how much breakage we have to work
>> through for a basic compile.
> 
> There are some deprecated functions and probably small API changes.
> 

I have found 2 functions that need different implementation and one
small API change.

Making the changes I am currently getting a thread lock hanging the
squid process. So there is something more to be found.
I have opened <https://github.com/squid-cache/squid/pull/694> to track this.


>>
>> Is anyone interested and able to assist with the feature testing to see
>> if there are any behaviour problems we need to fix in Squid and/or
>> report upstream to get fixed before their stable release.
>>
>>
>> @Christos; have you done any work or research in this direction already
>> that I should be aware of?
> 
> Nope.
> 

Okay. Thanks.


Amos
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


[squid-dev] OpenSSL 3.0 support at last

2020-07-22 Thread Amos Jeffries
Hi guys,

OpenSSL 3.0 with their new GPL compatible license is becoming available
now in Debian and that means we can finally auto-enable all OpenSSL
features when building against that version.

I am starting test build now to see how much breakage we have to work
through for a basic compile.

Is anyone interested and able to assist with the feature testing to see
if there are any behaviour problems we need to fix in Squid and/or
report upstream to get fixed before their stable release.


@Christos; have you done any work or research in this direction already
that I should be aware of?


Cheers
Amos
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] RFC: making TrieNode less memory-hungry

2020-06-30 Thread Amos Jeffries
On 20/06/20 9:13 am, Francesco Chemolli wrote:
> Hi all,
>   I'm looking at the TrieNode code, and while it's super fast, it's
> quite memory-hungry: each node uses 2kb of RAM for the children index
> and any moderately-sized Trie has plenty of nodes. On the upside, it's
> blazing fast.
> 
> How about changing it so that each node only havs as many children as
> the [min_char, max_char] range, using a std::vector and a min_char
> offset? Lookups would still be O(length of key), insertions may require
> shifting the vector if the char being inserted is lower than the current
> min_char, but the memory savings sound promising.
> 
> What do you think?
> 

Before doing anything, please look at how trie is implemented for
rbldnsd. It is both super fast and memory efficient. A lot of the
optimization analysis and work has already been done by some super smart
people we can probably leverage.

Also, keep in mind that trie is only used by ESI. So it is not exactly a
high-usage feature.


Amos
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


[squid-dev] Proposed focus for Squid-6

2020-06-30 Thread Amos Jeffries
I have been asked a few weeks ago about what the "goal for Squid-6" is
going to be.

The last few version we have focused on C++11 optimizations and code
upgrades. While the code is not entirely C++11 (and may never be) new
additions are routinely using and upgrading code to the improved
language features.


NP: this in no way affects the policies of handling things as they come
and releasing when it's ready. It just gives people some rough direction
to consider when struggling with selecting new work to start.


I am thinking it is time for a slight change, to at least add another
goal. The way held-back changes are going so far, I am thinking we
should aim at code pruning for this next release series. For that to be
a goal we need to start preparing for it and the user announcements
early (ie now) rather than in retrospect.


This would cover:

0) the ongoing project to clarify OS support and testing. Formally
removing some.

1) remove features that have been deprecated since Squid-3 days.
  - WAIS support
  - Replace ICP with HTCP

2) proposing some next features to be removed ASAP, possibly removing
them this release.
 - send-announce removal
 - SMB_LM helper removal

3) drop (all?) bitrotten code


4) statistics additions to measure feature use. To improve admins'
ability to answer our "are you using this feature" requests.


Thoughts anyone?




Amos
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] RFC: Modernizing sources using clang-tidy

2020-05-30 Thread Amos Jeffries
On 20/04/20 2:02 pm, Alex Rousskov wrote:
> Hello,
> 
> Squid sources contain a lot of poorly written, obsolete, and
> inconsistent code that (objectively) complicates development and
> (unfortunately) increases tensions among developers during review.
> 
> Some of those problems can be solved using tools that modify sources.
> Clang-tidy is one such tool: https://clang.llvm.org/extra/clang-tidy/
> It contains 150+ "checks" that can automatically fix some common
> problems by studying the syntax tree produced by the clang compiler.
> Understanding the code at that level allows clang-tidy to attack
> problems that simple scripts cannot touch.
> 
> I have not studied most of the clang-tidy checks, but did try a few
> listed at the end of this email. You can see the whole list of checks at
> https://clang.llvm.org/extra/clang-tidy/checks/list.html
> 
> 
> Here are a few pros and cons of using clang-tidy compared to our own
> custom scripts:
> 
> Pros:
> 
> * maintained and improved by others
> * can fix problems that our scripts cannot see
> * covers a few rules from C++ Core Guidelines and popular Style Guides
> * arguably less likely to accidentally screw things up than our scripts
> 

In order to switch we should be looking for a tool that improves over
the status-quo. Which is both astyle plus the custom scripts.

All of the above are high-level abstractions that the existing astyle
tool provides by itself. So no valid reason for changing is visible yet.


What I have seen from kinkie's work is a few cases of little things like 
the ability to fix whitespace inside () conditionals and before 
function/method parameter lists. Which is something astyle has not been 
able to do for us and would be difficult to script.



> Cons:
> 
> * Requires installation of clang, clang-tidy-10, bear. It is not
> difficult in a CI environment, but may be too much for occasional
> contributors.

 Likewise same issues with needing a specific astyle version. So this is
more of a non-Pro than a Con.


> 
> * Clang-tidy misses files that do not participate in a specific build
> (e.g., misses many compat/ files that are not needed for an actual
> build). Applying clang-tidy to all sources will be difficult.
> 
> * Clang-tidy misses code lines that do not participate in a specific
> build (e.g., lines inside `#if HEADERS_LOG` where HEADERS_LOG was not
> defined). Applying clang-tidy to all lines will be impractical.
> 
> * Clang-tidy would be difficult to customize or adjust (probably
> requires building clang from scratch and writing low-level AST
> manipulation code).
> 

These are all regressions. Severity varies, but IMO we will need to
solve them somehow in order to remove the astyle usage. Otherwise this
would end up being just an additional tool on the stack - rather than a
full replacement for astyle+scripts.


> * Clang-tidy is relatively slow -- the whole repository scan takes
> approximately 15-30 minutes per rule in my limited tests. Combining
> rules speeds things up, but it may still be too slow to run during every
> PR check on the current CI hardware.
> 

The current source-maintenance run takes 5-10 minutes on master today
and with the scripts/maintenance/ automation growing I expect that to
increase.


> * We do not have any clang-tidy experts on the development team (AFAIK).
> 

Not much of a con, we do not exactly have an astyle expert either. The
benefit of third-party tooling is that we don't need an expert. Just
someone able to read the documentation and test config settings when we
want to update the style policy.


> 
> I will itemize a few checks that I tried. The "diff" links below show
> unpolished and partial changes introduced by the corresponding checks.
> If we decide to use clang-tidy in principle, we will need to fine-tune
> each check options (at least) to get the most out of the tool.
> 
> * modernize-use-override
> 
> Adds "override" (and removes "virtual") keywords from class declarations.
> 
> This check is very useful not just because "override" helps prevent
> difficult-to-detect bugs but because it is very difficult to transition
> to using "override" _gradually_ -- some compilers reject class
> declarations that have a mixture of with-override and without-override
> methods. Moreover, adding override keywords to old class declarations is
> rather time-consuming because it is often not obvious (to a human)
> whether the class introduces a new interface or overrides and old one.
> 
> Diff: https://github.com/measurement-factory/squid/commit/d00d0a8
> 

I like.

> 
> * performance-...
> 
> Clang-tidy has a few checks focusing on performance optimizations. The
> following commit shows a combination of the following four checks:
> performance-trivially-destructible, performance-unnecessary-value-param,
> performance-for-range-copy, performance-move-const-arg
> 
> Diff: https://github.com/measurement-factory/squid/commit/1ae5d7c
> 

IMO this is something we should have a


> 
> * 

Re: [squid-dev] Squid command

2020-05-30 Thread Amos Jeffries
On 27/05/20 2:25 pm, pic rat rat wrote:
> Dear sir,
>
> We've found problem of squid program after config in squid.conf
> "ssl-bump generate-host-certificates=on,"

I hope that comma ',' is not in your config file. If it is, that would
be the problem.

> service is not run, however I remove "generate-host-certificate=on"
> service is normally starting.
> Could you please advise?

Please see your logs for information on why Squid is not starting.
Either cache.log or the system log should contain a message (or several)
indicating the problem.


>
> squid -v
> Squid Cache: Version 3.5.20
>

This is not the complete output. Build information is missing.


> OS
> cat /etc/os-release
> NAME="Red Hat Enterprise Linux Server"

Have you tried contacting RHEL help channels about this?

 First point of contact should be the vendor whose Squid package you are
using. They are the most likely to know of any patching, custom build
setting, or OS environment details that may be relevant to the issue.

 As a fall-back contact, the squid-users mailing list can be used to see
if the larger community has input or hints to help with problem diagnosis.


This squid-dev contact is for discussion of the code itself. For
example, once you have evidence of the code being a problem, to discuss
changes for a fix; or for assistance understanding where in the code to
look for such debugging.


Amos
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] cppunit -> googletest / gmock?

2020-05-30 Thread Amos Jeffries
On 31/05/20 5:27 am, Francesco Chemolli wrote:
> Hi all,
>    starting from a PR in a conversation with Alex about our current
> approach to unit testing being painful, I've checked what alternatives
> would we have and how practical would they be.
> 
> An easy first option would be googletest/googlemock.
> 
> On a lazy afternoon, I've tried seeing how useful/painful it would be to
> try it, by porting one test over - it's quite trivial and doesn't
> require mocking, so I'll try a more complicated one next - to start a
> conversation about the topic.
> 
> You can find the test branch at https://github.com/kinkie/squid/tree/gtest .
> I've only touched two files, a newly-created src/tests/testMemGtest and
> src/Makefile.am .
> 
> The output from the test run is at https://paste.ubuntu.com/p/3sgTDN7rNm/
> 
> What do you think?
> 
> My initial thoughts:
> - it is somewhat simpler and more powerful than cppunit
> - setting the test environment up is easy but at this time it can only
> be done from source. Adding it to the build farm images is straightforward

That is a problem. The unit tests are run by pretty much everyone
building Squid.

It is not a complete blocker, but having a process more complex than a
simple dependency install does pose a relatively major hurdle that any
framework has to get over to be of much utility.



> - the license is BSD 3-clause new
> (https://github.com/google/googletest/blob/master/LICENSE) 
> - googlemock promises to be vastly superior to our current approach

Where are you seeing evidence of this?

The main issue we have with cppunit itself is that when a test fails it
is not clear from the output which assertion failed, nor why. One is
left having to trigger the unit failure again manually and gdb from
there.

This can be worked around by following best-practice in unit test
implementation. But people contributing to Squid have not been doing so
consistently, and it is just a workaround.

I do see somewhat more verbose output in the logs, and slightly less
code to implement (no .h class). Which is a nice gain, but not what I
would call "vastly superior".


> - porting memTest took me about one hour, mostly caused by us including
> cppunit headers from squid.h (WUT? A PR is coming up to unentangle that)
> 

Converting tests from one framework to another is not a problem. We just
have nobody doing the legwork. Case in point being the old tests not
even using cppunit.


The main problem(s) we have with testing of Squid is dev participation:

 a) people are not writing tests to cover new code, and
 b) people are not writing/updating tests to cover bug fixes, and
 c) tests written are generally not following best practice design.

IIRC Alex and I have different ideas about the ideal focus of testing:
 * I prefer the micro-test approach to demonstrate a high quality proof
of code reliability.
 * Alex has stated a preference for high level black-box testing of
behaviour vs design requirements.


Amos
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] squid master build with alternate openssl fails

2020-05-10 Thread Amos Jeffries
On 10/05/20 7:53 pm, Amos Jeffries wrote:
> On 10/05/20 7:02 pm, Christos Tsantilas wrote:
>> On 8/5/20 5:50 μ.μ., Amos Jeffries wrote:
>>> Does this change resolve the issue for you?
>>
>> It is a step but this is not enough.
>>
>> I am attaching a patch which finally solved the issue. However still it
>> is not enough, there are other similar cases need to be fixed in
>> squid-util.m4 and probably in configure.ac
>>
> 
> That configure.ac change is wrong. It really should be checking for ' =
> "xyes" ' because this library is supposed to be auto-disabled. eg for
> the default value of nil.
> 
> 
> The defun'd macro line "set with_$squid_auto_lib = yes" should be
> changing with_openssl to "yes". If not, that is a bug.
> 

Your patch helped me track it down though. The whitespace around "+=" is
breaking those assignments.

I will submit a PR fixing this shortly.
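
For the record, the pitfall is easy to demonstrate in plain sh (a small
illustration of the shell behaviour, not the actual squid-util.m4 fix):

```shell
#!/bin/sh
# In POSIX sh, spaces around "+=" turn an intended assignment into a
# command invocation named LIBOPENSSL_PATH (which does not exist).
LIBOPENSSL_PATH=""
LIBOPENSSL_PATH += "-L/opt/ssl/lib" 2>/dev/null \
    || echo "not an assignment: sh looked for a command LIBOPENSSL_PATH"
echo "after broken append: '$LIBOPENSSL_PATH'"   # variable is still empty

# Likewise "set var = yes" replaces the positional parameters; it does
# not assign a shell variable:
set with_openssl = yes
echo "positional params now: $1 $2 $3"

# The portable spellings:
LIBOPENSSL_PATH="$LIBOPENSSL_PATH -L/opt/ssl/lib"
with_openssl=yes
echo "correct append: '$LIBOPENSSL_PATH' with_openssl=$with_openssl"
```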

Amos
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] squid master build with alternate openssl fails

2020-05-10 Thread Amos Jeffries
On 10/05/20 7:02 pm, Christos Tsantilas wrote:
> On 8/5/20 5:50 μ.μ., Amos Jeffries wrote:
>> Does this change resolve the issue for you?
> 
> It is a step but this is not enough.
> 
> I am attaching a patch which finally solved the issue. However still it
> is not enough, there are other similar cases need to be fixed in
> squid-util.m4 and probably in configure.ac
> 

That configure.ac change is wrong. It really should be checking for ' =
"xyes" ' because this library is supposed to be auto-disabled. eg for
the default value of nil.


The defun'd macro line "set with_$squid_auto_lib = yes" should be
changing with_openssl to "yes". If not, that is a bug.


Amos
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] squid master build with alternate openssl fails

2020-05-08 Thread Amos Jeffries
On 9/05/20 2:58 am, Alex Rousskov wrote:
> On 5/8/20 10:12 AM, Christos Tsantilas wrote:
> 
>> Squid master 699ade2d fails to build with an alternate OpenSsl, when the
>> "--with-openssl=/path/to/openssl" is used.
> 
> Francesco, builds with custom OpenSSL paths are not that uncommon,
> especially among SslBump admins. Would you be able to test that kind of
> configuration in one of the Jenkins tests? It can be even combined with
> other custom-path tests. Or is this too custom/special to warrant an
> automated test in your opinion?
> 
> 
>> I think that the issue added with the commit 245314010.
> 
> I speculate that the bug is related to the disappearance of the
> LIBOPENSSL_PATH assignment in that commit. We still use that variable,
> but we no longer set it.
> 
> 
> Amos, would you be able to fix this?

It is set by $3_PATH in the SQUID_OPTIONAL_LIB macro, then set into
SSLLIB when the files are confirmed.


Amos
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] squid master build with alternate openssl fails

2020-05-08 Thread Amos Jeffries
Does this change resolve the issue for you?

diff --git a/acinclude/squid-util.m4 b/acinclude/squid-util.m4
index 7f5a72e5b..5860b690e 100644
--- a/acinclude/squid-util.m4
+++ b/acinclude/squid-util.m4
@@ -188,9 +188,9 @@ AC_DEFUN([SQUID_OPTIONAL_LIB],[
   squid_auto_lib=`echo $1|tr "\-" "_"`
   set with_$squid_auto_lib = no
   AC_ARG_WITH([$1],AS_HELP_STRING([--with-$1],[Compile with the $2
library.]),[
-AS_CASE(["$with_$1"],[yes|no],,[
-  AS_IF([test ! -d "$with_$1"],AC_MSG_ERROR([--with-$1 path does
not point to a directory]))
-  with_$squid_auto_lib=yes
+AS_CASE(["$withval"],[yes|no],,[
+  AS_IF([test ! -d "$withval"],AC_MSG_ERROR([--with-$1 path does
not point to a directory]))
+  set with_$squid_auto_lib = yes
   AS_IF([test -d "$withval/lib64"],[$3_PATH += "-L$withval/lib64"])
   AS_IF([test -d "$withval/lib"],[$3_PATH += "-L$withval/lib"])
   $3_CFLAGS="-I$withval/include"

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] RFC: cacheMatchAcl

2020-04-04 Thread Amos Jeffries
On 4/04/20 7:49 pm, Francesco Chemolli wrote:
> I am not sure about what you recommend to do here.
> This cache is IMO over complicated and it breaks layering.
> I’m mostly done in a PR replacing the dlink with a std::list but without
> changing the overall design. It does kill a few tens of lines of code
> and is clearer to read tho.
> 

Well, if you have already done the work it might as well be finished up.
I was thinking you had just done the investigation and not started
refactoring yet.

Amos
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] RFC: cacheMatchAcl

2020-04-04 Thread Amos Jeffries
On 4/04/20 3:34 am, Alex Rousskov wrote:
> On 4/3/20 7:25 AM, Francesco Chemolli wrote:
> 
>>   I'm looking at places where to improve things a bit, and I stumbled
>> across cacheMatchAcl . It tries hard to be generic, but it is only ever
>> used in ACLProxyAuth::matchProxyAuth . Would it make sense to just have
>> a specialised cache for proxyauth?
> 
> I wonder whether proxy_auth is special in this context:
> 
> 1. Is proxy_auth cache radically different from other ACL caches such as
> external ACL cache? Or did we just not bother unifying the code
> supporting these two caches?
> 

Pretty much yes, we have not done the legwork. Almost every component in
Squid which deals with externally provided state has some form of ad-hoc
cache. If we are lucky they use a hash or dlink. At least one uses splay
(ouch).


One of my background projects in the effort to empty the PR queue this
year is to implement a proper CLP Map - specifically for PR 30, to resolve
the LruMap disagreement blocking it. That would be a good container
to use for all these small state-data caches all over Squid: keyed
access with a dual TTL and LFU (fading) removal mechanism.
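For illustration, the dual TTL-plus-fading-LFU idea might be sketched like
this (a language-neutral mock-up of the mechanism described above, not
Squid's actual ClpMap API; all names here are invented):

```python
import time

class ClpMapSketch:
    """Illustrative keyed cache with a TTL and a fading use counter."""

    def __init__(self, capacity, ttl, now=time.monotonic):
        self.capacity = capacity
        self.ttl = ttl
        self.now = now
        self.entries = {}  # key -> [value, expiry_time, use_count]

    def get(self, key):
        entry = self.entries.get(key)
        if entry is None:
            return None
        value, expires, uses = entry
        if self.now() >= expires:       # TTL removal on access
            del self.entries[key]
            return None
        entry[2] = uses + 1             # bump frequency on each hit
        return value

    def put(self, key, value):
        if key not in self.entries and len(self.entries) >= self.capacity:
            self._evict()
        self.entries[key] = [value, self.now() + self.ttl, 1]

    def _evict(self):
        # Prefer dropping expired entries; otherwise drop the
        # least-frequently-used one and halve the remaining counters
        # so old popularity fades over time.
        t = self.now()
        for k in [k for k, e in self.entries.items() if t >= e[1]]:
            del self.entries[k]
        if len(self.entries) >= self.capacity:
            victim = min(self.entries, key=lambda k: self.entries[k][2])
            del self.entries[victim]
            for e in self.entries.values():
                e[2] = max(1, e[2] // 2)   # LFU "fading"
```

The point of the sketch is only that one container can serve many small
state caches: callers see plain keyed get/put, while expiry and
frequency-based removal stay internal.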

If this ACL cache is not causing issues already we can wait until that
gets submitted for review.


> 2. Do some other ACLs cache nothing just because we did not have enough
> time to add the corresponding caching support? Or do proxy_auth and
> external ACL poses some unique properties that no other ACL already has
> or likely to have in the foreseeable future?

The only thing special is that the cache they use is accessed exclusively
by them.

IDENT, ASN, and DNS-based ACLs also use caches. But those are a bit detached
from the ACL code itself (e.g. fqdncache) since other code sometimes
accesses the cache directly for other uses.


Amos


Re: [squid-dev] squid.conf future

2020-02-24 Thread Amos Jeffries
On 25/02/20 6:11 am, Alex Rousskov wrote:
> On 2/24/20 3:11 AM, Amos Jeffries wrote:
> 
>> While doing some polish to cf_gen tool (PR #558) I am faced with some
>> large code edits to get that tool any more compliant with our current
>> guidelines. With that comes the question of whether that more detailed
>> work is worth doing at all ...
> 
> Probably not. Even PR #558 changes might be going a bit too far (or not
> far enough). Ideally, we should agree on key code cleanup principles
> before doing such cleanup, to minimize tensions in every such PR.
> Cleanup for the sake of cleanup should be done under a general
> agreement/consent rather than ad-hoc. I am working on the corresponding
> suggestions but need another week or so to post a specific proposal.
> 
> 
>> For the future I am considering a switch of cf.data.pre to a format like
>> SGML or XML which we can better generate the website contents from.
> 
> I do support fixing cf.data.pre-related issues -- they are a well-known
> constant (albeit moderate) pain for developers and users alike. However,
> using writer-unfriendly formats such as XML is not the best solution
> IMO. SGML may be a good fit, but that concept covers such a wide variety
> of languages that it is difficult to say anything specific about it in
> this context (e.g., both raw XML and wiki-like markups can be valid
> SGML!). If you meant something specific by "SGML", please clarify.

Exactly. We already have the Linuxdoc toolchain used for release notes
etc. So long as we have a simple set of rules about the markup used for
the bits that cf_gen needs to pull out for code generation, we can use any
of the more powerful markup in the documentation comment parts.


> 
> Automated rendering of squid.conf sources, including web site content
> generation, should be straightforward with any good source format,
> including writer-friendly formats. Thus, web site generation is not an
> important deciding criteria here AFAICT.

It is an existing use case for documentation output that we need to maintain.
We can still decide to forgo adding nice-to-have outputs that do not yet exist.


> 
> IMO, an ideal markup language for cf.data.pre (or its replacements)
> would satisfy these draft high-level criteria:
> 
> 1. Writer-friendly. Proper whitespace, indentation, and other
> presentation features of the _rendered_ output are the responsibility of
> renderes, not content writers. Decent _sources_ formatting should be
> automatically handled by popular modern text editors that developers
> already use. No torturing humans with counting tags or brackets.

This nullifies the argument that XML is torturous. Good editing tools
can handle XML easily.

For writers dealing with the tags directly, a simple SGML markup is
better, though not by a huge amount.


> 
> 2. Expressive enough to define all the squid.conf concepts that we want
> to keep/support, so that they can be rendered beautifully without hacks.
> For example, if we agree that those sections are a good idea, then this
> item includes support for introduction sections that define no
> configuration options themselves.

What are you calling "squid.conf concepts" here?


> 
> 3. Supports documentation duplication avoidance so that we do not have
> to duplicate a lot of text or refer the reader to directive X for
> details of directive Y functionality.
> 

The XML idea supports that. I am not sure about SGML.

All the other text syntaxes I'm aware of lack nice writer-friendly
referencing. The YAML-like one we currently have is a case in point.



> 4. Allows for automated validation of internal cross-references (and
> possibly other internal concepts that can be validated). Specification
> of these cross-references is covered by item 2.
> 
> 5. Allows for automated spellchecking without dangerous exceptions.
> 

Any syntax we choose, with good tooling, should support that. If not, the
requirement to translate between formats will at least involve moving
the text parts into a format that can be spell-checked (e.g. HTML).


> 6. Git-friendly: Adding two new unrelated directives does not lead to
> conflicting pull requests.

This is unrealistic so long as the source code remains in one file. Only
edits to independent files are guaranteed not to conflict.

What I am considering is a change to the internal syntax within
cf.data.pre. At most a filename/extension change to match. It remains a
source code file like any other.


> 
> 7. Either already well-known or easy to learn by example (as far as
> major used concepts are concerned).
> 

AFAIK, that effectively means SGML or XML.


> 8. Can be easily parsed using programming languages that our renderers
> are (going to be) written in (e.g., using existing parser libraries). We
> should probably discuss

[squid-dev] squid.conf future

2020-02-24 Thread Amos Jeffries
Hi all,

While doing some polish to cf_gen tool (PR #558) I am faced with some
large code edits to get that tool any more compliant with our current
guidelines. With that comes the question of whether that more detailed
work is worth doing at all ...


For the future I am considering a switch of cf.data.pre to a format like
SGML or XML which we can better generate the website contents from.

The main point in favour of these is that we already have infrastructure
in place for ESI and release notes. It would be less work to re-use one
of those than integrate a new library or tooling for some other format.


Amos


Re: [squid-dev] Want to integrate squid github to Jenkins CI

2020-01-22 Thread Amos Jeffries
On 23/01/20 2:45 am, Justin Michael Schwartzbeck wrote:
> The SHA list sounds great. Thanks for that. I notice that 4.10 is not
> there? Is it not considered "stable" officially?
> 

Ah, seems a small bug in our server scripts. Fixed now.

Amos


Re: [squid-dev] Want to integrate squid github to Jenkins CI

2020-01-22 Thread Amos Jeffries
On 22/01/20 5:30 pm, Justin Michael Schwartzbeck wrote:
> Hi Amos, thanks for replying.
> 
> So I guess maybe I need to narrow this down a little bit more. Is there
> some programmatic way that I can get the *latest stable release*
> *version* and *source download link*?
> Right now I can do this by navigating to the downloads page:
> http://www.squid-cache.org/Versions/
> 
> Scroll to "Stable Versions" under source code packages, and see that
> 4.10 is the latest, along with a link.
> I guess I could write a script to parse the HTML on that page and find
> this information, but that is rather clunky, and if the page format ever
> changes then my script will be broken. Is there another way that you are
> aware of?

There are several ways.

1) You can fetch the FTP directory listing from



2) You can fetch the checksums file from that directory
 and process it rather
than the directory listing. This is sometimes easier for HTTP mirrors of
the FTP service.


Those contain only the latest 2 releases from each series. This way you
can track when we change stable series - though manual review is advised
at the changeover, so you may want to make that part just a notice for
attention rather than an auto-build.


Any permanent URLs you need linking back to the website for HTTP
download, docs, or FTP archive (all released tarballs) can be
synthesized from the version number.
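As a sketch of option 2, a poller could parse the checksums listing for
tarball names and synthesize the permanent URLs. The tarball naming
(squid-N.NN.tar.*) and URL layout used here are assumptions to verify
against the live site, not a documented API:

```python
import re

def latest_squid_version(checksum_lines):
    """Pick the highest release version named in a checksums-file listing.

    Assumes lines mention tarballs like 'squid-4.10.tar.gz'; the exact
    checksum file format may differ.
    """
    versions = set()
    for line in checksum_lines:
        for match in re.finditer(r'squid-(\d+(?:\.\d+)+)\.tar', line):
            # Compare as integer tuples so 4.10 sorts after 4.9.
            versions.add(tuple(int(p) for p in match.group(1).split('.')))
    if not versions:
        return None
    return '.'.join(str(p) for p in max(versions))

def release_urls(version):
    # Permanent URLs can be synthesized from the version number alone;
    # the major version selects the vN directory. Layout is an assumption.
    major = version.split('.')[0]
    base = 'http://www.squid-cache.org/Versions/v' + major
    return {
        'tarball': base + '/squid-' + version + '.tar.xz',
        'notes': base + '/squid-' + version + '-RELEASENOTES.html',
    }
```

Comparing versions as integer tuples matters: a plain string comparison
would rank "4.9" above "4.10".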


Amos


Re: [squid-dev] Want to integrate squid github to Jenkins CI

2020-01-20 Thread Amos Jeffries
On 21/01/20 12:52 pm, agent_js03 wrote:
> Hi all,
> 
> I am putting together a squid + content filter solution using docker and
> kubernetes.
> Right now I am setting up a CI system in Jenkins so that when there is a new
> release of squid, it will pull the code, build a new container,and then
> publish the image to docker hub. So basically I am wanting to know how the
> github works. I only want this for new release versions. For example, the
> current latest release is 4.10:
> 
> http://www.squid-cache.org/Versions/
> 
> So when latest updates again, i.e. to 4.11, what happens on github? Is there
> a latest tag that gets moved? Or does a certain branch get updated or
> something? Looking for a way to trigger.


Releases happen in a separate repository; the changes on the main
repository appear as regular PRs to the vNN branches. So I do not think
there is any specific trigger to use unless you are also happy
rebuilding when backports happen (which might be good, but YMMV).

We do tag the release commit with a SQUID_* tag shortly after the
release has actually been bundled. So you could check for those
every so often. Releases are intended to take place on the first weekend of
each month - though, as with this month, there can be some delays caused
by external situations.
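For example, a periodic job could poll the repository tags and look for new
SQUID_* entries. The tag naming assumed here (SQUID_<major>_<minor>, dots
replaced by underscores) is an assumption to verify against the actual
repository:

```python
import re
import subprocess

def squid_release_tags(ls_remote_output):
    """Extract release versions from `git ls-remote --tags` output.

    Skips peeled '^{}' refs and any tags not matching the assumed
    SQUID_<digits>_<digits>... pattern.
    """
    tags = {}
    for line in ls_remote_output.splitlines():
        m = re.search(r'refs/tags/SQUID_(\d+(?:_\d+)+)$', line)
        if m:
            version = m.group(1).replace('_', '.')
            sha = line.split()[0]
            tags[version] = sha
    return tags

def fetch_release_tags(repo='https://github.com/squid-cache/squid.git'):
    # A cron or Jenkins job could diff this result against the previously
    # seen set of tags to decide when to trigger a rebuild.
    out = subprocess.run(['git', 'ls-remote', '--tags', repo],
                         capture_output=True, text=True, check=True)
    return squid_release_tags(out.stdout)
```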


Amos


Re: [squid-dev] Efficient FD annotations

2020-01-09 Thread Amos Jeffries
On 8/01/20 3:39 am, Alex Rousskov wrote:
> On 1/7/20 1:39 AM, Amos Jeffries wrote:
>> On 7/01/20 4:28 am, Alex Rousskov wrote:
>>> For the record: The ideas below are superseded by the concept of the
>>> code context introduced in commit ccfbe8f, including the
>>> fde::codeContext field. --Alex
> 
>> If you want to go that way (replace fde:note with fde:codeContext)
> 
> I would not replace fde::note with fde::codeContext. I would keep
> fde::note as a basic indication of the current FD purpose/scope. This
> can be done cheaply using string literals or constant SBufs.
> 

In that case, why is this thread being revived?
AIUI your proposals were alternatives to PR 270 - ways to replace the
fde::note field instead of just updating it to SBuf.


> 
>> we are going to have to do a security audit on the values displayed
>> by the CodeContext objects. That is due to how the fde::note are sent
>> over the public network in clear-text transactions for
>> mgr:filedescriptors report.
> 
> Overall, I doubt such an audit is a good idea -- only the Squid admin
> can correctly decide whether it is OK to expose transaction information
> in cache manager responses[1,2].

See responses to [1] and [2] below. Since you are apparently not seeking
to replace the fde::note field in the mgr:filedescriptors report with a
fde::codeContext display, please take this as a general security policy
for handling that type of change.



> If there is demand for limiting that
> exposure, I would rather add a configuration directive that would allow
> the admin to control whether Squid is allowed to report context in cache
> manager responses, error pages, etc.

We do not have to default such an option to publishing everything on the
off chance it is safe. Our objective here is to prevent CVEs occurring.
To achieve that we should default to eliding sensitive details from
public view unless configured to show them.

The details I am thinking of most prominently here are things like
credentials and tokens in the auth processing contexts. URL history and
keys in the SSL-Bump context. Access to Squid internal memory spaces on
the server. There are probably others we will uncover later. Anything
that could be reported as a sensitive information leak and assigned a CVE.


> 
> Alex.
> 
> [1] Some of the transaction context is already exposed in the current
> cache manager responses. We may want to add more details or report fewer
> details, but there is no paradigm shift here.

"some" renders this argument irrelevant. The leak issues will be
cropping up about the *new* data if we let that contain anything sensitive.

The purpose of an audit is to check retrospectively that those
not-yet-published details are safe for publishing. It will either find
everything is OK as-is, or that we need to add a censoring print
operator for publicly visible displays to use.


> 
> [2] In some deployment environments, cache manager responses are
> delivered over secure channels.
> 

"some" renders this argument irrelevant. The leak issues will be
cropping up where they are *not* secured.


Amos


Re: [squid-dev] Timeouts for abandoned negative reviews

2020-01-09 Thread Amos Jeffries
On 9/01/20 11:20 am, Alex Rousskov wrote:
> Hello,
> 
> Squid GitHub pull requests have the following problem: A core
> developer can stall PR progress by submitting a negative review and then
> ignoring the PR (despite others reminding them that the reviewer action
> is required). Such stalled PRs cannot be merged because our policies
> strictly prohibit merging of vetoed PRs.

The "problem" you are describing is what I see as a core principle of
the review process.

Consider the case of Joe Bloggs coming along tomorrow and submitting a PR
deleting all of ICAP support from Squid.
 day 1: one reviewer gives it a no-go vote.
 day N: Auto-Merge? oops.


> 
> This problem has affected many PRs.

This situation is not new. I have 400-800 patch submissions from before
the github days that got blocked for various reasons and need polishing
up to re-submit.

One of the oldest is a still-active patch, submitted in Oct 2010
for improved talloc support. It is being used by some clients from that
period, and is blocked by a reviewer who did not like how the author could
not definitively prove that it would work with all compilers on every
operating system.


> Collection of meaningful stats is
> prohibitively expensive, but there are ~50 PRs that were open for 100+
> days and many of them are not abandoned/irrelevant. Here are some of the
> worst examples (the stalled day counts below are approximate):
> 
> * PR 369: stalled for 310 days (and counting)
>   https://github.com/squid-cache/squid/pull/369
> 
> * PR 59: 120-480 days, depending on when you think the PR was stalled
>   https://github.com/squid-cache/squid/pull/59
> 
> * PR 443: stalled for 100+ days (and counting)
>   https://github.com/squid-cache/squid/pull/443
> 

* PR 30 blocked because the reviewer wants author to fix bugs in
pre-existing code.

(IMO the 'days' metric is less indicative than the PR number itself.)


Also, at a fundamental level I object to the categorization of any open
PRs as "abandoned".

There is a lot of maintainer work in the background which nobody ever sees.

Particular to this proposal: I regularly review *all* PRs in a quick scan
to see where progress can happen, as I did for the patches queue under the
pre-github systems. What gets omitted is a post to every PR saying "I
looked at this today - decided it was too much work for the next {1,2,3}
hours I have available".

 [ By regular I mean at least once a week. I have two regular days with
usually 3-4 hrs free (you will see most larger audits happen these
days). And 3-4 whole days in a month (you might see _one_ of the larger
PRs worked on each of those days, or it may be worked on but not pushed
yet). ]



> Stalled PRs may significantly increase development overheads and badly
> reflect on the Squid Project as a whole. I have heard many complaints
> from contributors frustrated with the lack of Project responsiveness and
> accountability. Stalled PRs have also been used in attempts to block
> features from being added to the next numbered release.

Interesting statement there. Particularly since you and I are
essentially the only reviewers.

Are you admitting reason for PR 358 not being approved yet?

I certainly have not done such underhanded politics. When I want to
block a feature for next release I state so clearly in the PR. Though
there is scant reason to block anything from merging to master when the
code is good - no numbered releases come from there.



> 
> While 100-day stalling is unheard of in other open source projects I
> know about, the problem with unresponsive reviewers is not unique to
> Squid. The best (applicable to the Squid Project) solution that I know
> of is a timeout:
> 
> If the reviewer remains unresponsive for N days, their negative review
> can be dismissed. The counting starts from a well-defined event S, and
> there are at least two reminder comments addressed at the reviewer (R1
> days after S and R2 days before the dismissal).
> 
> Do you think timeouts can solve the problem with stalled PRs if Project
> contributors do not attempt to game the system? Can you think of a
> better solution?
> 

I do not think this will solve stalled PRs.

It may lead to better communication when reviewers are forced to post
regularly about why no progress is happening. But do we really prefer PRs
littered with a long history of that? Or a clean history with the 'stalled'
state easily found?
 As the reviewer most likely to be hit by these notices, I much prefer
the PR to end with the thing needing action than to have to read its
old history to figure out whether I can work on it immediately (see
above about the weekly scan through).
 We could use a tag set by the party suspecting a stall and unset by the
reviewer when they next pay attention to the PR.


Also, IMO it will just change into a different type of stall, add the risk
of the Joe Bloggs case above, and revert Squid to the bygone situation where
core developers were regularly committing code that broke trunk/master
because they

Re: [squid-dev] Efficient FD annotations

2020-01-06 Thread Amos Jeffries
On 7/01/20 4:28 am, Alex Rousskov wrote:
> For the record: The ideas below are superseded by the concept of the
> code context introduced in commit ccfbe8f, including the
> fde::codeContext field. --Alex
> 

If you want to go that way (replace fde:note with fde:codeContext) we
are going to have to do a security audit on the values displayed by the
CodeContext objects. That is due to how the fde::note are sent over the
public network in clear-text transactions for mgr:filedescriptors report.


In regards to PR 70:
  Most of the dynamically created notes come from the URL field of
HttpRequest, which became an SBuf while this PR was on hold.
 So the performance hit is much reduced from what worried you in the
earlier review. In fact, we stand to gain the removal of a c_str() in
the common path and an xstrndup() on all SMP worker shared FDs.


Amos


Re: [squid-dev] Squid-5 status update and RFI

2019-12-30 Thread Amos Jeffries
On 31/12/19 3:01 am, Alex Rousskov wrote:
> On 12/30/19 4:46 AM, Amos Jeffries wrote:
>>
>> The v5 branch will be bumped to master HEAD
>> commit in a few hours then the documentation update PRs for stage 2 will
>> proceed.
> 
> I would wait for all pending v5 changes to be committed to master before
> pointing v5 to master's HEAD. There is no pressure to commit to master
> anything that should not be in v5 right now.


The problem with that plan is that most of your requested "should be in v5"
list consists of new features. We already have enough features to make v5 a release.

"Just one more feature" is a very slippery slope that we have been
sliding down for most of this past year already. It is not a new one
either; you should well remember the long and painful release process
for v3.0, v3.2 and v4 when we waited on or accepted late feature
additions. It _always_ results in much longer and slower testing phases.


Truth is, most of what we *really* need in v5 is fixes for the bugs in
all those feature-creep additions that got into master-to-be-v5 since my
previous arbitrary 'Feb 2019' branching plan. Were it not for those, v5
would be stable, or already fixing the bugs we have yet to find in those
initial features.

Frequent releases -> fewer feature changes -> fewer new bugs -> happier
community. The sequence is simple and well-known.


Likewise our versioning policy (published since 2008):

 *10* features for a new major version.
 Bi-Monthly stable point release.
 Monthly beta of next version.
 Daily alpha of development work.


PS. I see you do have a sense of humour. "There is no pressure to commit
to master". Thanks for the laugh :-). Though its NYE not April 1st.

Amos


Re: [squid-dev] Squid-5 status update and RFI

2019-12-30 Thread Amos Jeffries

Summary:

 
<https://wiki.squid-cache.org/ReleaseProcess#General_Release_Process_Guidelines>

 Stage 1 is now complete. The v5 branch will be bumped to master HEAD
commit in a few hours then the documentation update PRs for stage 2 will
proceed.




On 5/09/19 10:37 pm, Amos Jeffries wrote:
> Hi all,
>
> A request today for backporting large changes to v4 has prompted me to
> take a look at where Squid-5 is in terms of being ready for branching.
>
> As of a few weeks ago it passed the criteria for feature count.
>
> There are 3 new major or higher bugs right now, 23 new ones in total
> already known. Which is small enough to accept as a starting point.
>
>
<https://bugs.squid-cache.org/buglist.cgi?bug_severity=blocker_severity=critical_severity=major_severity=normal_severity=minor_status=UNCONFIRMED_status=NEW_status=ASSIGNED_status=REOPENED=include=bug_severity%2Cversion%2Cop_sys%2Cshort_desc_id=7331=version%20DESC%2Cbug_severity%2Cbug_id=Squid_format=advanced=5>
>
>
>
> So as a tentative plan, lets say 2 weeks away:
>
> 27 Sep 2019
>
> For the branching Squid-5 and a v5.0.1 Beta release shortly after.
>
> Note: PR under review by the branching date will be ported should they
> complete that review and be merged during the v5 beta series.
>
>
> To get a late-PR exemption from the "no new features" or "no UI changes"
> policy please reply to this message with a brief (one-liner) description
> of the project and an ETA for PR review submission.
>
>


On 12/09/19, 2:59 pm, Alex Rousskov wrote:
>
> FWIW, below is a list of Factory projects that should be submitted for
> the official review soon. It is difficult to give a reliable ETA for
> individual project submission because it depends, in part, on the
> activity surrounding already submitted PRs, which is largely outside our
> control. Said that,
>
> * The projects closer to the top of the list have completed primary
> development and testing cycles; they are being massaged for the official
> submission. 10-15 days?
>
> * The projects in the middle of the list may need another internal
> polishing round or two, but no major rewrites are expected.
>
> * The projects at the very end of the list still need serious
> development work which may take 3-5 weeks or longer.
>

After eliding the PRs already submitted and/or merged we currently have
this list of Factory projects without any PR submitted:

>
> SQUID-392: Maintain cache metadata when updating headers
> SQUID-291: Support slow cache_peer_access ACLs
> SQUID-448: Critical collapsed forwarding regressions
> SQUID-216: log the number of request sending attempts
> SQUID-340: Detail TLS client handshake errors
> SQUID-357: Detail TLS server handshake errors
> SQUID-425: Bug 4906: src ACL mismatches when access logging TCP probes
> SQUID-464: %>connection_id logformat code
>

Of these, only 216, 291 and 464 appear to relate to the Squid API. But not
in a way that would prevent the beta series starting.

I will try to keep an eye out for these in the backports, but they still
have to be submitted and approved just like anything else.


The PRs already submitted have been marked to distinguish which will get
my attention during the v5 beta cycle, with the plan being to backport them
unless a major blocker is found that prevents them being merged.


Amos


Re: [squid-dev] PRs ready for merge

2019-10-11 Thread Amos Jeffries
On 11/10/19 11:41 am, Alex Rousskov wrote:
> Hi Amos,
> 
> I believe the following two PRs are ready to go in. I added the
> corresponding comments and labels to these PRs. I did not hear from you
> since then, and I do not know whether you are OK with these PRs going in
> or just unaware of my plans to merge them. The latter possibility is the
> reason I am sending this "heads up" email. My current plan is to clear
> them for merge in 24 hours unless I hear otherwise from you.
> 
>   #485: Re-enabled updates of stored headers on HTTP 304 responses
>   #483: Report context of level-0/1 cache.log messages
> 

Reviewed, briefly.

> 
> The following two PRs will be in the same position soon. You have
> commented on both, but it is not clear (to me) whether you are OK with
> them going in despite your no-vote comments.
> 
>   #474: Bug 4796: comm.cc !isOpen(conn->fd) assertion when rotating logs

I want to be very sure of the sequence being identical or better before
this goes in.


>   #490: Send HTTP/503 (Service Unavailable) error when lacking peers
> 

Will followup in the PR on this shortly.


Amos


[squid-dev] Squid-5 status update and RFI

2019-09-05 Thread Amos Jeffries
Hi all,

A request today for backporting large changes to v4 has prompted me to
take a look at where Squid-5 is in terms of being ready for branching.

As of a few weeks ago it passed the criteria for feature count.

There are 3 new major or higher bugs right now, 23 new ones in total
already known. Which is small enough to accept as a starting point.

 




So as a tentative plan, let's say 2 weeks away:

27 Sep 2019

For the branching Squid-5 and a v5.0.1 Beta release shortly after.

Note: PR under review by the branching date will be ported should they
complete that review and be merged during the v5 beta series.


To get a late-PR exemption from the "no new features" or "no UI changes"
policy please reply to this message with a brief (one-liner) description
of the project and an ETA for PR review submission.


Amos



Re: [squid-dev] Fix handling of tiny invalid responses in v4

2019-07-03 Thread Amos Jeffries
On 3/07/19 3:51 am, Alex Rousskov wrote:
> Hi Amos,
> 
> Do you plan to commit the following v5/master fix to v4? If that is
> your plan, then what is the current ETA and do you need help with
> porting or testing these changes to/in v4?
> 

I plan to spend most of the next 4 days on Squid catching up with all
the things:

* Next round of backports starts in about 9-10 hrs with the patches on
the "not yet" list here
. I am not
expecting any trouble from that lot.

* Then finishing up as many still outstanding PRs as I can so they can
be considered for the next round of backports.

* That final round of code backports starts in maybe 40hrs from now, all
going well.

* Next stable release with whatever has been completed over the weekend.

* Expect to see documentation Drafts appearing in board@ when I get to
the testing delay stage in about 2-3 days, with publication on Monday or
Tuesday following the release bundling.


What could do with help is checking the status of these PR:

 
  - Christos: is there really anything the author still has to change?

 
 
  - Alex: waiting your (re-)review

 
  - Eduard: is the polish going to happen?


PS. Yes, PRs 386 and 369 are at the top of my regular-PR TODO list. But
the Jeriko and similar changes necessarily needed my longer blocks of
time and attention more.


Amos


[squid-dev] HTTP body/payload Digest mechanism

2019-06-28 Thread Amos Jeffries
Hi all,

The HTTPbis Working Group has adopted the following feature
specification for delivering resource/representation Digest checksums
into HTTP.



I know some of you were interested earlier in relation to Content-MD5
(being formally retired by this feature) or similar Digests. Now would
be a good time to chime in on the HTTPbis mailing list or start with
implementation experiments based on the Draft to see how it works / fails.


Cheers
Amos


[squid-dev] Absence

2019-03-28 Thread Amos Jeffries
Hi guys,

I will be effectively offline for the next few days. There are some PRs
needing my review or re-review, as well as backports to v4. I intend to
get to them as soon as I am back.

Cheers
Amos


[squid-dev] Squid-5 status update

2019-01-27 Thread Amos Jeffries
Hi all,

 So, being January and fielding questions about when v5 will be released,
I have taken a look at the state of the trunk/HEAD/master code to see
whether or not there is enough change to be worth a new Squid series.

Right now things are looking close but not quite enough.


The details I am looking at right now:

 * 8 new functionality features. Either specifically named features
  that have their own release notes section, or a bunch of related
  UI setting changes that add together to be noteworthy.

  see  for the list as of today.

 Ideally what I look for is an arbitrary 10 features. So this is close
enough, but also has room to wait for more.


 * approx. 30k LOC unique to master/v5

Historically there have been around 100k LOC per year. v4 was an oddity,
having had to wait for compiler support and some major late-added features.

Higher LOC change tends to mean we have made a lot of progress on code
polishing and that the features added are actually significant. If we let it
get too high before beta, the expected bug count goes up likewise and
beta testing takes a lot longer.
 If the v3 past is a good indicator of future bug finds, then this 30k
LOC would already indicate 8-12 months of beta ahead.


 * the LOC changes appear to be roughly evenly split over the past two
years between v5 changes and v4 fixes.

This is quite a bit higher than previous cycles and a good indicator
that we are not really ready to stop working on those fixes and move on
to testing stability of the new v5 code.


 * ~6 weeks to accumulate changes sufficient to pass the arbitrary
watermark for new packaging to be worthwhile.

Ideally that series would be stable enough that it takes regular 8-week
turnarounds before focus could be switched to a new series.


These last two are the main reasons I have now to delay starting v5
beta. Next look in a few months to see if/how things have changed.

That said we are getting close, so please start considering what
features you want to finish off to get included.


Amos



Re: [squid-dev] [RFC] Do we want paranoid_hit_validation?

2019-01-14 Thread Amos Jeffries
On 9/01/19 4:01 am, Alex Rousskov wrote:
> On 1/8/19 1:50 AM, Amos Jeffries wrote:
>> On 8/01/19 4:58 pm, Alex Rousskov wrote:
>>> This particular validation does not require checksums or other expensive
>>> computations. It does not require disk I/O. The code simply traverses
>>> the chain of disk slot metadata for the entry and compares the sum of
>>> individual slot sizes with the expected total cache entry size. The
>>> validation is able to detect many (but not all) cases of cache index
>>> corruption.
> 
> 
>> Does it have to be a global directive like proposed?
> 
> No, it does not. Each validation check only needs access to the index of
> the storage where the hit object was found.
> 
> 
>> An option of cache_dir would seem better. That would allow admin to work
>> tune it to match their different cache types and object-size separation
>> (if any).
> 
> 
> Yes, this can be implemented as a cache_dir-specific (and, with even
> more work, also as a cache_mem-specific) option. Do you think it is a
> good idea to add this feature if it is controllable on individual
> cache_dirs basis?
> 

I think so, yes. Long-term I would like to collate these types of tests
into a separate cache management tool. But short of that happening,
having some way for Squid to do it is a good thing.

Amos


Re: [squid-dev] [RFC] Do we want paranoid_hit_validation?

2019-01-08 Thread Amos Jeffries
On 8/01/19 4:58 pm, Alex Rousskov wrote:
> Hello,
> 
> Squid has a few bugs that may result in rock cache corruption.
> Factory is working on fixing those bugs. During that work, we have added
> support for validating rock disk cache entry metadata at the time of a
> cache hit.
> 
> This particular validation does not require checksums or other expensive
> computations. It does not require disk I/O. The code simply traverses
> the chain of disk slot metadata for the entry and compares the sum of
> individual slot sizes with the expected total cache entry size. The
> validation is able to detect many (but not all) cases of cache index
> corruption.
> 
...

> What do you think?
> 

Does it have to be a global directive like proposed?

An option of cache_dir would seem better. That would allow admin to work
tune it to match their different cache types and object-size separation
(if any).
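For illustration, such a per-directory knob would presumably ride along
the existing cache_dir option-list syntax, something like the below.
The option name is purely invented here; no such option existed at the
time of writing:

```
# hypothetical syntax: enable the hit-validation check on one rock dir only
cache_dir rock /var/cache/squid-rock 4096 paranoid-hit-validation=on
cache_dir ufs /var/cache/squid-ufs 8192 16 256
```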

Amos


Re: [squid-dev] how i can make each user to use only specify port in squid proxy

2018-11-21 Thread Amos Jeffries
Please use the squid-users mailing list for help using Squid.

This mailing list is for developers/programmers discussion about the
Squid code internals and functionality changes to it.

<http://www.squid-cache.org/Support/mailing-lists.html>


On 21/11/18 12:55 am, WoWProxy wrote:
> I am starting to tunneling IPv6 with IPv4
>

There are hundreds, possibly thousands, of types of "tunnel". Squid
can tunnel, but you appear not to be using that functionality in any way.

Much of your explanation about what you are doing is written in vague
terms like this. It is not clear whether the problem you are currently
seeing is going to help you reach your actual goal or just a problem on
the way to an irrelevant situation. Please provide more precise details
about what you are doing when you re-post to squid-users. That will
greatly improve any assistance people can give you.


HTH
Amos Jeffries
The Squid Software Foundation


Re: [squid-dev] TLS 1.3 0rtt

2018-11-21 Thread Amos Jeffries
On 16/11/18 3:07 am, Marcus Kool wrote:
> After reading
> https://www.privateinternetaccess.com/blog/2018/11/supercookey-a-supercookie-built-into-tls-1-2-and-1-3/
> I am wondering if the TLS 1.3 implementation in Squid will have an
> option to disable the 0rtt feature so that user tracking is reduced.
> 

As the article mentions the issue is also part of TLS/1.2 and the
features behind it can already be configured to disable as needed. It is
unlikely that we would remove such a useful config option any time soon.


Also, it is worth stating that this type of tracking does not work
through a TLS proxy. The TLS session between client and proxy is not
shared with server and vice versa. The proxy<->server TLS session which
it might try tracking contains multiplexed traffic from many clients so
is not a reliable per-user tracker to the server.


Things get a lot less clear when SSL-Bumping since there is a mix of
OpenSSL and Squid code doing things and actions like peek/stare/splice
may require side effects of preventing TLS feature removal/disable.

It is an admin choice how and when to use such actions though, so again
this is already configurable if one understands what those actions do,
rather than just blindly throwing copy-paste config settings at the
proxy until something "works".

HTH
Amos


Re: [squid-dev] modify source code and change the name from "squid" to other name

2018-10-01 Thread Amos Jeffries
On 2/10/18 9:36 AM, --Ahmad-- wrote:
> just curious to do and tell my friends i have some thing uniqe :)
> 

Renaming source code is not unique. Squid-2.6 and Squid-2.7 were
actually a fork of the main Squid source code. "Lusca" is the name of a
proxy forked off Squid-2.7. "SquidNT" is another old one which was a
Windows rename of Squid-2.4 (or 2.5). There are others I forget exactly
the names of.

If you pay attention to the license details of Squid (in the CREDITS
file) you will notice from the first entry that the original Squid
itself was a renaming and extension of an even older proxy software
called 'cached' by the Harvest Project.


> is that a complex thing to be accomplished ?
> 

Your time so far would have been better spent gathering your friends on
a beach and showing them a handful of sand thrown in the air.

No handful of sand in history has ever been thrown and fallen the same
way as that handful will have. That action is far more complex, much
more truly unique, and just as useful as your requested change appears
to be.

If you are just looking for something to do and possibly impress others,
our Squid Project roadmap ( and
) and bugzilla
() have long lists of things that need to
be worked on.


> 
> do you need to know what before helping me ? isn’t it an open source code ?
> 

Squid is GPL licensed code. To change it you are required to comply with
the relevant GPL license in how you use the result.

One of those license requirements is that your code must be likewise
public and open source under the GPLv2 or later. So you are needing to
publicly acknowledge that it is Squid source code your changes are based
on, and exactly what you have changed to make it different from Squid.


At no point does the GPL license require anyone to spend their time
assisting you do any changes. And your friends are not likely to be
impressed with simply copying and running sed on some source code files.
They could do the same.

Alex and I respond to requests for help because it is in our own
separate interests to stay aware of what problems people are having with
Squid and what types of network environment it is being used for. So we
can make decisions about what things we need to focus on doing to keep
Squid being useful for the world at large.

Admins the world over have no regular need to rename random files and
words inside the source code we publish. So there is no interest in
spending our own precious time helping you with this particular request.


No one is preventing you posting, but the more this type of unexplained
request is repeated, the less interest anyone has in even reading your
emails, let alone answering and helping. One day you may have a really
urgent problem and nobody will look at the help request for days.

HTH
Amos


Re: [squid-dev] Converting squid to library

2018-08-05 Thread Amos Jeffries
On 05/08/18 18:57, Manju Prabhu wrote:
> Hi Amos,
> Sure, thanks. 
> 
> Initially, I am planning to try to use f-stack. Something, similar
> to 
> https://github.com/F-Stack/f-stack/blob/master/doc/F-Stack_Nginx_APP_Guide.md
> F-stack provides wrappers around POSIX APIs.\
> So, apart from squid and open-ssl, would I need to re-build anything else?
> 

AFAIK that is all. Assuming that what you asked for is actually what you
need.

Reading that f-stack reference and the root f-stack documentation I am
getting the distinct idea that what it is referring to as "networking"
is a series of WebSocket or HTTP messages delivering custom application
data around, with transfer-protocol compression (eg HTTP/2 HPACK)
pretending to be faster "packet speed".
 Is that correct? And are you actually needing Squid as the agent to
relay those messages (er, "packets")?


Amos


Re: [squid-dev] How to rewrite URL in squid proxy server according to client's custom request header?

2018-08-02 Thread Amos Jeffries
On 01/08/18 08:27, Abu Noman wrote:
> How can I rewrite the destination in Squid proxy server according to the
> client's request header?
> 

This is a usage question. Please followup in squid-users mailing list.

The answer is yes and no.

No - Squid does not do any re-writing. It calls a helper process or
script to determine if, when, and how re-writing is done.

see 


> *Details:*
> If I call an API like |GET www.google.com 
> HTTP/1.0\r\nGo:www.google.com.bd | 

That request is invalid, even in HTTP/1.0. Any HTTP agent receiving the
request should reject it, or at minimum reformat it to:

 GET / HTTP/1.0
 Host: www.google.com
 Go: www.google.com.bd


As you should see, there is no need to bother with this "Go" header.
Just send the correct Host header. The client's intention will be clear,
and the correct thing will be done regardless of which HTTP agent(s) are
processing the message. The message does not even have to reach the
mentioned "API" to be delivered to the correct server. An intermediary
closer to the client may perform the delivery via a more efficient route.

Using this valid message is _important_ since most HTTP messages go
through between 2 and 8 layers of proxying in the modern Internet.
Whether you can see those layers or not, they are there.

Amos


Re: [squid-dev] Converting squid to library

2018-08-02 Thread Amos Jeffries
On 02/08/18 18:32, Manju Prabhu wrote:
> Hi,
> I plan to use Squid for ssl-proxy in my project. However, I have my own
> data-path and TCP stack I want to try it out with for performance
> reasons. The TCP stack could be in user-space for example, if I use DPDK.
> 
> * Is there any potential pitfalls if I embark on this task?

The first one is that Squid is a collection of binaries and processes
working together to process traffic. It is not suitable for being a library.


> * Is it better to convert squid to a library and link it to my process
> along with DPDK (Option A)? Or is it better to try to link DPDK to squid
> (Option B)?

Neither if you can avoid it. Option B only if you have to.

Squid uses POSIX API for I/O. So if you are providing POSIX API from
your TCP stack it should be as simple as building Squid with the
appropriate ./configure CXXFLAGS, CFLAGS, and LDADD build options to
link your stacks library/objects.

If you have done some non-standard API you will have to write a mapping
between it and POSIX functions. Doing that in your own code simplifies
things considerably - especially for your code's prospects of being used
widely - but it can be patched into Squid if necessary.
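If the stack does expose POSIX-style calls, the build wiring amounts to
compiler and linker flags, roughly along these lines. All paths and
library names below are placeholders for whatever your stack actually
installs:

```shell
# Illustrative only: point the Squid build at the custom stack's
# headers and libraries instead of the system TCP stack.
./configure \
  CXXFLAGS="-I/opt/mystack/include" \
  CFLAGS="-I/opt/mystack/include" \
  LDFLAGS="-L/opt/mystack/lib" \
  LIBS="-lmystack"
make && make install
```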

Also, many TLS/SSL I/O operations are done through the system TLS/SSL
library. Not by Squid at all. So there is additional complication
rebuilding that library against your stack before Squid can use it.
Having the standard POSIX API which both can access is much easier and
better than any custom API.



> * With squid I see that separate threads are created to manage
> certificate mimicking etc. Do all of that get complicated with Option A? 

Not threads. Processes. Squid is running independent
binaries/interpreters, and forking itself sometimes as well. Thus your
Option A is not an option.


> 
> I apologize in advance for some open ended questions. Please point me to
> the right forum if these questions are not valid here.
> 

This is the right place.

Amos


Re: [squid-dev] Squid versioning

2018-07-30 Thread Amos Jeffries
On 31/07/18 05:11, Lubos Uhliarik wrote:
> Hey all, 
> 
> I wanted to ask, how is it now with squid versioning. Is configuration from 
> version 4.1 backward 
> compatible with version 4.0?

Maybe, but don't count on it. 4.0.z are betas where things are slightly
broken at points. 4.y (with y > 0) are the stable/production version(s)
resulting from those.


> Are you using semver (https://semver.org/) now?

If anything we have diverged further. We have removed the MINOR category
for stable releases, only using it for the betas.

Amos


Re: [squid-dev] Allowing the admin to decide if a specific DNS+ip is ok for caching.

2018-07-19 Thread Amos Jeffries
On 19/07/18 04:56, Eliezer Croitoru wrote:
> Hey Squid-Dev’s,
> 
>  
> 
> Currently Squid-Cache forces Host Header Forgery on http and https requests.
> 
> -  https://wiki.squid-cache.org/KnowledgeBase/HostHeaderForgery
> 

Forces? no. Prevents.

> Squid is working properly or “the best” when the client and the proxy
> use the same DNS service.
> 
> In the past I have asked about defining a bumped connection as secured
> and to disable host header forgery checks on some of these.
> 

Having a connection be bumped does not mean the requests decrypted from
that connection are meant for that server. DONT_VERIFY_PEER and similar
false "workarounds" are still very common things for admins to do.

A client or intermediary can as easily forge the SNI value on TLS setup
as a Host header in plain-text HTTP. The resulting problems in both
cases are the same.



> The conditions are:
> 
> -  Squid validates that the server certificate is valid against
> the local CA bundles (an admin can add or remove a certificate manually
> or automatically)
> 
> -  The admin defines an external tool that verifies and/or
> allows host header forgery to be disabled per request.
> 
>  
> 
> I am in the middle of testing 4.1 and wondering what is expected from
> 4.1 regarding host header forgery.
> 
> Was there any change of policy?
> 

No changes from Squid-3 are expected in terms of these checks. There may
be changes in TLS handling which decrypt more (or less) requests.

Any requests which *are* decrypted, and the initial CONNECT (from SNI),
are expected to be verified.
 TPROXY / NAT intercepted traffic is verified against the dst-IP of the
intercepted client TCP connection.
 Bumped and non-intercepted traffic (in strict verify mode) is verified
against the server-IP from the initial client CONNECT tunnel.
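At its core each of those checks is the same membership test: the
destination the client is really connecting to must appear among the
addresses that the proxy's *own* DNS lookup returns for the claimed
name. A stripped-down sketch (the function name is invented; Squid's
real check lives in the client-side request processing and is more
involved):

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Simplified stand-in for the Host-forgery check: the TCP-level
// destination (from the NAT/TPROXY lookup or the CONNECT tunnel) must
// match one of the addresses the proxy itself resolved for the
// Host/SNI name. A client-supplied name resolving elsewhere is forged.
bool destinationMatchesHost(const std::string &clientDstIp,
                            const std::vector<std::string> &resolvedIps)
{
    return std::find(resolvedIps.begin(), resolvedIps.end(), clientDstIp)
           != resolvedIps.end();
}
```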

Amos


Re: [squid-dev] Terminating ICAP requests for aborted HTTP requests

2018-07-11 Thread Amos Jeffries
On 12/07/18 03:46, Alex Rousskov wrote:
> On 07/11/2018 07:54 AM, Steve Hill wrote:
> 
>> the HTTP client had made a request which has been forwarded onto
>> the web server, the web server has started responding, Squid is sending
>> the response body to the RESPMOD ICAP service and is forwarding the
>> modified body to the client.  Part way through the download, the client
>> (or maybe even the server?) drops the TCP connection.  What should Squid
>> do?
> 
> As far as ICAP is concerned, Squid should give the service everything
> Squid has. Whether Squid continues to download the object is subject to
> various factors such as object cachability and quick_abort_pct settings.
> After giving everything, Squid should indicate a premature message
> termination (as discussed below).
> 
> 
>> My testing shows that the response body that is being sent to the ICAP
>> server is cut short by Squid, by terminating it with a zero-length
>> chunk.
> 
> Using last-chunk for aborted body termination is a bug. If Squid
> terminates the body, it should not use the standard last-chunk. Like you
> suggested below, Squid should either (half-)close the connection or
> negotiate the use of a chunk extension, both indicating a premature body
> termination.

But the chunking here means end of *ICAP* message. Not end of HTTP message.

The ICAP server should use the HTTP message headers to know the HTTP
message sizes when possible. eg Content-Length and Content-Range values
indicate whether the HTTP delivered is complete or not.

HTTP Transfer-Encoding messages are a problem though. In absence of a
proper ICAP signal we should probably abort the ICAP connection at the
end of what Squid has (or intends to deliver, in the case the server
connection is also being dropped rather than response absorbed).

If there *is* an ICAP signal then Squid should of course use that in all
cases of early halted HTTP content. Even when it uses the last-chunk to
say where the ICAP message ends and the next may begin.
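For background, the framing under discussion is ordinary chunked coding
as used by both HTTP and ICAP: each body piece travels as a hex size
line plus data, and a zero-size "last-chunk" ends the body. ICAP already
uses a chunk extension in one place (the "ieof" token on Preview
last-chunks), which is the kind of negotiated token being suggested here
for signalling an aborted body. A minimal encoder sketch, illustrative
only and not Squid code:

```cpp
#include <cstdio>
#include <string>

// Encode one body piece as a chunk: hex size line, CRLF, data, CRLF.
std::string encodeChunk(const std::string &data)
{
    char sizeLine[32];
    std::snprintf(sizeLine, sizeof(sizeLine), "%zx\r\n", data.size());
    return std::string(sizeLine) + data + "\r\n";
}

// The last-chunk that ends the message body. A chunk extension token
// (e.g. "; ieof" in ICAP Preview) is where an "aborted body" signal
// could be carried; a plain "0" just says "body complete".
std::string lastChunk(const std::string &extension = "")
{
    return "0" + extension + "\r\n\r\n";
}
```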

Amos


[squid-dev] Squid-3.5 future

2018-07-01 Thread Amos Jeffries
Hi all,

Now that Squid-4 has finally achieved a stable release it is time to
formally deprecate support for the Squid-3.5 series.

  As per policy Squid-3.5 is officially deprecated as of today.

I would normally release a final 3.5 tarball alongside the 4.1 tarball,
but due to time constraints I have not been able to this time.

So, the plan right now is to release that final 3.5 package next weekend
or soon thereafter. In the meanwhile I will be looking through the list
of backport candidates to see what looks reasonable. If anyone is aware
of patches in v4 which really should be in v3.5 please mention them as
followup to this message ASAP so I can give them closer consideration.

Keep in mind that patches touching say 100-ish lines or more are
starting to look "too big" for backport, and of course anything that
alters existing squid.conf or other UI behaviours is a no-go.


Cheers
Amos



Re: [squid-dev] Support lower case http/ spn format for realmd/adcli join support.

2018-06-27 Thread Amos Jeffries
On 28/06/18 08:24, Mike Surcouf wrote:
> Thanks Amos for your comprehensive reply..  open SSH requires lower case
> host/ and as you say windows doesn't seem to care so they solved it for
> that case but seems that uppercase is the convention for HTTP.
>   Do you have an official reference for HTTP/. As the official uppercase
> format of SPN for http protocol.i will then file a bug on the adcli repo.
> 


If I'm understanding the descriptions right it is
 .

with the SPN being "realm/principal"

6.1 says realm is case sensitive.

6.2 says principal is case insensitive and syntax may be of several
types, one of those being:

  principal = name '@' host

I am taking an educated guess that since the resulting syntax of those
would look like REALM/somen...@example.org that is what the SPN string
is based on.



The case of "HTTP" as in transport is RFC 7230. Specifically section 2.6
() where the exact
octets are prescribed:

"
 HTTP-name = %x48.54.54.50 ; "HTTP", case-sensitive
"

Anything else is non-compliant with HTTP and may contain arbitrary other
errors in both syntax and behaviour - handle at own risk, etc.


Amos


Re: [squid-dev] Support lower case http/ spn format for realmd/adcli join support.

2018-06-27 Thread Amos Jeffries
On 27/06/18 06:53, Mike Surcouf wrote:
> Correction
> 
>> supports lowercases all SPNs
> 
> should read 
> 
> lowercases all SPNs (you don’t have an option)
> 
> so it always produces http/hostn...@realm.com
> 
> This is a conscious decision by the adcli team
> 
> https://bugs.freedesktop.org/show_bug.cgi?id=84749
> 

I don't see any explicit decision by them to use only lower-case. Just
statements that AD accepts case-insensitive inputs so they don't care to
do anything special.


Case insensitivity is a Microsoft custom extension. It cannot be relied
on in non-MS software:
"
Service Principal Names (SPNs) are not case sensitive when used by
Microsoft Windows-based computers. However, an SPN can be used by any
type of computer system. Many of these computer systems, especially
UNIX-based systems, are case-sensitive and require the proper case to
function properly. Care should be taken to use the proper case
particularly when an SPN can be used by a non-Windows-based computer.

Refer this: http://technet.microsoft.com/en-us/library/cc731241(WS.10).aspx
"


Squid also does not parse these details itself. The library being used
by the helper is responsible for all processing of the local machine's
keytab. Squid only parses a token of opaque bytes from the HTTP message
headers and passes it as an opaque string to the auth helpers. Our
Kerberos helpers use several libraries, and the one which you are using
apparently has case sensitivity for the SPN.



On the technical side:

Kerberos documents just defer to the protocols where the elements of SPN
are sourced. So some segments in the SPN are case sensitive and others
are not, depending on what type of use the SPN is put.
 eg DNS defines hostname as insensitive, so that part is. Some auth
systems define realm as insensitive, others as case-sensitive - so that
part *might be* (or not. ouch!).


FWIW, following that deferral style - the HTTP protocol defines its
protocol name as case-sensitive and has a significant difference between
"HTTP" (transport / messaging syntax) and "http" (URL scheme/syntax,
possibly used over non-HTTP transports).

So technically / in theory:
 * if the SPN is for access to HTTP transport (as Squid SPN are)
   - then the "HTTP/" portion should be upper case only.

 * if the SPN is for use of http:// resource URLs (eg, as opposed to
ftp:// URLs fetched with HTTP)
 - it can be any case.


Squid does not go to that second URL-specific level of detail with
authentication and SPNs. Also, since one form is required to be upper
case and the other doesn't matter, going upper case would be the best
choice for us if we did normalize rather than handling them as opaque
strings anyway.
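Concretely, the two cases above would look like this (host and realm
names are examples only):

```
HTTP/proxy.example.org@EXAMPLE.ORG   # transport-level SPN (Squid's case): "HTTP" must be upper case
http/proxy.example.org@EXAMPLE.ORG   # scheme-level usage: accepted only by case-insensitive (e.g. MS AD) implementations
```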


HTH
Amos


[squid-dev] Squid-4.1 (stable)

2018-06-26 Thread Amos Jeffries

The latest Squid-4 beta has now passed 14 days with no new major bugs
being reported. That means I can start the final countdown for the
Squid-4.1 release.

Unless something new comes up I intend to bundle that release on 2018-06-30.


Amos


Re: [squid-dev] squid to assign dedicated ip to clients behind same network/router

2018-06-17 Thread Amos Jeffries
[ this dev list is not appropriate for proxy usage questions. Please
address questions and requests for help using Squid to the squid-users
mailing list. ]


On 12/06/18 07:48, desis wrote:
> 
> I have successfully installed squid server (On Centos) .. My servers has

Which version of Squid on what version of CentOS.

Squid versions older than the 3.5 series are not supported by us. AFAIK
the problem you describe does not happen with a correctly configured
Squid-3.5.


> five ip addresses . I have configured all 5 ip addresses for squid... so
> clients can connect with any ip address and with tcp_outgoing_address client
> will get same ip address from which ip address he is connecting.
> 
> But the problem is all my clients are behind a same router and having the
> same public ip address.

The joys of NAT and IPv4-only networks.

The only real solution to that is to upgrade your network such that it
does not NAT clients into the same IP address. You may require IPv6 to
achieve that.


> 
> Now the problem is .. Let see client one use server 1.1.1.0 ip address to
> connect squid first, he is getting server 1.1.1.0 ip address for his public
> ip.
> 
> Now Client two using server 2.2.2.0 ip address to connect squid , he is
> getting server 2.2.2.0 ip adddress for his public ip ...
> 
> But at this moment client's one public ip address is changing to 2.2.2.0 .

There is nothing Squid can do about a *client* IP address. All it can
do is *request* that the OS use a certain outgoing IP on its own connections.


Going forward you need to be aware that HTTP is a multiplexing protocol.
Any connection between proxy and a server can be used by any client who
needs content from that server.

This is how HTTP is designed to work. To multiplex connections and
maximize pipeline efficiencies, in order to reduce the pressures of port
number consumption. Without it proxies can flood the network with
short-lived TCP connections and consume all available ports on every
machine along the traffic path.


Amos


Re: [squid-dev] Squid test-suite / benchmarks

2018-06-17 Thread Amos Jeffries
On 17/06/18 20:36, Stoica Bogdan Alexandru wrote:
> Hi all,
> 
> 
> I’ve asked the same questions on the squid-user distribution list, but
> perhaps is better to ask the developers.
> 
>  
> 
> We’re a small research team interested in benchmarking Squid for a
> research project.
> 
> In short, we would like to exercise as much code as possible (i.e., good
> code coverage).
> 
> Do you have any suggestions on the which benchmarks to use? Or, even
> better, is there anything that Squid uses internally that can be shared
> for research?

The internal testing is pretty much what Alex and I already mentioned on
squid-users.

Others on the list may have extra ideas. Though if you don't get any
reply other than this you're not being ignored; it just means nobody has
any better ideas.


Cheers
Amos


Re: [squid-dev] Squid-4 status update

2018-06-09 Thread Amos Jeffries
Hi everybody,

So far as I am aware these are the only remaining issues blocking stable
release:


* Bug 4710 - crash with on_unsupported_protocol and eCAP

 This may already be gone. I have not been able to reproduce it in the
current v4 code. If we don't have confirmation that it is still relevant
I intend to ignore it for the release.


* Bug 4843 - GCC v8.0 support

 Awaiting QA re-review completion.


* A crash when "squid -k parse" is passed a misconfigured squid.conf.

 Awaiting QA review.


4.0.25 should be out in a few days for testing of the many current bug
fixes in v4 (hopefully the above included).


So, if there are any other issues that need attention before Squid-4
goes stable now is the time to mention them.


Thank you all for a LOT of hard work.


Amos


Re: [squid-dev] max_url : Pls, into squid.conf

2018-06-09 Thread Amos Jeffries
On 09/06/18 22:35, babajaga wrote:
> I get a quite a few error msgs like
> urlParse: URL too large (9182 bytes)
> in cache.log
> I suspect, they are most likely from some ads, however, they make the web
> page appear slow to complete rendering.
> 
> Hesitating to patch defines.h directly, I kindly ask for
> an additional parameter in squid.conf
> 

This is not going to happen, sorry. Unfortunately there are too many
points in the code allocating buffers for URLs which do not use the
MAX_URL definition.
For example, I am this very minute testing a patch to remove one more of
those bits of code, which I just found trying to stuff an 8KB URL into a
4KB buffer :-(.

We have an ongoing project to remove the URL length limits from within
Squid. Hopefully that will resolve the issue as it progresses without
the need for a config option. If you are using Squid-3.5 you may see a
reduction of these messages from Squid-4.


PS. Squid is known to be one of the most lenient web agents. Most origin
servers and other proxies have smaller URL restrictions. So whatever
application is trying to use >9KB URLs is unlikely to work generally
over the Internet even if we resolve the Squid complaints.

Amos


Re: [squid-dev] Use MAX_URL for first line limitation

2018-06-08 Thread Amos Jeffries
On 08/06/18 11:18, Alex Rousskov wrote:
> On 06/07/2018 04:13 PM, Eduard Bagdasaryan wrote:
> 
>> in %>ru Squid logs large and small URLs differently.  For example,
>> Squid strips whitespaces from small URLs, while keeping them for
>> large ones.
> 
> Is %ru logging consistent with regard to small and large URLs?
> 
> * If it is, should we use the same approach to make %>ru consistent?
> 

For a quick fix yes.

Longer term; all Squid code using char* buffers and MAX_URL for URL
storage is obsolete and needs fixing to use class URL (or an SBuf) instead.

Amos


Re: [squid-dev] PROXY protocol and TPROXY, can they go together?

2018-05-15 Thread Amos Jeffries
On 16/05/18 02:09, Eliezer Croitoru wrote:
> Hey Squid-Dev,
> 
> I am in the middle of writing a load balancer \ router (almost done) for
> squid with TPROXY in it.
> 
> The load balancer sits on the Squid machine and intercepts the connections.
> 
> I want to send Squid instances a new connection on a PROXY protocol
> enabled http_port but that squid will use TPROXY on the outgoing
> connection based on the PROXY protocol details.
> 
>  
> 
> Would it be possible? I think it should but not sure.
> 

Maybe. Since both pieces of software are on the same machine it should
get past the kernel protections against arbitrary spoofing.

You will have to check that BOTH the dst-IP:port and src-IP:port pairs
are correctly relayed by the PROXY protocol. If not, TPROXY will end up
with mangled socket state and undefined behaviour (probably breakage).


>  
> 
> My plan is to try and load balance connections between multiple squid
> instances\workers for filtering purposes and PIN each of the instances
> to a CPU (20+ cores Physical host).
> 
> How reasonable is this idea?

You don't need a custom LB. iptables is sufficient, or other firewalls
if you have a non-Linux machine.

 


You should be able to fit those LB lines into a normal TPROXY config.
Just replace the "-j REDIRECT" with the "-j TPROXY --tproxy-mark ...".
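For reference, a hedged sketch of what that normal TPROXY ruleset usually looks like, adapted from the common Squid TPROXY examples. The port numbers, marks, and table numbers here are illustrative only; adjust to your setup:

```shell
# Mark and accept packets belonging to sockets that already exist
iptables -t mangle -N DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT

# Instead of "-j REDIRECT --to-port ...", divert new port-80 flows with TPROXY
iptables -t mangle -A PREROUTING -p tcp --dport 80 \
    -j TPROXY --tproxy-mark 0x1/0x1 --on-port 3129

# Route the marked packets to the local stack
ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100
```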

Amos


Re: [squid-dev] [PATCH] ext_edirectory_userip_acl refactoring

2018-05-11 Thread Amos Jeffries
Opened PR 204 with the below changes...

On 10/05/18 02:42, Alex Rousskov wrote:
> On 05/09/2018 05:05 AM, Amos Jeffries wrote:
>> Proposed changes to this helper to fix strcat / strncat buffer overread
>> / overflow issues.
> 
> I have no objections overall.
> 
> * I am not excited about duplicating Ip::Address pieces, but such
> duplication cannot be prohibited while we do not allow Ip::Address into
> helpers.
> 
> * I am not excited about using so much error-prone low-level code
> instead of safer/modern approaches, but that is not enough to block
> these fixes, especially since they are confined to a helper.
> 
> 
> I cannot validate low-level code changes, but I did spot a few potential
> problems:
> 
> 
>> +if (strlen(dn) > sizeof(l->dn))
>> +return LDAP_ERR_OOB; /* DN too large */
> 
> If l->dn will be 0-terminated, then the condition should be ">=".
> 
> 
>> +memset(&want, 0, sizeof(struct addrinfo));
> 
> Using sizeof(want) is safer in such contexts.

done.

> 
> makeHexString() is very dangerous because it does not really check for
> bufb overflows despite (misleadingly) implying otherwise. It should be
> given two size-related parameters (e.g., lena and sizeb) so that it can
> be implemented safely.

Added the missing checks.

> 
> makeHexString() assumes that bufb buffer is 0-terminated. It should at
> least document that fact. It should also document that it _appends_ to
> bufb (rather than naturally copying the answer into the provided
> buffer). I suggest renaming this function into appendAsHex().
> 

Changed the internals instead. This was not supposed to be an append
action and no callers use it that way. So it now begins by setting the
first char of bufb to null and works from there.

> 
>> +struct addrinfo *dst = nullptr;
>> +if (makeIpBinary(dst, ip)) {
> ...
>>  }
>>  
>> +freeaddrinfo(dst);
> 
> makeIpBinary() API is not documented, but, AFAICT, freeing should be
> moved inside the above if-statement. You may also want to check whether
> dst is nil there. Given the current usage, I would change makeIpBinary()
> to take a single parameter and return a pointer instead.
> 

Done as suggested.

> 
> Many variables can (and, hence, should) be "const". Some should be
> eliminated completely (e.g., "err").
> 

Um, I'm not seeing any that are being changed by this patch and not
already const. err is now gone.

Amos


[squid-dev] [PATCH] ext_edirectory_userip_acl refactoring

2018-05-09 Thread Amos Jeffries
Proposed changes to this helper to fix strcat / strncat buffer overread
/ overflow issues.

The approach takes three parts:

* adds a makeHexString function to replace many for-loops catenating
bits of strings together with hex conversion into a second buffer.
Replacing with a snprintf() and buffer overflow handling.

* a copy of Ip::Address::lookupHostIp to convert the input string into
IP address binary format, then generate the hex string using the above
new hex function instead of looped sub-string concatenations across
several buffers.
 This removes all the "00" and "" strncat() calls and allows far
simpler code even with added buffer overflow handling.

* replace multiple string part concatenations with a few simpler calls
to snprintf() for all the search_ip buffer constructions. Adding buffer
overflow handling as needed for the new calls.


Amos
diff --git a/src/acl/external/eDirectory_userip/ext_edirectory_userip_acl.cc b/src/acl/external/eDirectory_userip/ext_edirectory_userip_acl.cc
index a7e31dbb0..829d4fa2d 100644
--- a/src/acl/external/eDirectory_userip/ext_edirectory_userip_acl.cc
+++ b/src/acl/external/eDirectory_userip/ext_edirectory_userip_acl.cc
@@ -66,6 +66,9 @@
 #ifdef HAVE_LDAP_H
 #include <ldap.h>
 #endif
+#ifdef HAVE_NETDB_H
+#include <netdb.h>
+#endif
 
 #ifdef HELPER_INPUT_BUFFER
 #define EDUI_MAXLEN HELPER_INPUT_BUFFER
@@ -713,11 +716,14 @@ BindLDAP(edui_ldap_t *l, char *dn, char *pw, unsigned int t)
 
 /* Copy details - dn and pw CAN be NULL for anonymous and/or TLS */
 if (dn != NULL) {
+if (strlen(dn) > sizeof(l->dn))
+return LDAP_ERR_OOB; /* DN too large */
+
 if ((l->basedn[0] != '\0') && (strstr(dn, l->basedn) == NULL)) {
 /* We got a basedn, but it's not part of dn */
-xstrncpy(l->dn, dn, sizeof(l->dn));
-strncat(l->dn, ",", 1);
-strncat(l->dn, l->basedn, strlen(l->basedn));
+int x = snprintf(l->dn, sizeof(l->dn)-1, "%s,%s", dn, l->basedn);
+if (x < 0 || sizeof(l->dn) <= static_cast<size_t>(x))
+return LDAP_ERR_OOB; /* DN too large */
 } else
 xstrncpy(l->dn, dn, sizeof(l->dn));
 }
@@ -777,24 +783,59 @@ BindLDAP(edui_ldap_t *l, char *dn, char *pw, unsigned int t)
 }
 }
 
+// XXX: duplicate (partial) of Ip::Address::lookupHostIp
+static bool
+makeIpBinary(struct addrinfo *&dst, const char *src)
+{
+struct addrinfo want;
+memset(&want, 0, sizeof(struct addrinfo));
+want.ai_flags = AI_NUMERICHOST; // prevent actual DNS lookups!
+
+int err = getaddrinfo(src, nullptr, &want, &dst);
+if (err != 0) {
+// not an IP address
+/* free the memory getaddrinfo() dynamically allocated. */
+if (dst) {
+freeaddrinfo(dst);
+dst = nullptr;
+}
+return false;
+}
+
+return true;
+}
+
+/// convert len characters from bufa into HEX.
+/// \retval N   length of bufb written
+/// \retval -1  buffer overflow detected
+static int
+makeHexString(char *bufb, const char *bufa, const int len)
+{
+static char hexc[4];
+*hexc = 0;
+
+for (int k = 0; k < len; ++k) {
+int c = static_cast<int>(bufa[k]);
+if (c < 0)
+c = c + 256;
+int hlen = snprintf(hexc, sizeof(hexc), "%02X", c);
+if (hlen < 0 || sizeof(hexc) <= static_cast<size_t>(hlen)) // should be impossible
+return LDAP_ERR_OOB;
+strcat(bufb, hexc);
+}
+return strlen(bufb);
+}
+
 /*
  * ConvertIP() -  
  *
  * Take an IPv4 address in dot-decimal or IPv6 notation, and convert to 2-digit HEX stored in l->search_ip
  * This is the networkAddress that we search LDAP for.
- *
- * PENDING -- CHANGE OVER TO inet*_pton, but inet6_pton does not provide the correct syntax
- *
  */
 static int
 ConvertIP(edui_ldap_t *l, char *ip)
 {
-char bufa[EDUI_MAXLEN], bufb[EDUI_MAXLEN], obj[EDUI_MAXLEN];
-char hexc[4], *p;
 void *y, *z;
-size_t s;
-long x;
-int i, j, t, swi;   /* IPv6 "::" cut over toggle */
 if (l == NULL) return LDAP_ERR_NULL;
 if (ip == NULL) return LDAP_ERR_PARAM;
 if (!(l->status & LDAP_INIT_S)) return LDAP_ERR_INIT;   /* Not initalized */
@@ -830,183 +871,23 @@ ConvertIP(edui_ldap_t *l, char *ip)
 l->status |= (LDAP_IPV4_S);
 z = NULL;
 }
-s = strlen(ip);
-*(bufa) = '\0';
-*(bufb) = '\0';
-*(obj) = '\0';
-/* StringSplit() will zero out bufa & obj at each call */
-memset(l->search_ip, '\0', sizeof(l->search_ip));
-xstrncpy(bufa, ip, sizeof(bufa));   /* To avoid segfaults, use bufa instead of ip */
-swi = 0;
-if (l->status & LDAP_IPV6_S) {
-/* Search for :: in string */
-if ((bufa[0] == ':') && (bufa[1] == ':')) {
-/* bufa starts with a ::, so just copy and clear */
-xstrncpy(bufb, bufa, sizeof(bufb));
-*(bufa) = '\0';
-++swi;  /* Indicates that 

Re: [squid-dev] [squid-users] tcp_outgoing_address and HTTPS

2018-03-20 Thread Amos Jeffries
On 21/03/18 00:11, Michael Pro wrote:
> squid-5 master branch, no personal/private repository changes,
> not using cache_peer's abilities, (if it matters - not using transparent
> proxying).
> 
> We have a set of rules (ACL's with url regex) for content, depending
> on which we make a decision for the outgoing address, for example,
> from 10.10.1.xx to 10.10.6.xx
> -log 1part {{{ -
> Acl.cc(151) matches: checked: tcp_outgoing_address 10.10.5.11 = 1
> Checklist.cc(63) markFinished: 0x7fffe2b8 answer ALLOWED for match
> FilledChecklist.cc(67) ~ACLFilledChecklist: ACLFilledChecklist
> destroyed 0x7fffe2b8
> Checklist.cc(197) ~ACLChecklist: ACLChecklist::~ACLChecklist:
> destroyed 0x7fffe2b8
> peer_select.cc(1026) handlePath: PeerSelector3438 found
> local=10.10.5.11 remote=17.253.37.204:80 HIER_DIRECT flags=1,
> destination #2 for http://iosapps.itunes.apple.com/...xxx...ipa
> ...
> peer_select.cc(1002) interestedInitiator: PeerSelector3438
> peer_select.cc(112) ~PeerSelector: 
> http://iosapps.itunes.apple.com/...xxx...ipa
> store.cc(464) unlock: peerSelect unlocking key
> 60081C0E0100 e:=p2IWV/0x815c09500*3
> AsyncCallQueue.cc(55) fireNext: entering AsyncJob::start()
> AsyncCall.cc(38) make: make call AsyncJob::start [call195753]
> AsyncJob.cc(123) callStart: Comm::ConnOpener status in: [ job10909]
> comm.cc(348) comm_openex: comm_openex: Attempt open socket for: 10.10.5.11
> comm.cc(391) comm_openex: comm_openex: Opened socket local=10.10.5.11
> remote=[::] FD 114 flags=1 : family=2, type=1, protocol=6
> -log 1part }}} -
> In the case of normal traffic (http), everything works fine, as it should.
> 

The difference to be aware of is that there is zero security on this
type of HTTP. So while it is better not to play with destinations - and
Squid's default is to go where the client wanted - it is permitted to go
elsewhere if a better source is found.


> In the case of HTTPS with traffic analysis (ssl_bump) we have such a picture:
> -log 2part {{{ --
> Acl.cc(151) matches: checked: tcp_outgoing_address 10.10.5.120 = 1
> Checklist.cc(63) markFinished: 0x7fffe2b8 answer ALLOWED for match
> FilledChecklist.cc(67) ~ACLFilledChecklist: ACLFilledChecklist
> destroyed 0x7fffe2b8
> Checklist.cc(197) ~ACLChecklist: ACLChecklist::~ACLChecklist:
> destroyed 0x7fffe2b8
> peer_select.cc(1026) handlePath: PeerSelector569 found
> local=10.10.5.120 remote=23.16.9.11:443 PINNED flags=1, destination #1
> for https://some.https.com/...xxx...zip

What PINNED means to Squid is that the client TCP connection is tied up
with some details related to some specific TCP server connection.

In this case the TLS crypto used during the bumping process took crypto
details from the client connection and gave them to the server, then
from the server and gave them to the client. Resulting in a forced
end-to-end relationship between the client and server for all traffic
over both those connections.
 The only thing Squid can do is serve some content from cache as
normal HITs, or if you specifically configure ICAP/eCAP services they can
modify the messages as they flow. Delivering the traffic to another
server is not permissible because the HTTP messages can be (and often are)
tied to the TLS crypto details as well, in ways that are not visible to
Squid.

For example, it is becoming very popular for the endpoints to crypto-sign
messages or embed a hash signature which can only be verified as valid
using details the server and client exchanged up front. No other server
would be able to send valid values and the client breaks if it is wrong.
 This kind of thing survives even when SSL-Bump'ing because of Squid
pinning, but does add the restrictions you found.


> 
> I understand that without analyzing the traffic and not knowing the
> final goal for the beginning, we can not manage the process further.
> Question: how can we break the established channel (unpinn it) along
> the old route and establish a new channel along the new route, when we
> already know how.

There are three possibilities that I am aware of - in no particular order:

1) An ICAP service can do whatever it pleases with requests it receives.
We hold no responsibility for anything happening there and I publicly
advise against playing with the crypto that way - the above issues are
the least of the problems to be faced.


2) It is technically possible to make Squid open a CONNECT tunnel
through an HTTP peer proxy to the origin instead of going there
directly. The only thing preventing this is nobody writing the necessary
code.

It has been on my (and many others) wishlist for a long while but still
nobody has been able to work on it. Any assistance towards getting that
coded is very, very welcome.


3) The client-first type of bumping does not involve any server crypto.
This is *highly* unsafe and often encounters problems like the ICAP
approach and for all the same reasons.

BUT that said, if you are 

Re: [squid-dev] Percent-encoded URLs

2018-03-10 Thread Amos Jeffries
On 11/03/18 11:36, Eduard Bagdasaryan wrote:
> Hi Squid developers,
> 
> 
> I need your competent help with the following issue.
> 
> While working on some public key generation issues I noticed that Squid
> does not decode percent-encoded URLs (at least before creating public
> keys). While trying to understand whether it is correct, I
> searched RFC7230 family for proxy-related MUST requirements but
> unfortunately did not find them. Another RFC3986 p. 6.2.2.2. describes
> 'percent-encoding normalization' of unreserved characters, but this is
> not a 'MUST' requirement. So, at first glance, Squid does not violate
> RFCs in this case. However, the fact that two equivalent URLs (with and
> w/o percent encoding) are treated differently may cause some
> confusion: for example, a 'DELETE' for such equivalent URL would fail.
> 
> So my questions are:
> 
> * are there any percent-encoding requirements for proxies?
> 

AFAIK there are none specific to proxies. The client and server
requirements should be used on the relevant received or sent URLs.

IMO the decode should be done in URL::parse() method, and a re-encode
should be done in the getter methods as relevant for each URL section
(they are different, based on the different invalid-char sets).

FYI: When attempting to do that I was overridden by the QA requirement
that URLs "must not be changed" by Squid. The natural side effect is the
caching problem you describe, along with a DoS vulnerability which
apparently nobody in the "real world" cares about.


> * does Squid violate them?
> 

Squid complies with RFC 1738 (not RFC 3986) currently.


Cheers
Amos


[squid-dev] Fedora rawhide issues

2018-03-04 Thread Amos Jeffries
FYI: the Fedora rawhide build node is finding compile issues due to its
new GCC-8. I am working on fixes for those, but some will require
refactoring which may take a while.

Amos



[squid-dev] Bug 4710 resolution

2018-02-28 Thread Amos Jeffries
Regarding bug 4710 <https://bugs.squid-cache.org/show_bug.cgi?id=4710>.

Is anyone working on a proper fix for this bug yet?

Unless there is other work which can resolve it relatively soon I am
going to start looking into a workaround of simply not passing eCAP any
details for the problem transactions for Squid-4.

Cheers
Amos


Re: [squid-dev] [RFC] http(s)_port TLS/SSL config redesign

2018-02-01 Thread Amos Jeffries
The final part of the proposal quoted below are now PR'd for audit.

There are several differences from the initial proposal;

* the ServerOptions::signingCa class changes and new default for
generate-host-certificates have allowed a far simpler back-compat check
using the non-existence of a filename by generate-host-certificates=
instead of relying on cert CA checking.
 That CA checking should still be done, but is no longer a required part
of this project.

* generation of reverse-proxy certificates is happening in a separate
parallel project.


Amos


On 20/07/17 13:27, Amos Jeffries wrote:
> Hi all, Christos and Alex particularly,
> 
> I have been mulling over several ideas for how to improve the config
> parameters on the http(s)_port to make them a bit easier for newbies to
> get right, and pros to do powerfully cool stuff.
> 
> 
> So, the most major change I would like to propose is to move the proxies
> CA for cert generation from cert= parameter to
> generate-host-certificates= parameter. Having it configured with a file
> being the same as generate =on and not configuring it default to =off.
> 
> 
> The matching key= and any CA chain would need to be all bundled in the
> same PEM file. That should be fine since the clients get a separate DER
> file installed, not sharing the PEM file loaded into Squid.
> 
> That will stop confusing newbies have over what should go in cert= for
> what Squid use-case. And will allow pros to configure exactly which
> static cert Squid uses as a fallback when auto-generating others -
> simply by using cert= in the normal non-bumping way.
> 
> Also, we can then easily use the two sets of parameters in identical
> fashion for non-SSL-Bump proxies to auto-generate reverse-proxy certs
> based on SNI, or use a fallback static cert of admins choice.
> 
> Bringing these two different use-cases config into line should vastly
> simplify the complexity of working with Squid TLS certs for everybody,
> including us in the code as we no longer have multiple (8! at least)
> code paths for different cert= possibilities and config error handling
> permutations.
> 
> 
> For backward compatibility concerns with existing SSL-Bump configs I
> think we can use the certificate CA vs non-CA property to auto-detect
> old SSL-Bump configs being loaded and handle the compatibility
> auto-upgrade and warnings. The warning will be useful long-term and not
> just for the transitional period.
> 
> 
> Now would also be a marginally better than usual time to make the change
> since the parameters are migrating to tls-* prefix in v4 and have extra
> admin attention already.
> 
> 
> Amos


Re: [squid-dev] Introduction

2018-01-22 Thread Amos Jeffries
On 20/12/17 02:11, Daniel Berredo wrote:
> Hello all,
> 
> My name is Daniel Santos and I am a DevOps in Brazil. I am working on a
> Hotspot Captive Portal project using Squid and need to be to able to
> evict an user from the Auth Cache before its ttl expired.
> What would be the best way to start on a proper PR? Is there any dev
> guidelines I should be aware of?
> 
> I am thinking about adding a new method to the CredentialsCache class
> ("evict", for example) and somehow make squid able to respond to a
> variation of the "PURGE" method. Than I would be able to use squid
> client to evict users from Auth Cache.
> 
> Is there anyone that could give me some directions on how to do this?
> 
> Thanks in advance,
> Daniel
> 

Hi Daniel,

 Sorry for the delay your post got stuck in our moderation queue. Please
note that this list now has a normal mailing list subscription process.
You subscribe with the form at
<http://lists.squid-cache.org/listinfo/squid-dev> and follow the bot's
instructions.


Information for developers about Squid and the processes used by the
Squid Project is all linked from
<https://wiki.squid-cache.org/DeveloperResources>.


A feature similar to what you describe has been on the wishlist for a
very long time now. Do not go to the effort of a whole new HTTP method,
or anything like PURGE. The CacheMgr interface already has most of the
HTTP message functionality in place to do this type of thing through GET
or POST.

It looks like the external ACL cache is still using the old C-style
dump() code. So it will first need converting into a class inheriting
from Mgr::Action. Then processing added to parse "user=X" tokens from
the URL query-string, and to act on the value found.


Before you go to too much trouble, is there a specific reason why you
are considering this approach instead of just setting shorter TTLs on
the details the helper is supplying Squid?
Be aware the TTL in Squid is simply how often it asks the helper for
updates on the validity. It does *not* relate to when those credentials
expire except as a _maximum_ time until Squid notices actual expiry.


Amos Jeffries
The Squid Software Foundation


Re: [squid-dev] RFC: Squid ESI processor changes

2018-01-22 Thread Amos Jeffries
On 23/01/18 03:10, Adam Majer wrote:
> On 01/18/2018 04:16 PM, Amos Jeffries wrote:
>> The Squid team are planning to remove the Custom XML parser used for ESI
>> processing from the next Squid version.
> 
> For next squid version, do you mean 5.x? next 4.x release? or stable
> 3.5.x release?

4.0.23 has a tentative removal to see how it goes.

> 
> proposed new default? libxml2?

Yes and no. Auto-detected from libxml2 or libexpat in that order of
preference, relative to which of them your Squid is built against
(either, both or none).

NP: That preference is based purely on what their popularity and active
development looked like last week.

Amos


Re: [squid-dev] Squid on Windows

2018-01-08 Thread Amos Jeffries

On 09/01/18 15:56, Lei Wen wrote:

Hi everyone,

This is Lei Wen, I am from Microsoft Azure team.

We are seeking a solution for an on-host transparent proxy for containers 
with Squid on Windows.


We already tried Linux, and by using iptables traffic can be redirected 
to the squid port (e.g. 3128).


We want to know what we need to do to enable transparent proxying on the 
Squid side on Windows. On Linux, --enable-linux-netfilter enables 
transparent proxying.


Hi Lei,

For NAT interception, Squid needs an interface from the OS to lookup NAT 
table mappings given either the accept() provided IP:port pair(s) and/or 
the socket handle. The API needs to provide the original dst-IP:port 
details the client used prior to the NAT alterations.


As far as I/we have been able to tell so far Windows does not provide 
any such interface for use by applications running in user-space like 
Squid. Once an interface is found or created adding a lookup function to 
Squid using the API should be fairly simple.


There have been several attempts that I'm aware of to create custom 
network drivers for Windows. But those turned out to be very much too 
slow and required asynchronous operations inside the preferrably 
synchronous NAT lookup.



An alternative API to look for is TPROXY. But, I've not seen or heard of 
anything like that either for Windows.



Amos Jeffries
The Squid Software Foundation


Re: [squid-dev] OpenSSL 1.1 support

2017-12-19 Thread Amos Jeffries

On 19/12/17 22:04, Adam Majer wrote:

On 12/18/2017 06:17 PM, Amos Jeffries wrote:

On 19/12/17 04:48, Adam Majer wrote:

Hi,

Is there a plan of supporting OpenSSL 1.1 in squid 3.5.x branch?



Not currently. Some of the config changes the library imposes may be a
bit surprising for a stable release.


If you are self-building to get SSL-Bump support I recommend trying to
use Squid-4 anyway. It should be stable enough for most installations
and has better SSL-Bump and related behaviours.


Actually, the reason I'm asking is OpenSUSE Tumbleweed has migrated away
from OpenSSL 1.0 to 1.1. Is there a current timeline when 4.x branch
will become stable?


6-12 months ago was the plan. :-(




Is there a list of tasks that need to be fixed for 4.x branch to be
considered stable?


<https://wiki.squid-cache.org/ReleaseProcess#General_Release_Process_Guidelines>

We are currently stuck at #3 in that process with a few major bugs 
preventing reaching #4.

(the Bugzilla "Major 4.x" saved search: the open blocker/critical/major bugs against Squid 4.)

Some of those already have workarounds in v4 and so are planned to 
ignore for purposes of declaring stability. But a full fix for any of 
them (and any other bug) is of course very welcome.


Next release on my calendar is ~6th January. So re-evaluation of all the 
pieces will be happening across the week prior.


Amos


Re: [squid-dev] OpenSSL 1.1 support

2017-12-18 Thread Amos Jeffries

On 19/12/17 04:48, Adam Majer wrote:

Hi,

Is there a plan of supporting OpenSSL 1.1 in squid 3.5.x branch?



Not currently. Some of the config changes the library imposes may be a 
bit surprising for a stable release.



If you are self-building to get SSL-Bump support I recommend trying to 
use Squid-4 anyway. It should be stable enough for most installations 
and has better SSL-Bump and related behaviours.


Amos


Re: [squid-dev] 回复: 回复: squid3.5.27 can't show correctly https website.howcan i certain the wrong area in code.

2017-11-12 Thread Amos Jeffries

On 13/11/17 13:54, G~D~Lunatic wrote:
Thank you. I'm a beginner indeed. Firstly, I use squid 3.5.27 as a 
transparent proxy. With the proxy, I access some HTTPS websites like 
www.hupu.com. But the webpage does not show correctly.  There are some 
similar websites such as https://www.zhihu.com , https://www.jd.com/ and 
mail.163.com (163.com can't be accessed at all, not just shown 
incorrectly). So I want to know where the problem is or how to deal with 
it. Need your advice or help.




Okay that is better. But please explain the "not show correctly".

 What is not "showing" ?
 How should it be showing?
 What HTTP(S) transactions are involved with fetching that problematic 
content?

 How do those transactions appear to be going wrong?
 - not happening at all?
 - error messages?
 - unexpected status coming out of the proxy?

Details, details and lots more details :-)

Amos


Re: [squid-dev] 回复: squid3.5.27 can't show correctly https website.howcan i certain the wrong area in code.

2017-11-10 Thread Amos Jeffries

[ Please keep replies going to the mailing list. ]

On 10/11/17 21:17, G~D~Lunatic wrote:

very thank you .  What is the TLS/SSL certificate helper?


I mentioned it only because the debugs() lines you quoted were talking 
about a helper - which is a particular *type* of helper program...



Because there were some problems when I accessed several HTTPS websites, 
I thought I could add some debug messages to find where the problem is.



Hold on. This is the first mention you have of any problem happening, 
after starting many threads asking how to debug Squid.


Please describe the details of these "some problems" that are happening. 
Then what you have done so far to look into them, and what you found out 
so far from those actions.





So I just want to add and view debug information.
Do you mind telling me more details on how squid helps clients access 
HTTPS websites, and which source files I should focus on?
If squid asks openssl to parse the certificate it received, then where 
does it call the openssl interface API?


Unfortunately what you have been asking about is a whole complicated 
protocol and even more complicated code spread over multiple programs 
and libraries. There is no "just" debug and fix the problem. You have to 
know what pieces to look at first and what those pieces are supposed to 
look like.


You seem to be very much a beginner with these things and that means it 
is probably going to be very hard to do it yourself right now.


To give you proper help we have to know what you are trying to do - not 
just that you are trying to debug something unspecified, but what the 
problem *really* is that you are trying to solve.



Amos


Re: [squid-dev] squid3.5.27 can't show correctly https website.how can i certain the wrong area in code.

2017-11-09 Thread Amos Jeffries

On 09/11/17 15:19, G~D~Lunatic wrote:
i used squid 3.5.27. The website is www.hupu.com. Several websites 
like this can't be shown correctly.

i use command "./squid -k debug". The result like
2017/11/08 16:06:03.996 kid1| 41,5| AsyncCall.cc(38) make: make call 
MaintainSwapSpace [call49573]
2017/11/08 16:06:03.996 kid1| 47,5| ufs/UFSSwapDir.cc(448) maintain: 
space still available in /usr/local/squid/var/cache/squid
2017/11/08 16:06:03.996 kid1| 41,7| event.cc(322) schedule: schedule: 
Adding 'MaintainSwapSpace', in 1.00 seconds
2017/11/08 16:06:03.996 kid1| 41,5| AsyncCallQueue.cc(57) fireNext: 
leaving MaintainSwapSpace()


i can't understand because i thought debug message come from following 
codes.
The output you see on the screen with that command is the debug output 
from the "squid" process you just ran, which only signals the 
already-running proxy process to start debugging.
 The output from the running Squid proxy is in cache.log. The instant 
it gets the debug signal that log should start to fill with its details, 
until it exits or gets another signal to disable the debugging again.



The above log entries do not come from, and have nothing to do with the 
code lines you quoted below.



debugs(33, 3, "Connection gone while waiting for ssl_crtd helper reply; 
helper reply:" << reply);
debugs(33, 5, HERE << "Certificate for " << sslConnectHostOrIp << " was 
successfully recieved from ssl_crtd");


how can i certain the wrong area in code.




These debugs() lines you quoted are part of the TLS/SSL certificate 
helper API code. So they only get output when that code runs.


* The first debugs() line is in the event of a ssl_crtd helper 
disconnecting/terminating/crashing unexpectedly.


This is an error situation, so you should never see it anywhere in your 
logs. If it does happen it means the helper crashed or something equally 
bad. So start debugging the helper itself, not Squid.


* The second debugs() line is in the event of a certificate being 
successfully received by Squid from that helper.


Obviously that should happen a lot and is not much use other than to 
identify timing of when things happened in the TLS/SSL processing.



Amos

