On 12/06/11 00:57, Alex Rousskov wrote:
On 06/10/2011 02:00 PM, Amos Jeffries wrote:
On 11/06/11 05:40, Alex Rousskov wrote:
On 06/04/2011 08:05 AM, Amos Jeffries wrote:
   * re-constructed the Comm::Connection handling in ICAP after
max-connections patch. There is a pconn key problem now.

The new model of pconn pool requires the remote IP:port for the key. ICAP
does not have any IP visibly available in closeN(), where 3.HEAD
pre-emptively purges idle pconns down to a smaller max. So this patch
breaks that optimization. Existing connections will still be closed
without re-pooling as requests finish with them.
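
(For illustration only — a minimal sketch with invented names, not actual
Squid code, of why an IP:port-keyed pool cannot be purged by a caller that
only knows the configured host name:)

#include <map>
#include <sstream>
#include <string>
#include <vector>

// Hypothetical sketch, not actual Squid code: an idle pool keyed on
// the remote endpoint. Purging requires forming that key, which a
// caller holding only the configured host name (as ICAP's closeN()
// path does) cannot do.
struct IdlePoolSketch {
    std::map<std::string, std::vector<int> > idle; // "ip:port" -> idle FDs

    static std::string key(const std::string &ip, unsigned short port) {
        std::ostringstream os;
        os << ip << ':' << port;
        return os.str();
    }

    // close up to n idle connections to one known remote endpoint
    void closeN(int n, const std::string &ip, unsigned short port) {
        std::vector<int> &fds = idle[key(ip, port)];
        while (n-- > 0 && !fds.empty())
            fds.pop_back(); // real code would close() the FD first
    }
};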

I don't see this as a major issue that should block the patch merge, but I
would like to fix it ASAP if anyone can point out a simple change to do so.


=== modified file 'src/adaptation/icap/ServiceRep.cc'
+    // XXX: this whole feature is based on the false assumption that a service only has one IP
       setMaxConnections();

I disagree with this XXX. The Max-Connections feature assumes that an
ICAP service with multiple IPs can pool all its connections together
because they are all going to the same service. It does not assume that
a service cannot have more than one IP address (although that seems to
be rather rare for other reasons).

OPTIONS appears to be defined *globally* for the service by one instance.
With details like features, this is reasonable.
With details like network tuning, it assumes a precise monoculture
cluster or a singleton instance.


Agreed. This is just how ICAP works. There is no XXX (i.e., problem or
bug in Squid) here.

It would be rather difficult/costly to correctly implement a single ICAP
service that uses multiple IP addresses, supports Max-Connections, and
runs on multiple boxes. There are other ICAP features that are difficult
to support in distributed services. In my experience, folks tend to
load-balance independent ICAP services rather than creating one
distributed ICAP service.


Hmm, okay, dropping the XXX at least.


=== modified file 'src/adaptation/icap/ServiceRep.cc'
       const int excess = excessConnections();
       // if we owe connections and have idle pconns, close the latter
+    // XXX: but ... idle pconn to *where*?

Similar to the above, the answer is "to the service". All connections to
the service are treated the same. Why is that wrong?

So, for consideration, an example:
   "The Service" is 10 pieces of hardware, each with a limited max
capacity of 1000 connections.

You are describing a setup that ICAP Max-Connections does not support
and ICAP, in general, was not designed for. So the questions below
simply do not have answers other than "change your design" or "do not do
it".


Does the OPTIONS Max-Connections contain 1000 or 10,000?

... if it is 10,000, is it okay for all 10,000 to go to just one of
those boxes?

... if it is 1000, is it okay for 9 boxes to be bypassed as
"overloaded" while they sleep with 0 connections?

... what happens when we plug in 5 more boxes with twice the individual
capacity?

... what happens if someone typos "100" in just one instance's limit config?


With the new design of the idle pool indexed by destination IP, we can
easily avoid all of these problems and maximize the total service capacity.
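
(A rough sketch, with invented names, of the per-box accounting that
by-IP indexing would permit — an illustration of the claim above, not
actual Squid code:)

#include <map>
#include <string>

// Hypothetical sketch with invented names: with connections tracked
// per destination, each box can be capped at its own limit, so one
// busy box neither absorbs the whole quota nor starves the others.
struct PerBoxLimits {
    std::map<std::string, int> inUse;   // "ip:port" -> open connections
    std::map<std::string, int> maxConn; // per-box limit

    bool mayOpen(const std::string &box) const {
        const auto limit = maxConn.find(box);
        if (limit == maxConn.end())
            return true; // no limit known for this box
        const auto used = inUse.find(box);
        const int open = (used == inUse.end()) ? 0 : used->second;
        return open < limit->second;
    }
};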

Except we break ICAP, which does not have a concept of a service IP
address (at this level). I do not know what problem the by-IP pool design
solves for other protocols, but it seems to not be needed for ICAP.


=== modified file 'src/adaptation/icap/ServiceRep.cc'
       if (excess && theIdleConns.count() > 0) {
           const int n = min(excess, theIdleConns.count());
           debugs(93,5, HERE << "closing " << n << " pconns to relieve debt");
-        Ip::Address anyAddr;
-        theIdleConns.closeN(n, cfg().host.termedBuf(), cfg().port, NULL, anyAddr);
+        theIdleConns.closeN(n, Comm::ConnectionPointer(), cfg().host.termedBuf());
       }

Can you adjust PconnPool::pop() so that if destLink is NULL, any
destination IP would match? If yes, then the above would still work as
intended, I think.

Possibly. Provided we drop the earlier-discussed ideas of merging the
ICAP and HTTP connection pools.

Merging in what way? We already use the same pooling classes for both
ICAP and HTTP, right?

We have exchanged a few emails about using one PconnPool set and using it
to push/pop all idle TCP links. With ICAP using a different type of key
than HTTP, this becomes a dead idea.


Also, what would break if NULL destLink matches any IP for both ICAP and
HTTP?

IP is the key used to select which pool gets used for pop()-'n'-close().
At present, each service "domain" is used as the key.


(Perhaps we should use an "any IP" destLink instead of a NULL destLink to
make this less of a special case and closer to regular IP semantics?)

It would have to be that way.
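
(A minimal sketch of the wildcard-pop idea under discussion, with invented
names and a string key standing in for the real address types — not a
proposed implementation:)

#include <map>
#include <string>
#include <vector>

// Hypothetical sketch of the "any IP" wildcard idea: an unspecified
// destination matches every sub-pool, so a caller without a remote IP
// can still pop (or close) idle connections.
struct PoolsSketch {
    std::map<std::string, std::vector<int> > byDest; // "ip:port" -> idle FDs

    // pop an idle FD; an empty dest stands in for "any IP"
    int pop(const std::string &dest) {
        if (!dest.empty())
            return popFrom(byDest[dest]);
        for (auto &entry : byDest) { // wildcard: take from any sub-pool
            const int fd = popFrom(entry.second);
            if (fd >= 0)
                return fd;
        }
        return -1;
    }

private:
    static int popFrom(std::vector<int> &fds) {
        if (fds.empty())
            return -1;
        const int fd = fds.back();
        fds.pop_back();
        return fd;
    }
};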




I have the idea we should make each Service have its own idle pool,
dropping the use of a shared ICAP PconnPool. That way none of the
operations has to bother with keys and hashing calculations.

What think you?
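
(A minimal sketch, with invented names, of what such a keyless per-service
idle pool might look like:)

#include <vector>

// Hypothetical sketch: each ServiceRep owns a plain list of its own
// idle connections, so pushing, popping and purging need no hash key.
class ServiceIdlePool {
public:
    void push(int fd) { idle_.push_back(fd); }

    int pop() {
        if (idle_.empty())
            return -1;
        const int fd = idle_.back();
        idle_.pop_back();
        return fd;
    }

    // purge up to n idle connections (e.g., to honor Max-Connections)
    void closeN(int n) {
        while (n-- > 0 && !idle_.empty())
            idle_.pop_back(); // real code would close() the FD first
    }

private:
    std::vector<int> idle_; // FDs of idle connections to this service
};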


=== renamed file 'src/ConnectionDetail.h' => 'src/comm/Connection.h'
+inline std::ostream &
+operator << (std::ostream &os, const Comm::Connection &conn)
+{
+    os << "FD " << conn.fd << " local=" << conn.local <<
+        " remote=" << conn.remote << " flags=" << conn.flags;
+#if USE_IDENT
+    os << " IDENT::" << conn.rfc931;
+#endif

Perhaps we should avoid printing undefined/unset members, to reduce the
amount of noise in the debugging logs?

Sure.

* FD -1 was kind of nice for seeing closed connections. Do we want to drop
that?

I think we should drop printing negative FD. The connection is closed
when it shows no FD so we are not losing any information here.

This is not a big deal, of course, and, eventually, we should add
Connection::id which will always be there.

Hmm, okay, I will give it a trial.
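
(A minimal sketch of what that trial might produce — invented stand-in
types, not the committed code: print only set members and omit negative
FDs, as discussed above:)

#include <iostream>
#include <string>

// Hypothetical stand-in for Comm::Connection, for illustration only.
struct ConnSketch {
    int fd = -1;
    std::string local;  // stand-in for the local Ip::Address
    std::string remote; // stand-in for the remote Ip::Address
    int flags = 0;
};

inline std::ostream &
operator <<(std::ostream &os, const ConnSketch &conn)
{
    if (conn.fd >= 0) // closed connections show no FD at all
        os << "FD " << conn.fd;
    if (!conn.local.empty())
        os << " local=" << conn.local;
    if (!conn.remote.empty())
        os << " remote=" << conn.remote;
    if (conn.flags)
        os << " flags=" << conn.flags;
    return os;
}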


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.8 and 3.1.12.2
