Thanks,
This cleared it out.
I saw the patch on Bugzilla but wasn't sure about it.
Eliezer
On 12/1/2012 1:39 AM, Amos Jeffries wrote:
On 30/11/2012 5:59 p.m., Eliezer Croitoru wrote:
Bump.
I was wondering if there was any progress on this matter?
The xaction-orphans patch has been picked up separately by one Daniel B.
and is being tracked with
http://bugs.squid-cache.org/show_bug.cgi?id=3688, with only good results
so far.
Bump.
I was wondering if there was any progress on this matter?
Thanks,
Eliezer
On 9/11/2012 5:00 PM, Alex Rousskov wrote:
On 09/10/2012 05:51 PM, Amos Jeffries wrote:
> On 11.09.2012 03:06, Alex Rousskov wrote:
Here is the sketch for connect retries on failures, as I see it:
default:
++failRetries_; // TODO: rename to failedTries
// we failed and have to start from
On 11.09.2012 03:06, Alex Rousskov wrote:
On 09/09/2012 02:30 AM, Amos Jeffries wrote:
> IIRC the race is made worse by our internal code going timeout event
> (async) scheduling a call (async), so doubling the async queue delay.
> Which shows up worst on heavily loaded proxies.
True. Unfortunately, the alternative is to deal with ConnOpener
On 09/07/2012 02:32 AM, Alexander Komyagin wrote:
> OK. I agree. It sounds reasonable to avoid extra code complexity
> and CPU consumption in order to gain performance for the common case.
I am very glad that we are in agreement here. Will you work on a patch
to fix ConnOpener?
OK. I agree. It sounds reasonable to avoid extra code complexity
and CPU consumption in order to gain performance for the common case.
However, as I stated earlier, the comm.cc problem (actually a semantics
problem) persists. I think it should be documented that second and
subsequent calls to
On 09/05/2012 09:27 AM, Alexander Komyagin wrote:
> So you think that it's ok for comm_connect_addr() to return COMM_OK if
> it was called before the appropriate select() notification. Am I right?
Hard to say for sure since comm_connect_addr() lacks an API description,
and there are at least three
On 09/05/2012 03:32 AM, Alexander Komyagin wrote:
> On Tue, 2012-09-04 at 09:16 -0600, Alex Rousskov wrote:
>> Again, I hope that this trick is not needed to solve your problem, and I
>> am worried that it will cause more/different problems elsewhere. I would
>> recommend fixing ConnOpener instead
On 09/01/2012 03:26 AM, Amos Jeffries wrote:
> On 1/09/2012 5:05 a.m., Alex Rousskov wrote:
>> * For the conn-open-timeout patch:
>>
>> Most importantly, the timeout handler should abort the ConnOpener job on
>> the spot rather than go through one more select() try.
> ConnOpener should be obeying
On 09/01/2012 03:26 AM, Amos Jeffries wrote:
>> /// a helper job that connects to the ICAP service and
>> /// calls our "connector" callback; unused on pconns
>> Comm::ConnOpener *opener;
>>
>> It would be better to declare this just above the "connector"
>> declaration to keep opener
On 08/31/2012 09:01 AM, Alexander Komyagin wrote:
> Alex, I figured it out, finally! The bug was in comm_connect_addr()
> function (I suppose it is kernel-dependent though).
>
> Consider the following call trace:
> 1) Xaction starts ConnOpener in order to create new connection to ICAP
> 2) ConnOpener
On 8/30/2012 5:29 PM, Alex Rousskov wrote:
If ICAP transactions do not detect connection timeout now, it is a bug
that we should fix.
Thank you,
Alex.
This might be the cause of the bug I encountered.
I had the same problem while using ICAP.
By "the same" I mean: I wrote an ICAP server, and while t
On 08/30/2012 02:47 AM, Alexander Komyagin wrote:
> What makes me wonder is that HttpStateData async job is created in
> forward.cc using httpStart(fwd) function:
>
> void
> httpStart(FwdState *fwd)
> {
> debugs(11, 3, "httpStart: \"" <<
> RequestMethodStr(fwd->request->method) << " " << fwd-
On Wed, 2012-08-29 at 10:27 -0600, Alex Rousskov wrote:
>
>
> > Corresponding Xaction
> > objects are not destructed after client request timeout (I use 5 secs
> > for httperf requests)
>
> I am not intimately familiar with httperf (we use Web Polygraph), but I
> assume that httperf immediate
Thanks, Alex. My bad. Let me clarify the issue.
I have Squid 3.2.1 (transparent) + c-icap (e.g. clamav). Currently I'm
testing performance of this setup with httperf tool.
When the number of client HTTP requests is so high that c-icap becomes
overloaded, some connections from Squid to ICAP are sta
On 08/28/2012 08:56 AM, Alexander Komyagin wrote:
> It seems that I've found the problem; it is related to the case when the
> listen queue of the ICAP server is full and it can no longer accept new
> connections.
Hi Alexander,
It may help if you summarize the problem before (or after) giving
the details. It i
Hi! My setup is Squid 3.2.1 + C-ICAP Antivirus.
I noticed that in a simulated production setup (e.g. 1200 HTTP requests
per second) there are dozens of connections established from Squid to the
c-icap server, and even after the last client request times out, these
connections are still open or opening.