Re: UDS Patch

2013-12-05 Thread Jim Jagielski

On Dec 5, 2013, at 2:03 PM, Daniel Ruggeri  wrote:
> 
> httpd-2.4.6 - w new patches
> Requests/sec:  35745.11
> Requests/sec:  36763.18
> Requests/sec:  36568.09
> 
> httpd-2.4.6 - original UDS patch
> Requests/sec:  24413.15
> Requests/sec:  24015.11
> Requests/sec:  24346.76
> 
> 

w00t



Re: UDS Patch

2013-12-05 Thread Daniel Ruggeri
Thanks for getting back about that. Two days ago I retried and was able
to tease out what appeared to be environmental variance in my numbers.
After modifying the configuration to eliminate cruft, as well as
replacing the app with nothing more than a simple 'hello world' type of
responder (over 32 running processes), I was able to get a much more
reasonable set of numbers. The results, tested over a few hours, were
also quite stable:

httpd-2.4.6 - w new patches
Requests/sec:  35745.11
Requests/sec:  36763.18
Requests/sec:  36568.09

httpd-2.4.6 - original UDS patch
Requests/sec:  24413.15
Requests/sec:  24015.11
Requests/sec:  24346.76


The nginx server is in use by another application right now, so I was
unable to test it for an apples-to-apples comparison, but this confirms
exactly what you expected: the newer patch set is faster than the
original UDS patch. I agree that the decoding as well as the string
comparison in the critical path is the most likely culprit there... but
that's old hat anyway.

So, in short, my past test cases were invalid because they included
other bottlenecks. Sorry for the unnecessary noise!

--
Daniel Ruggeri

On 12/5/2013 6:54 AM, Jim Jagielski wrote:
> My test setup looks pretty much the same as yours: a simple
> node.js server listening on the UDS path, but mine serves
> just static content.
>
> On Dec 2, 2013, at 7:04 PM, Daniel Ruggeri  wrote:
>
>> I had the same inclination as Christophe but haven't been able to
>> substantiate anything due to lack of time; last week wasn't as kind
>> to my free time as I had hoped. This would be very easy to tweak/test.
>> Within the next day or two I should be able to get back in to perform
>> some rebuilds and do more thorough testing and tampering as I squeeze
>> time in between various work-related crises. Most of my testing is
>> automated-ish, so turnaround from patch to test results is fairly quick.
>>
>> Jim, what does your test setup look like to measure performance delta?
>> My setup is fairly simple with httpd on the frontend targeting a small
>> Node.js backend application... I don't suspect the application is
>> skewing the results because of how consistent the results have been, but
>> I may just remove that from the equation to be absolutely sure.
>>
>> --
>> Daniel Ruggeri
>>
>> On 12/2/2013 8:14 AM, Jim Jagielski wrote:
>>> But from what I see, all of those are during non-critical paths.
>>> They occur when workers are being defined, initialized, etc., and
>>> that's only done during config or when a worker is added via balancer-manager.
>>>
>>> On Dec 2, 2013, at 8:09 AM, Marion et Christophe JAILLET 
>>>  wrote:
>>>
>>>> Hi,
>>>>
>>>>
>>>> one of my thoughts was the change from
>>>>
>>>>   worker->s->name
>>>>
>>>> to
>>>>
>>>>   ap_proxy_worker_name(r->pool, worker)
>>>>
>>>> in the logging functions.
>>>>
>>>> ap_proxy_worker_name allocates memory in the pool and performs some 
>>>> operations on strings (apr_pstrcat).
>>>>
>>>>
>>>> These operations are performed in all cases, even if DEBUG messages are 
>>>> not logged.
>>>>
>>>>
>>>> I don't think this should have a real effect on performance. (If I
>>>> remember correctly, when I looked at it there were no ap_log_error
>>>> calls in sensitive code.)
>>>>
>>>> Just to be sure, you could try to simplify ap_proxy_worker_name in 
>>>> Daniel's build to remove the apr_pstrcat and check performance with his 
>>>> build.
>>>>
>>>>
>>>> Should you and Daniel have different logging levels, it could explain why 
>>>> you don't measure the same discrepancy.
>>>>
>>>>
>>>>
>>>> Just my 2 cents.
>>>>
>>>> If I have time, I'll give another look tonight.
>>>>
>>>>
>>>> CJ
>>>>
>>>>
>>>>
>>>>
>>>>> Message du 02/12/13 13:46
>>>>> De : "Jim Jagielski" 
>>>>> A : dev@httpd.apache.org
>>>>> Copie à : 
>>>>> Objet : Re: UDS Patch
>>>>>
>>>>> OK, I can't by inspection or by test see any performance
>>>>> differences between the 2 implementations (in fact,
>>>>> the older one, in some benchmarks, was slower due to
>>>>> the string operations in the critical path)...

Re: UDS Patch

2013-12-05 Thread Jim Jagielski
My test setup looks pretty much the same as yours: a simple
node.js server listening on the UDS path, but mine serves
just static content.

On Dec 2, 2013, at 7:04 PM, Daniel Ruggeri  wrote:

> I had the same inclination as Christophe but haven't been able to
> substantiate anything due to lack of time; last week wasn't as kind
> to my free time as I had hoped. This would be very easy to tweak/test.
> Within the next day or two I should be able to get back in to perform
> some rebuilds and do more thorough testing and tampering as I squeeze
> time in between various work-related crises. Most of my testing is
> automated-ish, so turnaround from patch to test results is fairly quick.
> 
> Jim, what does your test setup look like to measure performance delta?
> My setup is fairly simple with httpd on the frontend targeting a small
> Node.js backend application... I don't suspect the application is
> skewing the results because of how consistent the results have been, but
> I may just remove that from the equation to be absolutely sure.
> 
> --
> Daniel Ruggeri
> 
> On 12/2/2013 8:14 AM, Jim Jagielski wrote:
>> But from what I see, all of those are during non-critical paths.
>> They occur when workers are being defined, initialized, etc., and
>> that's only done during config or when a worker is added via balancer-manager.
>> 
>> On Dec 2, 2013, at 8:09 AM, Marion et Christophe JAILLET 
>>  wrote:
>> 
>>> Hi,
>>> 
>>> 
>>> one of my thoughts was the change from
>>> 
>>>   worker->s->name
>>> 
>>> to
>>> 
>>>   ap_proxy_worker_name(r->pool, worker)
>>> 
>>> in the logging functions.
>>> 
>>> ap_proxy_worker_name allocates memory in the pool and performs some 
>>> operations on strings (apr_pstrcat).
>>> 
>>> 
>>> These operations are performed in all cases, even if DEBUG messages are not 
>>> logged.
>>> 
>>> 
>>> I don't think this should have a real effect on performance. (If I remember
>>> correctly, when I looked at it there were no ap_log_error calls in sensitive code.)
>>> 
>>> Just to be sure, you could try to simplify ap_proxy_worker_name in Daniel's 
>>> build to remove the apr_pstrcat and check performance with his build.
>>> 
>>> 
>>> Should you and Daniel have different logging levels, it could explain why 
>>> you don't measure the same discrepancy.
>>> 
>>> 
>>> 
>>> Just my 2 cents.
>>> 
>>> If I have time, I'll give another look tonight.
>>> 
>>> 
>>> CJ
>>> 
>>> 
>>> 
>>> 
>>>> Message du 02/12/13 13:46
>>>> De : "Jim Jagielski" 
>>>> A : dev@httpd.apache.org
>>>> Copie à : 
>>>> Objet : Re: UDS Patch
>>>> 
>>>> OK, I can't by inspection or by test see any performance
>>>> differences between the 2 implementations (in fact,
>>>> the older one, in some benchmarks, was slower due to
>>>> the string operations in the critical path)...
>>>> 
>>>> Any ideas?
>>>> 
>>>> On Nov 26, 2013, at 4:23 PM, Jim Jagielski wrote:
>>>> 
>>>>> Thx... the key is httpd-2.4.6-uds-delta.patch and
>>>>> that shows nothing, that I can see, which would
>>>>> result in the "old" being faster than the "new"...
>>>>> especially in the critical section where we do
>>>>> the apr_sockaddr_info_get() stuff...
>>>>> 
>>>>> On Nov 26, 2013, at 3:07 PM, Daniel Ruggeri wrote:
>>>>> 
>>>>>> I reapplied the patches in order to 2.4.6 before r1531340 was added to
>>>>>> the proposal. Attached are the three diff's of use:
>>>>>> httpd-2.4.6-uds-original.patch - Everything in the backport proposal up
>>>>>> to (but not including) r1531340 sans the stuff that doesn't fit
>>>>>> httpd-2.4.6-uds-new.patch - The 2.4 patch proposed with r1511313 applied
>>>>>> first. Note that this doesn't include r1543174
>>>>>> httpd-2.4.6-uds-delta.patch - The delta between the two modified trees
>>>>>> 
>>>>>> --
>>>>>> Daniel Ruggeri
>>>>>> 
>>>>>> On 11/22/2013 5:27 PM, Daniel Ruggeri wrote:
>>>>>>> Sorry, I thought the diffs I sent off list were good enough. I'll have
>>>>>>> to see if I even still have the original build lying around.
>>>>>>> Effectively, I just took the list of patches in the backport proposal
>>>>>>> and applied them one at a time to the 2.4.6 sources. If I can't find the
>>>>>>> build, I'll do the same over and send that instead.
>>>>>>> 
>>>>>>> --
>>>>>>> Daniel Ruggeri
>>>>>> 
>>>> 
> 



Re: UDS Patch

2013-12-02 Thread Daniel Ruggeri
I had the same inclination as Christophe but haven't been able to
substantiate anything due to lack of time; last week wasn't as kind
to my free time as I had hoped. This would be very easy to tweak/test.
Within the next day or two I should be able to get back in to perform
some rebuilds and do more thorough testing and tampering as I squeeze
time in between various work-related crises. Most of my testing is
automated-ish, so turnaround from patch to test results is fairly quick.

Jim, what does your test setup look like to measure performance delta?
My setup is fairly simple with httpd on the frontend targeting a small
Node.js backend application... I don't suspect the application is
skewing the results because of how consistent the results have been, but
I may just remove that from the equation to be absolutely sure.

--
Daniel Ruggeri

On 12/2/2013 8:14 AM, Jim Jagielski wrote:
> But from what I see, all of those are during non-critical paths.
> They occur when workers are being defined, initialized, etc., and
> that's only done during config or when a worker is added via balancer-manager.
>
> On Dec 2, 2013, at 8:09 AM, Marion et Christophe JAILLET 
>  wrote:
>
>> Hi,
>>
>>  
>> one of my thoughts was the change from
>>
>>worker->s->name
>>
>> to
>>
>>ap_proxy_worker_name(r->pool, worker)
>>
>> in the logging functions.
>>
>> ap_proxy_worker_name allocates memory in the pool and performs some 
>> operations on strings (apr_pstrcat).
>>
>>  
>> These operations are performed in all cases, even if DEBUG messages are not 
>> logged.
>>
>>  
>> I don't think this should have a real effect on performance. (If I remember
>> correctly, when I looked at it there were no ap_log_error calls in sensitive code.)
>>
>> Just to be sure, you could try to simplify ap_proxy_worker_name in Daniel's 
>> build to remove the apr_pstrcat and check performance with his build.
>>
>>  
>> Should you and Daniel have different logging levels, it could explain why 
>> you don't measure the same discrepancy.
>>
>>  
>>  
>> Just my 2 cents.
>>
>> If I have time, I'll give another look tonight.
>>
>>  
>> CJ
>>
>>
>>
>>
>>> Message du 02/12/13 13:46
>>> De : "Jim Jagielski" 
>>> A : dev@httpd.apache.org
>>> Copie à : 
>>> Objet : Re: UDS Patch
>>>
>>> OK, I can't by inspection or by test see any performance
>>> differences between the 2 implementations (in fact,
>>> the older one, in some benchmarks, was slower due to
>>> the string operations in the critical path)...
>>>
>>> Any ideas?
>>>
>>> On Nov 26, 2013, at 4:23 PM, Jim Jagielski wrote:
>>>
>>>> Thx... the key is httpd-2.4.6-uds-delta.patch and
>>>> that shows nothing, that I can see, which would
>>>> result in the "old" being faster than the "new"...
>>>> especially in the critical section where we do
>>>> the apr_sockaddr_info_get() stuff...
>>>>
>>>> On Nov 26, 2013, at 3:07 PM, Daniel Ruggeri wrote:
>>>>
>>>>> I reapplied the patches in order to 2.4.6 before r1531340 was added to
>>>>> the proposal. Attached are the three diff's of use:
>>>>> httpd-2.4.6-uds-original.patch - Everything in the backport proposal up
>>>>> to (but not including) r1531340 sans the stuff that doesn't fit
>>>>> httpd-2.4.6-uds-new.patch - The 2.4 patch proposed with r1511313 applied
>>>>> first. Note that this doesn't include r1543174
>>>>> httpd-2.4.6-uds-delta.patch - The delta between the two modified trees
>>>>>
>>>>> --
>>>>> Daniel Ruggeri
>>>>>
>>>>> On 11/22/2013 5:27 PM, Daniel Ruggeri wrote:
>>>>>> Sorry, I thought the diffs I sent off list were good enough. I'll have
>>>>>> to see if I even still have the original build lying around.
>>>>>> Effectively, I just took the list of patches in the backport proposal
>>>>>> and applied them one at a time to the 2.4.6 sources. If I can't find the
>>>>>> build, I'll do the same over and send that instead.
>>>>>>
>>>>>> --
>>>>>> Daniel Ruggeri
>>>>>
>>>



Re: UDS Patch

2013-12-02 Thread Jim Jagielski
But from what I see, all of those are during non-critical paths.
They occur when workers are being defined, initialized, etc., and
that's only done during config or when a worker is added via balancer-manager.

On Dec 2, 2013, at 8:09 AM, Marion et Christophe JAILLET 
 wrote:

> Hi,
> 
>  
> one of my thoughts was the change from
> 
>worker->s->name
> 
> to
> 
>ap_proxy_worker_name(r->pool, worker)
> 
> in the logging functions.
> 
> ap_proxy_worker_name allocates memory in the pool and performs some 
> operations on strings (apr_pstrcat).
> 
>  
> These operations are performed in all cases, even if DEBUG messages are not 
> logged.
> 
>  
> I don't think this should have a real effect on performance. (If I remember
> correctly, when I looked at it there were no ap_log_error calls in sensitive code.)
> 
> Just to be sure, you could try to simplify ap_proxy_worker_name in Daniel's 
> build to remove the apr_pstrcat and check performance with his build.
> 
>  
> Should you and Daniel have different logging levels, it could explain why you 
> don't measure the same discrepancy.
> 
>  
>  
> Just my 2 cents.
> 
> If I have time, I'll give another look tonight.
> 
>  
> CJ
> 
> 
> 
> 
> > Message du 02/12/13 13:46
> > De : "Jim Jagielski" 
> > A : dev@httpd.apache.org
> > Copie à : 
> > Objet : Re: UDS Patch
> > 
> > OK, I can't by inspection or by test see any performance
> > differences between the 2 implementations (in fact,
> > the older one, in some benchmarks, was slower due to
> > the string operations in the critical path)...
> > 
> > Any ideas?
> > 
> > On Nov 26, 2013, at 4:23 PM, Jim Jagielski wrote:
> > 
> > > Thx... the key is httpd-2.4.6-uds-delta.patch and
> > > that shows nothing, that I can see, which would
> > > result in the "old" being faster than the "new"...
> > > especially in the critical section where we do
> > > the apr_sockaddr_info_get() stuff...
> > > 
> > > On Nov 26, 2013, at 3:07 PM, Daniel Ruggeri wrote:
> > > 
> > >> I reapplied the patches in order to 2.4.6 before r1531340 was added to
> > >> the proposal. Attached are the three diff's of use:
> > >> httpd-2.4.6-uds-original.patch - Everything in the backport proposal up
> > >> to (but not including) r1531340 sans the stuff that doesn't fit
> > >> httpd-2.4.6-uds-new.patch - The 2.4 patch proposed with r1511313 applied
> > >> first. Note that this doesn't include r1543174
> > >> httpd-2.4.6-uds-delta.patch - The delta between the two modified trees
> > >> 
> > >> --
> > >> Daniel Ruggeri
> > >> 
> > >> On 11/22/2013 5:27 PM, Daniel Ruggeri wrote:
> > >>> Sorry, I thought the diffs I sent off list were good enough. I'll have
> > >>> to see if I even still have the original build lying around.
> > >>> Effectively, I just took the list of patches in the backport proposal
> > >>> and applied them one at a time to the 2.4.6 sources. If I can't find the
> > >>> build, I'll do the same over and send that instead.
> > >>> 
> > >>> --
> > >>> Daniel Ruggeri
> > >> 
> > >> 
> > > 
> > 
> >



Re: UDS Patch

2013-12-02 Thread Marion et Christophe JAILLET
Hi,

 

one of my thoughts was the change from

   worker->s->name

to

   ap_proxy_worker_name(r->pool, worker)

in the logging functions.

ap_proxy_worker_name allocates memory in the pool and performs some operations 
on strings (apr_pstrcat).

 

These operations are performed in all cases, even if DEBUG messages are not 
logged.

 

I don't think this should have a real effect on performance. (If I remember
correctly, when I looked at it there were no ap_log_error calls in sensitive code.)

Just to be sure, you could try to simplify ap_proxy_worker_name in Daniel's 
build to remove the apr_pstrcat and check performance with his build.

 

Should you and Daniel have different logging levels, it could explain why you 
don't measure the same discrepancy.

 

 

Just my 2 cents.

If I have time, I'll give another look tonight.

 

CJ





> Message du 02/12/13 13:46
> De : "Jim Jagielski" 
> A : dev@httpd.apache.org
> Copie à : 
> Objet : Re: UDS Patch
> 
> OK, I can't by inspection or by test see any performance
> differences between the 2 implementations (in fact,
> the older one, in some benchmarks, was slower due to
> the string operations in the critical path)...
> 
> Any ideas?
> 
> On Nov 26, 2013, at 4:23 PM, Jim Jagielski wrote:
> 
> > Thx... the key is httpd-2.4.6-uds-delta.patch and
> > that shows nothing, that I can see, which would
> > result in the "old" being faster than the "new"...
> > especially in the critical section where we do
> > the apr_sockaddr_info_get() stuff...
> > 
> > On Nov 26, 2013, at 3:07 PM, Daniel Ruggeri wrote:
> > 
> >> I reapplied the patches in order to 2.4.6 before r1531340 was added to
> >> the proposal. Attached are the three diff's of use:
> >> httpd-2.4.6-uds-original.patch - Everything in the backport proposal up
> >> to (but not including) r1531340 sans the stuff that doesn't fit
> >> httpd-2.4.6-uds-new.patch - The 2.4 patch proposed with r1511313 applied
> >> first. Note that this doesn't include r1543174
> >> httpd-2.4.6-uds-delta.patch - The delta between the two modified trees
> >> 
> >> --
> >> Daniel Ruggeri
> >> 
> >> On 11/22/2013 5:27 PM, Daniel Ruggeri wrote:
> >>> Sorry, I thought the diffs I sent off list were good enough. I'll have
> >>> to see if I even still have the original build lying around.
> >>> Effectively, I just took the list of patches in the backport proposal
> >>> and applied them one at a time to the 2.4.6 sources. If I can't find the
> >>> build, I'll do the same over and send that instead.
> >>> 
> >>> --
> >>> Daniel Ruggeri
> >> 
> >> 
> > 
> 
>

Re: UDS Patch

2013-12-02 Thread Jim Jagielski
OK, I can't by inspection or by test see any performance
differences between the 2 implementations (in fact,
the older one, in some benchmarks, was slower due to
the string operations in the critical path)...

Any ideas?

On Nov 26, 2013, at 4:23 PM, Jim Jagielski  wrote:

> Thx... the key is httpd-2.4.6-uds-delta.patch and
> that shows nothing, that I can see, which would
> result in the "old" being faster than the "new"...
> especially in the critical section where we do
> the apr_sockaddr_info_get() stuff...
> 
> On Nov 26, 2013, at 3:07 PM, Daniel Ruggeri  wrote:
> 
>> I reapplied the patches in order to 2.4.6 before r1531340 was added to
>> the proposal. Attached are the three diff's of use:
>> httpd-2.4.6-uds-original.patch - Everything in the backport proposal up
>> to (but not including) r1531340 sans the stuff that doesn't fit
>> httpd-2.4.6-uds-new.patch - The 2.4 patch proposed with r1511313 applied
>> first. Note that this doesn't include r1543174
>> httpd-2.4.6-uds-delta.patch - The delta between the two modified trees
>> 
>> --
>> Daniel Ruggeri
>> 
>> On 11/22/2013 5:27 PM, Daniel Ruggeri wrote:
>>> Sorry, I thought the diffs I sent off list were good enough. I'll have
>>> to see if I even still have the original build lying around.
>>> Effectively, I just took the list of patches in the backport proposal
>>> and applied them one at a time to the 2.4.6 sources. If I can't find the
>>> build, I'll do the same over and send that instead.
>>> 
>>> --
>>> Daniel Ruggeri
>> 
>> 
> 



Re: UDS Patch

2013-11-26 Thread Jim Jagielski
Thx... the key is httpd-2.4.6-uds-delta.patch and
that shows nothing, that I can see, which would
result in the "old" being faster than the "new"...
especially in the critical section where we do
the apr_sockaddr_info_get() stuff...

On Nov 26, 2013, at 3:07 PM, Daniel Ruggeri  wrote:

> I reapplied the patches in order to 2.4.6 before r1531340 was added to
> the proposal. Attached are the three diff's of use:
> httpd-2.4.6-uds-original.patch - Everything in the backport proposal up
> to (but not including) r1531340 sans the stuff that doesn't fit
> httpd-2.4.6-uds-new.patch - The 2.4 patch proposed with r1511313 applied
> first. Note that this doesn't include r1543174
> httpd-2.4.6-uds-delta.patch - The delta between the two modified trees
> 
> --
> Daniel Ruggeri
> 
> On 11/22/2013 5:27 PM, Daniel Ruggeri wrote:
>> Sorry, I thought the diffs I sent off list were good enough. I'll have
>> to see if I even still have the original build lying around.
>> Effectively, I just took the list of patches in the backport proposal
>> and applied them one at a time to the 2.4.6 sources. If I can't find the
>> build, I'll do the same over and send that instead.
>> 
>> --
>> Daniel Ruggeri
> 
> 



Re: UDS Patch

2013-11-22 Thread Daniel Ruggeri
Sorry, I thought the diffs I sent off list were good enough. I'll have
to see if I even still have the original build lying around.
Effectively, I just took the list of patches in the backport proposal
and applied them one at a time to the 2.4.6 sources. If I can't find the
build, I'll do the same over and send that instead.

--
Daniel Ruggeri

On 11/22/2013 10:38 AM, Jim Jagielski wrote:
> Any luck with generating the diff yet?
>
> On Nov 19, 2013, at 3:08 PM, Jim Jagielski  wrote:
>
>> The main thing is that it would be interesting to see
>> the diffs between '2.4.6 w the (several) originally proposed UDS patches 
>> applied in order'
>> and '2.4.6 w proposed backport'...
>>
>> Those diffs should show just the differences between the UDS 
>> implementations...
>>
>> On Nov 19, 2013, at 2:51 PM, Daniel Ruggeri  wrote:
>>
>>> Yes, agreed. Not sure if I made it clear, but I did apply r1511313 for
>>> the tests I did today (but not the one from yesterday).
>>>
>>> Of the several emails sent, the following have been tested:
>>> 2.4.6 w the (several) originally proposed UDS patches applied in order
>>> 2.4.6 w proposed backport (the 2 chunks around the DNS changes fail to
>>> apply since they do not exist in 2.4.6)
>>> 2.4.6 w r1511313 + proposed backport + r1543174
>>>
>>> I DID double check that the machine wasn't requesting DNS lookups for
>>> the socket name or anything strange against the DNS server - but that
>>> was only for the test I ran today.
>>>
>>> --
>>> Daniel Ruggeri
>>>
>>> On 11/19/2013 1:43 PM, Jim Jagielski wrote:
 OK... the DNS lookup code seems to have changed between 2.4.6 and 2.4.7:

https://svn.apache.org/viewvc?view=revision&revision=1511313

 So I'm wondering if there's something there.



Re: UDS Patch

2013-11-22 Thread Jim Jagielski
Any luck with generating the diff yet?

On Nov 19, 2013, at 3:08 PM, Jim Jagielski  wrote:

> The main thing is that it would be interesting to see
> the diffs between '2.4.6 w the (several) originally proposed UDS patches 
> applied in order'
> and '2.4.6 w proposed backport'...
> 
> Those diffs should show just the differences between the UDS 
> implementations...
> 
> On Nov 19, 2013, at 2:51 PM, Daniel Ruggeri  wrote:
> 
>> Yes, agreed. Not sure if I made it clear, but I did apply r1511313 for
>> the tests I did today (but not the one from yesterday).
>> 
>> Of the several emails sent, the following have been tested:
>> 2.4.6 w the (several) originally proposed UDS patches applied in order
>> 2.4.6 w proposed backport (the 2 chunks around the DNS changes fail to
>> apply since they do not exist in 2.4.6)
>> 2.4.6 w r1511313 + proposed backport + r1543174
>> 
>> I DID double check that the machine wasn't requesting DNS lookups for
>> the socket name or anything strange against the DNS server - but that
>> was only for the test I ran today.
>> 
>> --
>> Daniel Ruggeri
>> 
>> On 11/19/2013 1:43 PM, Jim Jagielski wrote:
>>> OK... the DNS lookup code seems to have changed between 2.4.6 and 2.4.7:
>>> 
>>> https://svn.apache.org/viewvc?view=revision&revision=1511313
>>> 
>>> So I'm wondering if there's something there.
>> 
> 



Re: UDS Patch

2013-11-19 Thread Jim Jagielski
The main thing is that it would be interesting to see
the diffs between '2.4.6 w the (several) originally proposed UDS patches 
applied in order'
and '2.4.6 w proposed backport'...

Those diffs should show just the differences between the UDS implementations...

On Nov 19, 2013, at 2:51 PM, Daniel Ruggeri  wrote:

> Yes, agreed. Not sure if I made it clear, but I did apply r1511313 for
> the tests I did today (but not the one from yesterday).
> 
> Of the several emails sent, the following have been tested:
> 2.4.6 w the (several) originally proposed UDS patches applied in order
> 2.4.6 w proposed backport (the 2 chunks around the DNS changes fail to
> apply since they do not exist in 2.4.6)
> 2.4.6 w r1511313 + proposed backport + r1543174
> 
> I DID double check that the machine wasn't requesting DNS lookups for
> the socket name or anything strange against the DNS server - but that
> was only for the test I ran today.
> 
> --
> Daniel Ruggeri
> 
> On 11/19/2013 1:43 PM, Jim Jagielski wrote:
>> OK... the DNS lookup code seems to have changed between 2.4.6 and 2.4.7:
>> 
>>  https://svn.apache.org/viewvc?view=revision&revision=1511313
>> 
>> So I'm wondering if there's something there.
> 



Re: UDS Patch

2013-11-19 Thread Jim Jagielski
Nope... never mind. You said it happened w/ 2.4.6, so that's
moot.

On Nov 19, 2013, at 2:43 PM, Jim Jagielski  wrote:

> OK... the DNS lookup code seems to have changed between 2.4.6 and 2.4.7:
> 
>   https://svn.apache.org/viewvc?view=revision&revision=1511313
> 
> So I'm wondering if there's something there.
> 
> On Nov 19, 2013, at 12:08 PM, Jim Jagielski  wrote:
> 
>> That's just weird...
>> 
>> On Nov 19, 2013, at 11:33 AM, Daniel Ruggeri  wrote:
>> 
>>> Well, I don't have good news to report... doesn't seem to be a
>>> significant change in behavior...
>>> nginx:
>>> Requests/sec:   5082.43
>>> Requests/sec:   5111.94
>>> Requests/sec:   5063.27
>>> 
>>> 2.4.6 - First UDS patch:
>>> Requests/sec:   4733.09
>>> Requests/sec:   4529.49
>>> Requests/sec:   4573.27
>>> 
>>> 2.4.6 - r1511313 + new UDS patch + r1543174:
>>> Requests/sec:   3774.41
>>> Requests/sec:   3878.02
>>> Requests/sec:   3852.34
>>> 
>>> Will try to look into this next week...
>>> 
>>> --
>>> Daniel Ruggeri
>>> 
>>> On 11/18/2013 6:37 PM, Daniel Ruggeri wrote:
>>>> On 11/18/2013 3:38 PM, Jim Jagielski wrote:
>>>>> Can you retry with this applied:
>>>>> 
>>>>>   https://svn.apache.org/viewvc?view=revision&revision=1543174
>>>> Definitely. I'll report back tomorrow so long as the universe wills
>>>> it... but one last note
>>>> 
>>>> I failed to mention in my original notes that there were two hunks that
>>>> didn't apply cleanly to 2.4.6 - these appear to be from this change:
>>>> https://svn.apache.org/viewvc/httpd/httpd/branches/2.4.x/modules/proxy/proxy_util.c?r1=1511313&r2=1511312&pathrev=1511313
>>>> ... which is in the neighborhood of what you adjusted in r1543174... but
>>>> doesn't appear to conflict directly.
>>>> 
>>>> I'm thinking I should also apply r1511313 to 2.4.6 as a prereq to
>>>> r1543174 in order to remove ambiguity... I'm frankly not sure if the
>>>> machine was performing DNS lookups during the test or not (and I have
>>>> only given this a cursory review), but that would *definitely* account
>>>> for a measurable slowdown.
>>>> 
>>>> The context of what was rejected:
>>>>> --- modules/proxy/proxy_util.c
>>>>> +++ modules/proxy/proxy_util.c
>>>>> @@ -2228,7 +2324,8 @@
>>>>>   conn->port = uri->port;
>>>>>   }
>>>>>   socket_cleanup(conn);
>>>>> -if (!worker->s->is_address_reusable || worker->s->disablereuse) {
>>>>> +if (!(*worker->s->uds_path) &&
>>>>> +(!worker->s->is_address_reusable ||
>>>>> worker->s->disablereuse)) {
>>>>>   /*
>>>>>* Only do a lookup if we should not reuse the backend
>>>>> address.
>>>>>* Otherwise we will look it up once for the worker.
>>>>> @@ -2239,7 +2336,7 @@
>>>>>   conn->pool);
>>>>>   }
>>>>>   }
>>>>> -if (worker->s->is_address_reusable && !worker->s->disablereuse) {
>>>>> +if (!(*worker->s->uds_path) && worker->s->is_address_reusable &&
>>>>> !worker->s->disablereuse) {
>>>>>   /*
>>>>>* Looking up the backend address for the worker only makes
>>>>> sense if
>>>>>* we can reuse the address.
>>>> I'll have to see what the delta with both patches applied turns out to 
>>>> be...
>>>> 
>>>> --
>>>> Daniel Ruggeri
>>>> 
>>> 
>> 
> 



Re: UDS Patch

2013-11-19 Thread Daniel Ruggeri
Yes, agreed. Not sure if I made it clear, but I did apply r1511313 for
the tests I did today (but not the one from yesterday).

Of the several emails sent, the following have been tested:
2.4.6 w the (several) originally proposed UDS patches applied in order
2.4.6 w proposed backport (the 2 chunks around the DNS changes fail to
apply since they do not exist in 2.4.6)
2.4.6 w r1511313 + proposed backport + r1543174

I DID double check that the machine wasn't requesting DNS lookups for
the socket name or anything strange against the DNS server - but that
was only for the test I ran today.

--
Daniel Ruggeri

On 11/19/2013 1:43 PM, Jim Jagielski wrote:
> OK... the DNS lookup code seems to have changed between 2.4.6 and 2.4.7:
>
>   https://svn.apache.org/viewvc?view=revision&revision=1511313
>
> So I'm wondering if there's something there.



Re: UDS Patch

2013-11-19 Thread Jim Jagielski
OK... the DNS lookup code seems to have changed between 2.4.6 and 2.4.7:

https://svn.apache.org/viewvc?view=revision&revision=1511313

So I'm wondering if there's something there.

On Nov 19, 2013, at 12:08 PM, Jim Jagielski  wrote:

> That's just weird...
> 
> On Nov 19, 2013, at 11:33 AM, Daniel Ruggeri  wrote:
> 
>> Well, I don't have good news to report... doesn't seem to be a
>> significant change in behavior...
>> nginx:
>> Requests/sec:   5082.43
>> Requests/sec:   5111.94
>> Requests/sec:   5063.27
>> 
>> 2.4.6 - First UDS patch:
>> Requests/sec:   4733.09
>> Requests/sec:   4529.49
>> Requests/sec:   4573.27
>> 
>> 2.4.6 - r1511313 + new UDS patch + r1543174:
>> Requests/sec:   3774.41
>> Requests/sec:   3878.02
>> Requests/sec:   3852.34
>> 
>> Will try to look into this next week...
>> 
>> --
>> Daniel Ruggeri
>> 
> 



Re: UDS Patch

2013-11-19 Thread Jim Jagielski
Can you provide a 'diff -u' of the 2 2.4.6 sources?

Thx!

On Nov 19, 2013, at 11:33 AM, Daniel Ruggeri  wrote:

> Well, I don't have good news to report... doesn't seem to be a
> significant change in behavior...
> nginx:
> Requests/sec:   5082.43
> Requests/sec:   5111.94
> Requests/sec:   5063.27
> 
> 2.4.6 - First UDS patch:
> Requests/sec:   4733.09
> Requests/sec:   4529.49
> Requests/sec:   4573.27
> 
> 2.4.6 - r1511313 + new UDS patch + r1543174:
> Requests/sec:   3774.41
> Requests/sec:   3878.02
> Requests/sec:   3852.34
> 
> Will try to look into this next week...
> 
> --
> Daniel Ruggeri
> 



Re: UDS Patch

2013-11-19 Thread Jim Jagielski
That's just weird...

On Nov 19, 2013, at 11:33 AM, Daniel Ruggeri  wrote:

> Well, I don't have good news to report... doesn't seem to be a
> significant change in behavior...
> nginx:
> Requests/sec:   5082.43
> Requests/sec:   5111.94
> Requests/sec:   5063.27
> 
> 2.4.6 - First UDS patch:
> Requests/sec:   4733.09
> Requests/sec:   4529.49
> Requests/sec:   4573.27
> 
> 2.4.6 - r1511313 + new UDS patch + r1543174:
> Requests/sec:   3774.41
> Requests/sec:   3878.02
> Requests/sec:   3852.34
> 
> Will try to look into this next week...
> 
> --
> Daniel Ruggeri
> 



Re: UDS Patch

2013-11-19 Thread Daniel Ruggeri
Well, I don't have good news to report... doesn't seem to be a
significant change in behavior...
nginx:
Requests/sec:   5082.43
Requests/sec:   5111.94
Requests/sec:   5063.27

2.4.6 - First UDS patch:
Requests/sec:   4733.09
Requests/sec:   4529.49
Requests/sec:   4573.27

2.4.6 - r1511313 + new UDS patch + r1543174:
Requests/sec:   3774.41
Requests/sec:   3878.02
Requests/sec:   3852.34

Will try to look into this next week...

--
Daniel Ruggeri

On 11/18/2013 6:37 PM, Daniel Ruggeri wrote:
> On 11/18/2013 3:38 PM, Jim Jagielski wrote:
>> Can you retry with this applied:
>>
>>  https://svn.apache.org/viewvc?view=revision&revision=1543174
> Definitely. I'll report back tomorrow so long as the universe wills
> it... but one last note



Re: UDS Patch

2013-11-18 Thread Daniel Ruggeri
On 11/18/2013 3:38 PM, Jim Jagielski wrote:
> Can you retry with this applied:
>
>   https://svn.apache.org/viewvc?view=revision&revision=1543174

Definitely. I'll report back tomorrow so long as the universe wills
it... but one last note

I failed to mention in my original notes that there were two hunks that
didn't apply cleanly to 2.4.6 - these appear to be from this change:
https://svn.apache.org/viewvc/httpd/httpd/branches/2.4.x/modules/proxy/proxy_util.c?r1=1511313&r2=1511312&pathrev=1511313
... which is in the neighborhood of what you adjusted in r1543174... but
doesn't appear to conflict directly.

I'm thinking I should also apply r1511313 to 2.4.6 as a prereq to
r1543174 in order to remove ambiguity... I'm frankly not sure if the
machine was performing DNS lookups during the test or not (and I have
only given this a cursory review), but that would *definitely* account
for a measurable slowdown.

The context of what was rejected:
> --- modules/proxy/proxy_util.c
> +++ modules/proxy/proxy_util.c
> @@ -2228,7 +2324,8 @@
>  conn->port = uri->port;
>  }
>  socket_cleanup(conn);
> -if (!worker->s->is_address_reusable || worker->s->disablereuse) {
> +if (!(*worker->s->uds_path) &&
> +(!worker->s->is_address_reusable ||
> worker->s->disablereuse)) {
>  /*
>   * Only do a lookup if we should not reuse the backend
> address.
>   * Otherwise we will look it up once for the worker.
> @@ -2239,7 +2336,7 @@
>  conn->pool);
>  }
>  }
> -if (worker->s->is_address_reusable && !worker->s->disablereuse) {
> +if (!(*worker->s->uds_path) && worker->s->is_address_reusable &&
> !worker->s->disablereuse) {
>  /*
>   * Looking up the backend address for the worker only makes
> sense if
>   * we can reuse the address.

I'll have to see what the delta with both patches applied turns out to be...

--
Daniel Ruggeri



UDS Patch

2013-11-18 Thread Jim Jagielski
Can you retry with this applied:

https://svn.apache.org/viewvc?view=revision&revision=1543174


On Nov 18, 2013, at 2:39 PM, Daniel Ruggeri  wrote:

> And... this is a bit discouraging, but as a comparison to the older UDS
> patch
> 2.4.6 + original UDS patch:
> Requests/sec:   5347.17
> Requests/sec:   5102.16
> Requests/sec:   5074.15
> 
> This is a sizable difference... Note that the current 2.4 backport
> proposal was applied to 2.4.6 since that is what I tested the original
> patch with (to keep everything apples to apples).
> 
> I'll jump in to take a look at this when time is available (next week?)
> but would like to fish for any immediate thoughts in the meantime.
> 
> --
> Daniel Ruggeri
> 
> On 11/18/2013 1:11 PM, Daniel Ruggeri wrote:
>> Oops - I copypasta'd the per-thread stats. Total stats for the test follow:
>> httpd:
>> Requests/sec:   4633.17
>> Requests/sec:   4664.49
>> Requests/sec:   4657.63
>> 
>> nginx:
>> Requests/sec:   5701.16
>> Requests/sec:   5798.08
>> Requests/sec:   5584.60
> 



Re: 2.4.x with uds patch; FastCGI broken?

2013-11-16 Thread Jim Jagielski
FWIW, this isn't related to UDS at all, except that we
found this bug due to UDS. 

On Nov 16, 2013, at 12:47 PM, Jim Jagielski  wrote:

> OK, I think I know what it is, and it's simple (if true),
> but a pain.
> 



Re: 2.4.x with uds patch; FastCGI broken?

2013-11-16 Thread Jim Jagielski
OK, I think I know what it is, and it's simple (if true),
but a pain.

The issue is that during the proxypass stuff, we tuck away the
name, which may, or may not, include a port designation, depending
on if the URL passed does. All well and good.

The problem is that during the proxy_hook_canon_handler()
phase, some submodules, like fcgi, always attach the port
to the URL if it doesn't include one. http does not; it
only adds it iff the determined port != the default port.

So when mod_proxy tries to find the correct worker,
because the fcgi URL in ProxyPass didn't have :8000 but
the canon URL will have it added, they will be seen
as different, and so the defined worker will not be
used; the default, generic reverse proxy worker will.

We could ensure that when each worker is defined, we always
add the port, even if not provided. But, this will involve
changing proxy_http_canon(). I'm not sure I like this
since it adds additional storage for no real purpose, plus
adds some cycles to the strcasecmp for finding the workers.

Instead, I think that fcgi (and ajp as well) should do their
proxy_*_canon the same as http. However, since apr_uri_port_of_scheme()
doesn't know about these, we need to also create a
ap_proxy_port_of_scheme().

Unless I hear otherwise, that's what I'll be working
on.


Re: 2.4.x with uds patch; FastCGI broken?

2013-11-16 Thread Jim Jagielski

On Nov 15, 2013, at 8:11 PM, Kyle Johnson  wrote:
> [proxy_fcgi:debug] [pid 30116:tid 140366778955520] mod_proxy_fcgi.c(73): 
> [client 66.192.178.3:54534] AH01060: set r->filename to 
> proxy:fcgi://localhost:8000/www/index.php


Also need to see if the above is a factor...



Re: 2.4.x with uds patch; FastCGI broken?

2013-11-16 Thread Jim Jagielski
Below are the pertinent parts:

On Nov 15, 2013, at 8:11 PM, Kyle Johnson  wrote:
> [proxy_fcgi:debug] [pid 30116:tid 140366778955520] mod_proxy_fcgi.c(764): 
> [client 66.192.178.3:54534] AH01076: url: fcgi://localhost:8000/www/index.php 
> proxyname: (null) proxyport: 0
> [proxy_fcgi:debug] [pid 30116:tid 140366778955520] mod_proxy_fcgi.c(774): 
> [client 66.192.178.3:54534] AH01078: serving URL 
> //localhost:8000/www/index.php
> [proxy:debug] [pid 30116:tid 140366778955520] proxy_util.c(2101): AH00942: 
> FCGI: has acquired connection for (*)

...

> [proxy:debug] [pid 30721:tid 139929963685632] proxy_util.c(2101): AH00942: 
> HTTP: has acquired connection for (localhost)

From this, it looks like fcgi is trying to use the general reverse
proxy worker (proxy:reverse has the name "*")... so I'm guessing the
difference is in how http and fcgi look up their worker.

That gives me an idea where to look.



Re: 2.4.x with uds patch; FastCGI broken?

2013-11-16 Thread Jim Jagielski
Thx for the bug report... I'll investigate.

On Nov 15, 2013, at 8:11 PM, Kyle Johnson  wrote:

> I've been attempting to test the uds support on 2.4.x with the uds patch 
> (<http://people.apache.org/~jim/patches/uds-2.4.patch>).
> [...]

2.4.x with uds patch; FastCGI broken?

2013-11-15 Thread Kyle Johnson
I've been attempting to test the uds support on 2.4.x with the uds patch (<
http://people.apache.org/~jim/patches/uds-2.4.patch>).

I'm assuming HTTP works (it would appear, based on the mailing list, that the
author has tested it with HTTP). However, when I try using fcgi the
socket appears to be lost somewhere.

Example config line:

ProxyPass / unix:/var/run/php/php-fpm.sock|fcgi://localhost/www/

And the error log with debugging on (browsing to http://localhost/index.php):

[proxy:debug] [pid 30127:tid 140367020521216] proxy_util.c(1773): AH00925:
initializing worker unix:/var/run/php/php-fpm.sock|fcgi://localhost/www/
shared
[proxy:debug] [pid 30127:tid 140367020521216] proxy_util.c(1815): AH00927:
initializing worker unix:/var/run/php/php-fpm.sock|fcgi://localhost/www/
local
[proxy:debug] [pid 30127:tid 140367020521216] proxy_util.c(1850): AH00930:
initialized pool in child 30127 for (localhost) min=0 max=64 smax=64
[proxy_fcgi:debug] [pid 30116:tid 140366778955520] mod_proxy_fcgi.c(73):
[client 66.192.178.3:54534] AH01060: set r->filename to
proxy:fcgi://localhost:8000/www/index.php
[proxy:debug] [pid 30116:tid 140366778955520] mod_proxy.c(1104): [client
66.192.178.3:54534] AH01143: Running scheme fcgi handler (attempt 0)
[proxy_fcgi:debug] [pid 30116:tid 140366778955520] mod_proxy_fcgi.c(764):
[client 66.192.178.3:54534] AH01076: url:
fcgi://localhost:8000/www/index.php proxyname: (null) proxyport: 0
[proxy_fcgi:debug] [pid 30116:tid 140366778955520] mod_proxy_fcgi.c(774):
[client 66.192.178.3:54534] AH01078: serving URL
//localhost:8000/www/index.php
[proxy:debug] [pid 30116:tid 140366778955520] proxy_util.c(2101): AH00942:
FCGI: has acquired connection for (*)
[proxy:debug] [pid 30116:tid 140366778955520] proxy_util.c(2176): [client
66.192.178.3:54534] AH00944: connecting //localhost:8000/www/index.php to
localhost:8000
[proxy:debug] [pid 30116:tid 140366778955520] proxy_util.c(2311): [client
66.192.178.3:54534] AH00947: connected /www/index.php to localhost:8000
[proxy:error] [pid 30116:tid 140366778955520] (111)Connection refused:
AH00957: FCGI: attempt to connect to 127.0.0.1:8000 (*) failed
[proxy_fcgi:error] [pid 30116:tid 140366778955520] [client
66.192.178.3:54534] AH01079: failed to make connection to backend: localhost
[proxy:debug] [pid 30116:tid 140366778955520] proxy_util.c(2139): AH00943:
FCGI: has released connection for (*)

If I switch from fcgi to http, I get a log that's a bit different and see
some lines which lead me to believe the uds is simply lost somewhere in the
fcgi setup that it isn't in the http setup (this errors out as the socket
is expecting fcgi, not http):

ProxyPass / unix:/var/run/php/php-fpm.sock|http://localhost/www/

[proxy:debug] [pid 30707:tid 139930058049280] proxy_util.c(1773): AH00925:
initializing worker unix:/var/run/php/php-fpm.sock|http://localhost/www/ shared
[proxy:debug] [pid 30707:tid 139930058049280] proxy_util.c(1815): AH00927:
initializing worker unix:/var/run/php/php-fpm.sock|http://localhost/www/ local
[proxy:debug] [pid 30707:tid 139930058049280] proxy_util.c(1850): AH00930:
initialized pool in child 30707 for (localhost) min=0 max=64 smax=64
[proxy:debug] [pid 30721:tid 139929963685632] mod_proxy.c(1104): [client
66.192.178.3:54544] AH01143: Running scheme http handler (attempt 0)
[proxy:debug] [pid 30721:tid 139929963685632] proxy_util.c(2101): AH00942:
HTTP: has acquired connection for (localhost)
[proxy:debug] [pid 30721:tid 139929963685632] proxy_util.c(2115): AH02545:
HTTP: has determined UDS as /var/run/php/php-fpm.sock
[proxy:debug] [pid 30721:tid 139929963685632] proxy_util.c(2176): [client
66.192.178.3:54544] AH00944: connecting http://localhost/www/index.php to
localhost:80
[proxy:debug] [pid 30721:tid 139929963685632] proxy_util.c(2311): [client
66.192.178.3:54544] AH00947: connected /www/index.php to localhost:80
[proxy:error] [pid 30721:tid 139929963685632] (2)No such file or directory:
AH02454: HTTP: attempt to connect to Unix domain socket
/var/run/php/php-fpm.sock (localhost) failed
[proxy:error] [pid 30721:tid 139929963685632] AH00959:
ap_proxy_connect_backend disabling worker for (localhost) for 60s
[proxy_http:error] [pid 30721:tid 139929963685632] [client
66.192.178.3:54544] AH01114: HTTP: failed to make connection to backend:
localhost
[proxy:debug] [pid 30721:tid 139929963685632] proxy_util.c(2139): AH00943:
HTTP: has released connection for (localhost)

Note in this case the line "HTTP: has determined UDS as
/var/run/php/php-fpm.sock" which is missing from the fcgi case.

Digging through the code is proving difficult for me to trace where the
path for fcgi breaks down vs http.


As an aside, I'm fairly certain that in the case of a fcgi something to the
effect of:
unix:/var/run/php/php-fpm.sock|fcgi:///www/

... should be a valid configuration, however this results in the following
error:

[proxy_fcgi:error] [pid 31259:tid 140608094070528] [client
66.192.178.3:54554]