RE: Better SO_REUSEPORT

2015-10-08 Thread Lu, Yingqi
Great!! Thank you very much for sharing this with us!

Thanks,
Yingqi

-Original Message-
From: Yann Ylavic [mailto:ylavic@gmail.com] 
Sent: Thursday, October 08, 2015 10:46 AM
To: dev@httpd.apache.org
Cc: Lu, Yingqi
Subject: Better SO_REUSEPORT

Looks like we can do even better/faster with it (and latest Linux kernels), 
soon :)

https://www.mail-archive.com/netdev@vger.kernel.org/msg81804.html

Promising!


RE: T of 2.4.17 this week

2015-10-05 Thread Lu, Yingqi
Thank you, Yann!

Regards,
Yingqi

-Original Message-
From: Yann Ylavic [mailto:ylavic@gmail.com] 
Sent: Monday, October 05, 2015 11:38 AM
To: dev@httpd.apache.org
Subject: Re: T of 2.4.17 this week

Hi Yingqi,

this was done in r1705492, will be in 2.4.17.

Regards,
Yann.


On Mon, Oct 5, 2015 at 8:31 PM, Lu, Yingqi <yingqi...@intel.com> wrote:
> Hi Jim,
>
> Can you please look at and incorporate the SO_REUSEPORT patch for this
> release as well?
>
> Thanks,
> Yingqi
>
> -Original Message-
> From: olli hauer [mailto:oha...@gmx.de]
> Sent: Monday, October 05, 2015 11:30 AM
> To: dev@httpd.apache.org
> Subject: Re: T of 2.4.17 this week
>
> On 2015-10-05 17:54, Jim Jagielski wrote:
>> I propose a T of 2.4.17 this week with a release for next. I will 
>> RM.
>>
>> Comments?
>>
>
> Hi Jim,
>
> would you mind taking a look at #58126?
>
> It contains a simple patch for acinclude.m4 to suppress warnings about 
> underquoted calls to AC_DEFUN.
>
> --
> Thanks,
> olli


RE: T of 2.4.17 this week

2015-10-05 Thread Lu, Yingqi
Hi Jim,

Can you please look at and incorporate the SO_REUSEPORT patch for this release
as well?

Thanks,
Yingqi

-Original Message-
From: olli hauer [mailto:oha...@gmx.de] 
Sent: Monday, October 05, 2015 11:30 AM
To: dev@httpd.apache.org
Subject: Re: T of 2.4.17 this week

On 2015-10-05 17:54, Jim Jagielski wrote:
> I propose a T of 2.4.17 this week with a release for next. I will 
> RM.
> 
> Comments?
> 

Hi Jim,

would you mind taking a look at #58126?

It contains a simple patch for acinclude.m4 to suppress warnings about 
underquoted calls to AC_DEFUN.

--
Thanks,
olli


RE: Time for 2.4.17 soonish?

2015-09-22 Thread Lu, Yingqi
Yann,

Thanks very much for bringing it up. We would love to see the SO_REUSEPORT
patch merged into the httpd release. It has been a long time :)

Thanks,
Yingqi

-Original Message-
From: Yann Ylavic [mailto:ylavic@gmail.com] 
Sent: Tuesday, September 22, 2015 1:23 PM
To: dev@httpd.apache.org
Subject: Re: Time for 2.4.17 soonish?

+1, possibly with SO_REUSEPORT too.

On Tue, Sep 22, 2015 at 9:49 PM, Jim Jagielski  wrote:
> I'd like to propose that we think about a 2.4.17 release within a 
> coupla weeks and that we try to get http/2 support in for that 
> release.
>


RE: TR of 2.4.13

2015-06-02 Thread Lu, Yingqi
Hi All,

The SO_REUSEPORT patch is only one vote short now. It would be really great if
we can make it this time.

Everyone, please test it out and vote for it if you like it. It boosts your
performance on bigger Xeon systems.

Thanks,
Yingqi

-Original Message-
From: Jeff Trawick [mailto:traw...@gmail.com] 
Sent: Tuesday, June 02, 2015 5:50 AM
To: dev@httpd.apache.org
Subject: Re: TR of 2.4.13

On 06/02/2015 07:32 AM, Jim Jagielski wrote:
> Although there are some cool things that I'd like to see in 2.4.13, I
> don't want to hold off any longer (plus, those cool things would be
> good incentive for a 2.4.14 sooner rather than later).
>
> I plan to TR 2.4.13 on Thurs, by Noon eastern.

From a presentation last week to a Python user group: These slides refer to
some small features introduced in httpd 2.4.13, which will be available very
soon. ;)



RE: SO_REUSEPORT

2015-05-17 Thread Lu, Yingqi
Hi Yann,

Thank you very much for your help!

Yingqi

-Original Message-
From: Yann Ylavic [mailto:ylavic@gmail.com] 
Sent: Saturday, May 16, 2015 3:37 AM
To: httpd
Subject: Re: SO_REUSEPORT

On Fri, May 15, 2015 at 5:12 PM, Jeff Trawick traw...@gmail.com wrote:
> On Fri, May 15, 2015 at 11:02 AM, William A Rowe Jr wr...@rowe-clan.net
> wrote:
>
>> My chief concern was that the phrase "Common Log" has a specific meaning
>> to us.
>>
>> ap_mpm_common_log_startup() or something else descriptive would be a
>> better name, but our crew is famous for not being terrific namers of
>> things :)
>>
>> Did this compile with no warnings?  It seems statics were used without
>> being explicitly initialized, and I don't have my copy of K&R to check
>> that these are always NULL, but guessing that's so.
>
> yes; but ISTR that NetWare is the reason for explicit initialization in
> some of our multi-platform code; I dunno the rhyme

s/ap_log_common/ap_log_mpm_common/ in r1679714 and added to backport proposal.

Regarding explicit initialization of globals/statics (implicit
zero-initialization is required by the C standard), I don't think it is
necessary or desirable, since it may move these variables from the .bss
section to the .data section; the former is quicker to initialize (as a
whole) at load time. (Explicit initializations to {0} usually go to .bss
too, but that depends on the compiler and/or flags, and we don't
explicitly need .data for those.)
So I did not change the code in this regard...


RE: SO_REUSEPORT

2015-05-14 Thread Lu, Yingqi
Hi All,

I just want to check whether anyone has had a chance to look at the
SO_REUSEPORT patch yet. Any feedback?

Thanks,
Yingqi

-Original Message-
From: Lu, Yingqi [mailto:yingqi...@intel.com] 
Sent: Friday, May 08, 2015 8:58 AM
To: dev@httpd.apache.org
Subject: RE: SO_REUSEPORT

Hi Christophe, Jim and Yann,

Thank you very much for considering the SO_REUSEPORT patch for the 2.4 stable
release.

I am also very happy that you found the white paper :-) All the most recent
test results are included in it. Also, we have tested (graceful) restart with
the patch (previously, there was a bug); it should be fine now. Please test it
to confirm.

Please let me know if you need anything else. Your help is appreciated. 

Thanks,
Yingqi

-Original Message-
From: Yann Ylavic [mailto:ylavic@gmail.com]
Sent: Friday, May 08, 2015 5:02 AM
To: httpd
Subject: Re: SO_REUSEPORT

On Fri, May 8, 2015 at 9:44 AM, Christophe JAILLET 
christophe.jail...@wanadoo.fr wrote:

> Maybe, 2.4.14 could focus on reviewing/merging this patch and
> associated performance improvement?
> To help adoption, maybe an ASF server could be upgraded with a
> SO_REUSEPORT patched version of Apache to have our own measurements
> and see how it scales in a real world application.

I did some testing with an injector at the time of the proposal (on a 2.2.x 
version of the patch, so mainly with worker), and can confirm that it really 
scales much better.
Where httpd without SO_REUSEPORT stops accepting/handling connections, it 
continues to shine with the option/buckets enabled.
(I don't have the numbers for now, need to search deeper, btw the ones from 
Intel are probably more of interest...)

So regarding the upgrade on infra, the difference may not be obvious if the 
tested machine is not at the limits.

One thing that probably is worth testing is (graceful) restarts, though.

Regards,
Yann.


RE: SO_REUSEPORT

2015-05-14 Thread Lu, Yingqi
Thank you very much for your help, Yann!

All, please test the patch and vote for us if you like it :-)

Thanks,
Yingqi

-Original Message-
From: Yann Ylavic [mailto:ylavic@gmail.com] 
Sent: Thursday, May 14, 2015 4:45 PM
To: httpd
Subject: Re: SO_REUSEPORT

Hi Yingqi,

2 votes already (out of 3); it's making its way ;)

Regards,
Yann.


On Fri, May 15, 2015 at 1:00 AM, Lu, Yingqi yingqi...@intel.com wrote:
 Hi All,

 I just want to check whether anyone has had a chance to look at the
 SO_REUSEPORT patch yet. Any feedback?

 Thanks,
 Yingqi

 -Original Message-
 From: Lu, Yingqi [mailto:yingqi...@intel.com]
 Sent: Friday, May 08, 2015 8:58 AM
 To: dev@httpd.apache.org
 Subject: RE: SO_REUSEPORT

 Hi Christophe, Jim and Yann,

 Thank you very much for considering the SO_REUSEPORT patch for the 2.4 stable
 release.

 I am also very happy that you found the white paper :-) All the most recent
 test results are included in it. Also, we have tested (graceful) restart with
 the patch (previously, there was a bug); it should be fine now. Please test it
 to confirm.

 Please let me know if you need anything else. Your help is appreciated.

 Thanks,
 Yingqi

 -Original Message-
 From: Yann Ylavic [mailto:ylavic@gmail.com]
 Sent: Friday, May 08, 2015 5:02 AM
 To: httpd
 Subject: Re: SO_REUSEPORT

 On Fri, May 8, 2015 at 9:44 AM, Christophe JAILLET 
 christophe.jail...@wanadoo.fr wrote:

> Maybe, 2.4.14 could focus on reviewing/merging this patch and
> associated performance improvement?
> To help adoption, maybe an ASF server could be upgraded with a
> SO_REUSEPORT patched version of Apache to have our own measurements
> and see how it scales in a real world application.

 I did some testing with an injector at the time of the proposal (on a 2.2.x 
 version of the patch, so mainly with worker), and can confirm that it really 
 scales much better.
 Where httpd without SO_REUSEPORT stops accepting/handling connections, it 
 continues to shine with the option/buckets enabled.
 (I don't have the numbers for now, need to search deeper, btw the ones 
 from Intel are probably more of interest...)

 So regarding the upgrade on infra, the difference may not be obvious if the 
 tested machine is not at the limits.

 One thing that probably is worth testing is (graceful) restarts, though.

 Regards,
 Yann.


RE: SO_REUSEPORT

2015-05-08 Thread Lu, Yingqi
Hi Christophe, Jim and Yann,

Thank you very much for considering the SO_REUSEPORT patch for the 2.4 stable
release.

I am also very happy that you found the white paper :-) All the most recent
test results are included in it. Also, we have tested (graceful) restart with
the patch (previously, there was a bug); it should be fine now. Please test it
to confirm.

Please let me know if you need anything else. Your help is appreciated. 

Thanks,
Yingqi

-Original Message-
From: Yann Ylavic [mailto:ylavic@gmail.com] 
Sent: Friday, May 08, 2015 5:02 AM
To: httpd
Subject: Re: SO_REUSEPORT

On Fri, May 8, 2015 at 9:44 AM, Christophe JAILLET 
christophe.jail...@wanadoo.fr wrote:

> Maybe, 2.4.14 could focus on reviewing/merging this patch and
> associated performance improvement?
> To help adoption, maybe an ASF server could be upgraded with a
> SO_REUSEPORT patched version of Apache to have our own measurements
> and see how it scales in a real world application.

I did some testing with an injector at the time of the proposal (on a 2.2.x 
version of the patch, so mainly with worker), and can confirm that it really 
scales much better.
Where httpd without SO_REUSEPORT stops accepting/handling connections, it 
continues to shine with the option/buckets enabled.
(I don't have the numbers for now, need to search deeper, btw the ones from 
Intel are probably more of interest...)

So regarding the upgrade on infra, the difference may not be obvious if the 
tested machine is not at the limits.

One thing that probably is worth testing is (graceful) restarts, though.

Regards,
Yann.


RE: [NOTICE] Intend to TR 2.4.12 tomorrow (Thurs, Jan 22)

2015-01-27 Thread Lu, Yingqi
Hi Bill,

Thanks for your comments and explanation on the procedure. 

If you have time, please review this patch and let me know if you see any
issues.

Thanks,
Yingqi

-Original Message-
From: William A. Rowe Jr. [mailto:wr...@rowe-clan.net] 
Sent: Monday, January 26, 2015 10:56 AM
To: Lu, Yingqi
Cc: dev@httpd.apache.org
Subject: Re: [NOTICE] Intend to TR 2.4.12 tomorrow (Thurs, Jan 22)

On Thu, 22 Jan 2015 18:08:20 +
Lu, Yingqi yingqi...@intel.com wrote:

 Hi Jim,
 
 Thanks for the update!
 
 A quick question on the review and testing procedure. Yann Ylavic has
 already made a 2.4 version of the patch available; the link is included
 at http://svn.apache.org/r1651967 . Is this sufficient, or is anything
 additional needed at this point? If no bug is reported as 2.4.13
 approaches, will the patch automatically be on the commit list, or is
 there anything else we need to ensure?
 
 Thanks,
 Yingqi
 
 -Original Message-
 From: Jim Jagielski [mailto:j...@jagunet.com]
 Sent: Thursday, January 22, 2015 9:37 AM
 To: dev@httpd.apache.org
 Subject: Re: [NOTICE] Intend to TR 2.4.12 tomorrow (Thurs, Jan 22)
 
 Since that was a very new patch, there was a risk in adding it to
 2.4.11/12 since it would not have had enough time for adequate 
 testing. It will be proposed for 2.4.13.


Yingqi,

a patch of this scale is generally added to the branch early, once three 
committers have had a chance to review it.  By living on the 2.4 branch for 
some time (long before 2.4.13 is considered for release) we have the best 
chance of catching a problem between now and the release of 2.4.13.  

It would be just as unlikely to be added just before 2.4.13 is tagged because 
that doesn't give enough of the community time to review it.

Yours,

Bill


RE: [NOTICE] Intend to TR 2.4.12 tomorrow (Thurs, Jan 22)

2015-01-23 Thread Lu, Yingqi
Hi Yann,

Thanks for your explanation. Now, I understand it much better :-)

Everyone, please take some time to review our SO_REUSEPORT patch and let us 
know your feedback. 

Thanks,
Yingqi

-Original Message-
From: Yann Ylavic [mailto:ylavic@gmail.com] 
Sent: Friday, January 23, 2015 4:04 AM
To: httpd
Subject: Re: [NOTICE] Intend to TR 2.4.12 tomorrow (Thurs, Jan 22)

Hi Yingqi,

On Thu, Jan 22, 2015 at 7:08 PM, Lu, Yingqi yingqi...@intel.com wrote:

 A quick question on the review and testing procedure. Yann Ylavic has already
 made a 2.4 version of the patch available; the link is included at
 http://svn.apache.org/r1651967 . Is this sufficient, or is anything additional
 needed at this point? If no bug is reported as 2.4.13 approaches, will the
 patch automatically be on the commit list, or is there anything else we need
 to ensure?

The 2.4.x backport proposal needs 3 committers positive reviews
(votes) to be accepted and included in 2.4.x.
I already added my +1 for it, we just now have to wait for 2 more ones (without 
any -1)...

Regards,
Yann.


RE: [NOTICE] Intend to TR 2.4.12 tomorrow (Thurs, Jan 22)

2015-01-22 Thread Lu, Yingqi
Hi Jim,

Is the SO_REUSEPORT patch committed in this build?

Thanks,
Lucy

-Original Message-
From: Jim Jagielski [mailto:j...@jagunet.com] 
Sent: Thursday, January 22, 2015 3:43 AM
To: dev@httpd.apache.org
Subject: Re: [NOTICE] Intend to TR 2.4.12 tomorrow (Thurs, Jan 22)

I expect to TR around 1pm eastern... all code and patches expected to be in 
2.4.12 are currently committed. No further changes expected.


RE: [NOTICE] Intend to TR 2.4.12 tomorrow (Thurs, Jan 22)

2015-01-22 Thread Lu, Yingqi
Hi Jim,

Thanks for the update!

A quick question on the review and testing procedure. Yann Ylavic has already
made a 2.4 version of the patch available; the link is included at
http://svn.apache.org/r1651967 . Is this sufficient, or is anything additional
needed at this point? If no bug is reported as 2.4.13 approaches, will the
patch automatically be on the commit list, or is there anything else we need
to ensure?

Thanks,
Yingqi

-Original Message-
From: Jim Jagielski [mailto:j...@jagunet.com] 
Sent: Thursday, January 22, 2015 9:37 AM
To: dev@httpd.apache.org
Subject: Re: [NOTICE] Intend to TR 2.4.12 tomorrow (Thurs, Jan 22)

Since that was a very new patch, there was a risk in adding it to 2.4.11/12 
since it would not have had enough time for adequate testing. It will be 
proposed for 2.4.13.

 On Jan 22, 2015, at 12:14 PM, Lu, Yingqi yingqi...@intel.com wrote:
 
 Hi Jim,
 
 Is the SO_REUSEPORT patch committed in this build?
 
 Thanks,
 Lucy
 
 -Original Message-
 From: Jim Jagielski [mailto:j...@jagunet.com]
 Sent: Thursday, January 22, 2015 3:43 AM
 To: dev@httpd.apache.org
 Subject: Re: [NOTICE] Intend to TR 2.4.12 tomorrow (Thurs, Jan 22)
 
 I expect to TR around 1pm eastern... all code and patches expected to be in 
 2.4.12 are currently committed. No further changes expected.



RE: AW: Time for 2.4.11

2015-01-21 Thread Lu, Yingqi
Hi All,

I just want to check whether there is any feedback on this. The 2.4.12 TR
opens tomorrow, so it would be great to get your feedback soon.

Thanks,
Lucy

-Original Message-
From: Lu, Yingqi [mailto:yingqi...@intel.com] 
Sent: Monday, January 19, 2015 4:32 PM
To: dev@httpd.apache.org
Subject: RE: AW: Time for 2.4.11

Hi All,

Sorry for the delay. Below is a draft of the documentation for
ListenCoresBucketsRatio. Please chime in with your feedback and comments.
This is my first time helping with the documentation, so please let me know if
this is sufficient or whether I need to follow a specific format.

Also, the 2.4 backport of the SO_REUSEPORT patch has already been proposed at
http://svn.apache.org/r1651967, which also includes a link to the 2.4 version
of the patch. Thanks very much to Yann Ylavic for his help! Everyone, please
take some time to review the patch and let me know your feedback and comments.
If you like it, please vote for it to be added in 2.4.12.

ListenCoresBucketsRatio Directive

Description: Enables the duplicated listener (SO_REUSEPORT) feature
Syntax: ListenCoresBucketsRatio num
Default: ListenCoresBucketsRatio 0
Context: server config
Status: Core
Module: core

The SO_REUSEPORT feature introduced in Linux kernel 3.9 allows multiple
sockets to listen on the same IP:port, with the kernel distributing incoming
connections among them in round-robin fashion. ListenCoresBucketsRatio sets
the ratio between the number of active CPU threads and the number of listener
buckets. Each listener bucket is assigned one listener and one accept mutex.
The default value of ListenCoresBucketsRatio is 0, which means there is a
single listener bucket (and hence one listener and one accept mutex). When the
directive is set to a value between 1 and the number of active CPU threads,
Apache httpd first checks whether the kernel supports SO_REUSEPORT. If it
does, the number of listener buckets is computed as the total number of active
CPU threads divided by ListenCoresBucketsRatio. In testing, particularly on
systems with high core counts, enabling this feature (setting it to a
non-zero value) has shown significant performance improvements and reduced
response times.

When ListenCoresBucketsRatio is non-zero, Apache httpd checks the
StartServers/MinSpareServers/MaxSpareServers/MinSpareThreads/MaxSpareThreads
directives and ensures there is always at least one httpd process per listener
bucket. You may need to tune these directives for your environment; a good
starting point is to start httpd with 2-4 processes per listener bucket (for
example, StartServers = 2 * number of listener buckets), keeping at least one
of them idle. You can increase the values later if needed.
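[Editor's note: to make the draft concrete, here is a hypothetical configuration fragment; the thread count and MPM values are invented for illustration, following the 2-4 processes per bucket guideline above.]

```apache
# Assuming a machine with 16 active CPU threads, ListenCoresBucketsRatio 8
# yields 16 / 8 = 2 listener buckets, each with its own SO_REUSEPORT
# listener and accept mutex.
Listen 80
ListenCoresBucketsRatio 8

# Example prefork tuning: 2 processes per bucket to start
# (2 buckets * 2 = 4), keeping at least one spare process per bucket.
StartServers     4
MinSpareServers  2
MaxSpareServers  8
```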

Thanks,
Yingqi

-Original Message-
From: Lu, Yingqi [mailto:yingqi...@intel.com]
Sent: Thursday, January 15, 2015 9:39 AM
To: dev@httpd.apache.org
Subject: RE: AW: Time for 2.4.11

Hi Yann,

Thanks very much for your help! 

Yes, I think I can help document ListenCoresBucketsRatio, or at least draft
it. Also, I think I can share the settings from our testing related to this
work.

I will send them later this week.

Thanks,
Yingqi

-Original Message-
From: Yann Ylavic [mailto:ylavic@gmail.com]
Sent: Thursday, January 15, 2015 2:06 AM
To: httpd
Subject: Re: AW: Time for 2.4.11

On Thu, Jan 15, 2015 at 9:25 AM, Yann Ylavic ylavic@gmail.com wrote:

 The ListenCoresBucketsRatio documentation is still missing, and I don't
 think I can do it today; could you?

Also, would you share maybe some recommended settings 
({Min,Max}Spare*,ServerLimit, StartServer, ...) wrt bucketing and this new 
directive?

I did some testing (though with linux-3.14, and httpd-2.2.x backport of the 
patch), and it seems it really helps the scalability (at the limits).
And I did not notice any special dysfunctioning either, including during 
(graceful) restarts.
So +1 for me.

But, since the patch is quite big, it may be hard for reviewers to (in)validate 
today (thurs here, which seems to be the TR date).
So maybe we can take more time for this (with a patch already available for 
those who care) and wait until 2.4.12 (next next)?
What do you (reviewers) think?

Thanks,
Yann.


RE: AW: Time for 2.4.11

2015-01-19 Thread Lu, Yingqi
Hi All,

Sorry for the delay. Below is a draft of the documentation for
ListenCoresBucketsRatio. Please chime in with your feedback and comments.
This is my first time helping with the documentation, so please let me know if
this is sufficient or whether I need to follow a specific format.

Also, the 2.4 backport of the SO_REUSEPORT patch has already been proposed at
http://svn.apache.org/r1651967, which also includes a link to the 2.4 version
of the patch. Thanks very much to Yann Ylavic for his help! Everyone, please
take some time to review the patch and let me know your feedback and comments.
If you like it, please vote for it to be added in 2.4.12.

ListenCoresBucketsRatio Directive

Description: Enables the duplicated listener (SO_REUSEPORT) feature
Syntax: ListenCoresBucketsRatio num
Default: ListenCoresBucketsRatio 0
Context: server config
Status: Core
Module: core

The SO_REUSEPORT feature introduced in Linux kernel 3.9 allows multiple
sockets to listen on the same IP:port, with the kernel distributing incoming
connections among them in round-robin fashion. ListenCoresBucketsRatio sets
the ratio between the number of active CPU threads and the number of listener
buckets. Each listener bucket is assigned one listener and one accept mutex.
The default value of ListenCoresBucketsRatio is 0, which means there is a
single listener bucket (and hence one listener and one accept mutex). When the
directive is set to a value between 1 and the number of active CPU threads,
Apache httpd first checks whether the kernel supports SO_REUSEPORT. If it
does, the number of listener buckets is computed as the total number of active
CPU threads divided by ListenCoresBucketsRatio. In testing, particularly on
systems with high core counts, enabling this feature (setting it to a
non-zero value) has shown significant performance improvements and reduced
response times.

When ListenCoresBucketsRatio is non-zero, Apache httpd checks the
StartServers/MinSpareServers/MaxSpareServers/MinSpareThreads/MaxSpareThreads
directives and ensures there is always at least one httpd process per listener
bucket. You may need to tune these directives for your environment; a good
starting point is to start httpd with 2-4 processes per listener bucket (for
example, StartServers = 2 * number of listener buckets), keeping at least one
of them idle. You can increase the values later if needed.

Thanks,
Yingqi

-Original Message-
From: Lu, Yingqi [mailto:yingqi...@intel.com] 
Sent: Thursday, January 15, 2015 9:39 AM
To: dev@httpd.apache.org
Subject: RE: AW: Time for 2.4.11

Hi Yann,

Thanks very much for your help! 

Yes, I think I can help document ListenCoresBucketsRatio, or at least draft
it. Also, I think I can share the settings from our testing related to this
work.

I will send them later this week.

Thanks,
Yingqi

-Original Message-
From: Yann Ylavic [mailto:ylavic@gmail.com]
Sent: Thursday, January 15, 2015 2:06 AM
To: httpd
Subject: Re: AW: Time for 2.4.11

On Thu, Jan 15, 2015 at 9:25 AM, Yann Ylavic ylavic@gmail.com wrote:

 The ListenCoresBucketsRatio documentation is still missing, and I don't
 think I can do it today; could you?

Also, would you share maybe some recommended settings 
({Min,Max}Spare*,ServerLimit, StartServer, ...) wrt bucketing and this new 
directive?

I did some testing (though with linux-3.14, and httpd-2.2.x backport of the 
patch), and it seems it really helps the scalability (at the limits).
And I did not notice any special dysfunctioning either, including during 
(graceful) restarts.
So +1 for me.

But, since the patch is quite big, it may be hard for reviewers to (in)validate 
today (thurs here, which seems to be the TR date).
So maybe we can take more time for this (with a patch already available for 
those who care) and wait until 2.4.12 (next next)?
What do you (reviewers) think?

Thanks,
Yann.


RE: AW: Time for 2.4.11

2015-01-15 Thread Lu, Yingqi
Hi Yann,

Thanks very much for your help! 

Yes, I think I can help document ListenCoresBucketsRatio, or at least draft
it. Also, I think I can share the settings from our testing related to this
work.

I will send them later this week.

Thanks,
Yingqi

-Original Message-
From: Yann Ylavic [mailto:ylavic@gmail.com] 
Sent: Thursday, January 15, 2015 2:06 AM
To: httpd
Subject: Re: AW: Time for 2.4.11

On Thu, Jan 15, 2015 at 9:25 AM, Yann Ylavic ylavic@gmail.com wrote:

 The ListenCoresBucketsRatio documentation is still missing, and I don't
 think I can do it today; could you?

Also, would you share maybe some recommended settings 
({Min,Max}Spare*,ServerLimit, StartServer, ...) wrt bucketing and this new 
directive?

I did some testing (though with linux-3.14, and httpd-2.2.x backport of the 
patch), and it seems it really helps the scalability (at the limits).
And I did not notice any special dysfunctioning either, including during 
(graceful) restarts.
So +1 for me.

But, since the patch is quite big, it may be hard for reviewers to (in)validate 
today (thurs here, which seems to be the TR date).
So maybe we can take more time for this (with a patch already available for 
those who care) and wait until 2.4.12 (next next)?
What do you (reviewers) think?

Thanks,
Yann.


RE: AW: Time for 2.4.11

2015-01-15 Thread Lu, Yingqi
By the way, do you have an estimate of when 2.4.12 will be?

I guess I will ping you back when the window opens for 2.4.12!

Thanks,
Yingqi

-Original Message-
From: Lu, Yingqi 
Sent: Thursday, January 15, 2015 9:39 AM
To: httpd
Subject: RE: AW: Time for 2.4.11

Hi Yann,

Thanks very much for your help! 

 Yes, I think I can help document ListenCoresBucketsRatio, or at least draft
 it. Also, I think I can share the settings from our testing related to this
 work.

I will send them later this week.

Thanks,
Yingqi

-Original Message-
From: Yann Ylavic [mailto:ylavic@gmail.com]
Sent: Thursday, January 15, 2015 2:06 AM
To: httpd
Subject: Re: AW: Time for 2.4.11

On Thu, Jan 15, 2015 at 9:25 AM, Yann Ylavic ylavic@gmail.com wrote:

 The ListenCoresBucketsRatio documentation is still missing, and I don't
 think I can do it today; could you?

Also, would you share maybe some recommended settings 
({Min,Max}Spare*,ServerLimit, StartServer, ...) wrt bucketing and this new 
directive?

I did some testing (though with linux-3.14, and httpd-2.2.x backport of the 
patch), and it seems it really helps the scalability (at the limits).
And I did not notice any special dysfunctioning either, including during 
(graceful) restarts.
So +1 for me.

But, since the patch is quite big, it may be hard for reviewers to (in)validate 
today (thurs here, which seems to be the TR date).
So maybe we can take more time for this (with a patch already available for 
those who care) and wait until 2.4.12 (next next)?
What do you (reviewers) think?

Thanks,
Yann.


RE: Time for 2.4.11

2015-01-15 Thread Lu, Yingqi
Thanks for this information and all the help! I will ping you back when the
2.4.12 window opens :-)

Thanks,
Yingqi

-Original Message-
From: Jim Jagielski [mailto:j...@jagunet.com] 
Sent: Thursday, January 15, 2015 9:52 AM
To: dev@httpd.apache.org
Subject: Re: Time for 2.4.11

I try to do a release every 3-4 months, but this one lagged behind.

 On Jan 15, 2015, at 12:41 PM, Lu, Yingqi yingqi...@intel.com wrote:
 
 By the way, do you have an estimate of when 2.4.12 will be?
 
 I guess I will ping you back when the window opens for 2.4.12!
 
 Thanks,
 Yingqi
 
 -Original Message-
 From: Lu, Yingqi
 Sent: Thursday, January 15, 2015 9:39 AM
 To: httpd
 Subject: RE: AW: Time for 2.4.11
 
 Hi Yann,
 
 Thanks very much for your help! 
 
 Yes, I think I can help document ListenCoresBucketsRatio, or at least
 draft it. Also, I think I can share the settings from our testing
 related to this work.
 
 I will send them later this week.
 
 Thanks,
 Yingqi
 
 -Original Message-
 From: Yann Ylavic [mailto:ylavic@gmail.com]
 Sent: Thursday, January 15, 2015 2:06 AM
 To: httpd
 Subject: Re: AW: Time for 2.4.11
 
 On Thu, Jan 15, 2015 at 9:25 AM, Yann Ylavic ylavic@gmail.com wrote:
 
 The ListenCoresBucketsRatio documentation is still missing, and I don't
 think I can do it today; could you?
 
 Also, would you share maybe some recommended settings 
 ({Min,Max}Spare*,ServerLimit, StartServer, ...) wrt bucketing and this new 
 directive?
 
 I did some testing (though with linux-3.14, and httpd-2.2.x backport of the 
 patch), and it seems it really helps the scalability (at the limits).
 And I did not notice any special dysfunctioning either, including during 
 (graceful) restarts.
 So +1 for me.
 
 But, since the patch is quite big, it may be hard for reviewers to 
 (in)validate today (thurs here, which seems to be the TR date).
 So maybe we can take more time for this (with a patch already available for 
 those who care) and wait until 2.4.12 (next next)?
 What do you (reviewers) think?
 
 Thanks,
 Yann.



Re: AW: Time for 2.4.11

2015-01-15 Thread Lu, Yingqi
Ok, I will send them in the morning.

Thanks,
Yingqi

 On Jan 14, 2015, at 11:53 PM, Plüm, Rüdiger, Vodafone Group 
 ruediger.pl...@vodafone.com wrote:
 
 All of them are needed.
 
 Regards
 
 Rüdiger
 
 -Original Message-
 From: Lu, Yingqi [mailto:yingqi...@intel.com]
 Sent: Thursday, January 15, 2015 08:12
 To: dev@httpd.apache.org
 Subject: RE: Time for 2.4.11
 
 Hi Jim,
 
 I just checked and found that the most recent commit for the SO_REUSEPORT
 patch work was done on Dec 4, 2014, in trunk revision 1643179. There are
 other commits for this patch work as well. Please let me know if you need
 the commit IDs from all of them or whether the most recent one is enough;
 I can help you find them if you want.
 
 Thanks,
 Yingqi
 
 -Original Message-
 From: Lu, Yingqi [mailto:yingqi...@intel.com]
 Sent: Wednesday, January 14, 2015 3:35 PM
 To: dev@httpd.apache.org
 Subject: RE: Time for 2.4.11
 
 Hi Jim,
 
 Thanks very much for your replies. I do not think there is a 2.4 backport
 patch available. All my previous work is on top of the trunk version.
 However, I think you may be able to apply the svn commits to 2.4, since
 the work is pretty much self-contained.
 
 Please let me know if that works for you.
 
 Thanks,
 Yingqi Lu
 
 -Original Message-
 From: Jim Jagielski [mailto:j...@jagunet.com]
 Sent: Wednesday, January 14, 2015 3:07 PM
 To: dev@httpd.apache.org
 Subject: Re: Time for 2.4.11
 
 I haven't had time to check... is there an actual 2.4 backport patch
 available, or do I need to craft one (or do the svn commits apply
 cleanly)??
 On Jan 14, 2015, at 12:20 PM, Lu, Yingqi yingqi...@intel.com wrote:
 
 I just want to check what the status of the SO_REUSEPORT patch is.
 Do you see any issues backporting it? Please let me know.
 
 Attached is the email I sent last week on the same topic in case you
 missed that.
 
 Thanks,
 Yingqi
 
 -Original Message-
 From: Jim Jagielski [mailto:j...@jagunet.com]
 Sent: Wednesday, January 14, 2015 4:54 AM
 To: dev@httpd.apache.org
 Subject: Re: Time for 2.4.11
 
 Get your backports into STATUS now, and test and vote on the existing
 (and to-be-entered) proposals asap!
 
 On Jan 13, 2015, at 12:05 PM, Jim Jagielski j...@jagunet.com wrote:
 
 Okey dokey... the idea is a TR on Thurs with a release next
 Mon/Tues.
 
 On Jan 8, 2015, at 6:11 AM, Jim Jagielski j...@jagunet.com wrote:
 
 Let's shoot for a TR next week. The work will keep me warm :)
 
 RE Time for 2.4.11.msg
 


RE: Time for 2.4.11

2015-01-14 Thread Lu, Yingqi
Hello,

Someone on this list pointed out that he had issues opening the *.msg attachment. 
I am now re-attaching the email I sent last week as plain text. Hope it works 
for all of you.

I just want to check the status of the SO_REUSEPORT patch. Please let me know 
if you have any issues backporting it. It has been in trunk for 7 months. 
After several modifications, we think it is ready to go to stable. Also, we 
have completed tests on all 4 existing MPMs and different use cases. 
Results look good to us.

Please let me know.

Thanks,
Yingqi

-Original Message-
From: Lu, Yingqi [mailto:yingqi...@intel.com] 
Sent: Wednesday, January 14, 2015 9:20 AM
To: dev@httpd.apache.org
Subject: RE: Time for 2.4.11

I just want to check what the status of the SO_REUSEPORT patch is. Do you see 
any issues backporting it? Please let me know.

Attached is the email I sent last week on the same topic in case you missed 
that.

Thanks,
Yingqi

-Original Message-
From: Jim Jagielski [mailto:j...@jagunet.com] 
Sent: Wednesday, January 14, 2015 4:54 AM
To: dev@httpd.apache.org
Subject: Re: Time for 2.4.11

Get your backports into STATUS now, and test and vote on the existing (and 
to-be-entered) proposals asap!

 On Jan 13, 2015, at 12:05 PM, Jim Jagielski j...@jagunet.com wrote:
 
 Okey dokey... the idea is a TR on Thurs with a release next Mon/Tues.
 
 On Jan 8, 2015, at 6:11 AM, Jim Jagielski j...@jagunet.com wrote:
 
 Let's shoot for a TR next week. The work will keep me warm :)
 

From:   Lu, Yingqi yingqi...@intel.com
Sent:   Friday, January 09, 2015 9:57 AM
To: dev@httpd.apache.org
Subject:RE: Time for 2.4.11

Hi Jim,

Thanks for your email. I think it should not be very hard to backport. After 
you trunked the original patch last June, I worked with Yann Ylavic last 
November to fix some minor issues. With the current trunk code, there is no 
major API change relative to 2.4, and we have tested multiple workloads and 
use cases on all 4 existing MPMs. It looks good to us.

Please note that with the current code there is a new configuration directive 
called ListenCoresBucketsRatio. The default value is 0, which means 
SO_REUSEPORT is disabled. This is different from the original patch. Yann 
chose the opt-in approach because he finds it safer, especially for backports 
to stable. That said, I think it would be a good idea to add some 
documentation introducing the feature and the directive itself, so users can 
take advantage of it.

Please let me know if you have any questions. Again, thanks very much for the 
help, really appreciated!

The whole work can be followed in three threads with name:
1. [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT support
2. svn commit: r1599531 - in /httpd/httpd/trunk: CHANGES include/ap_listen.h 
server/listen.c 
server/mpm/event/event.c server/mpm/prefork/prefork.c 
server/mpm/worker/worker.c 
server/mpm_unix.c
3. Listeners buckets and duplication w/ and w/o SO_REUSEPORT on trunk

Thanks,
Yingqi

-Original Message-
From: Jim Jagielski [mailto:j...@jagunet.com] 
Sent: Friday, January 09, 2015 5:47 AM
To: dev@httpd.apache.org
Subject: Re: Time for 2.4.11

Let me look... how easy is the backport?
 On Jan 8, 2015, at 12:22 PM, Lu, Yingqi yingqi...@intel.com wrote:
 
 Hi All,
 
 Can we get the SO_REUSEPORT support into this new stable version? The 
 first version of the patch was trunked last June. After tests and 
 modifications, I think it is ready to go.
 
 Please let me know what you think.
 
 Thanks,
 Yingqi
 
 -Original Message-
 From: Jim Jagielski [mailto:j...@jagunet.com] 
 Sent: Thursday, January 08, 2015 3:12 AM
 To: httpd
 Subject: Time for 2.4.11
 
 Let's shoot for a TR next week. The work will keep me warm :)



RE: Time for 2.4.11

2015-01-14 Thread Lu, Yingqi
Hi Jim,

Thanks very much for your reply. I do not think there is a 2.4 backport patch 
available. All my previous work is on top of the trunk version. However, I 
think you may be able to apply the svn commits to 2.4, since the work is pretty 
much self-contained.

Please let me know if that works for you.

Thanks,
Yingqi Lu 

-Original Message-
From: Jim Jagielski [mailto:j...@jagunet.com] 
Sent: Wednesday, January 14, 2015 3:07 PM
To: dev@httpd.apache.org
Subject: Re: Time for 2.4.11

I haven't had time to check... is there an actual 2.4 backport patch available, 
or do I need to craft one (or do the svn commits apply cleanly)??
 On Jan 14, 2015, at 12:20 PM, Lu, Yingqi yingqi...@intel.com wrote:
 
 I just want to check what the status of the SO_REUSEPORT patch is. Do you 
 see any issues backporting it? Please let me know.
 
 Attached is the email I sent last week on the same topic in case you missed 
 that.
 
 Thanks,
 Yingqi
 
 -Original Message-
 From: Jim Jagielski [mailto:j...@jagunet.com]
 Sent: Wednesday, January 14, 2015 4:54 AM
 To: dev@httpd.apache.org
 Subject: Re: Time for 2.4.11
 
 Get your backports into STATUS now, and test and vote on the existing (and 
 to-be-entered) proposals asap!
 
 On Jan 13, 2015, at 12:05 PM, Jim Jagielski j...@jagunet.com wrote:
 
 Okey dokey... the idea is a TR on Thurs with a release next Mon/Tues.
 
 On Jan 8, 2015, at 6:11 AM, Jim Jagielski j...@jagunet.com wrote:
 
 Let's shoot for a TR next week. The work will keep me warm :)
 
 
 RE Time for 2.4.11.msg



RE: Time for 2.4.11

2015-01-14 Thread Lu, Yingqi
Hi Jim,

I just checked and found that the most recent commit related to the SO_REUSEPORT 
patch was made on Dec 4, 2014, as trunk revision 1643179. There were also 
other commits for this patch. Please let me know if you need the commit IDs 
for all of them, or whether the most recent one is enough. I can help you 
find them if you want.

Thanks,
Yingqi

-Original Message-
From: Lu, Yingqi [mailto:yingqi...@intel.com] 
Sent: Wednesday, January 14, 2015 3:35 PM
To: dev@httpd.apache.org
Subject: RE: Time for 2.4.11

Hi Jim,

Thanks very much for your reply. I do not think there is a 2.4 backport patch 
available. All my previous work is on top of the trunk version. However, I 
think you may be able to apply the svn commits to 2.4, since the work is pretty 
much self-contained.

Please let me know if that works for you.

Thanks,
Yingqi Lu 

-Original Message-
From: Jim Jagielski [mailto:j...@jagunet.com] 
Sent: Wednesday, January 14, 2015 3:07 PM
To: dev@httpd.apache.org
Subject: Re: Time for 2.4.11

I haven't had time to check... is there an actual 2.4 backport patch available, 
or do I need to craft one (or do the svn commits apply cleanly)??
 On Jan 14, 2015, at 12:20 PM, Lu, Yingqi yingqi...@intel.com wrote:
 
 I just want to check what the status of the SO_REUSEPORT patch is. Do you 
 see any issues backporting it? Please let me know.
 
 Attached is the email I sent last week on the same topic in case you missed 
 that.
 
 Thanks,
 Yingqi
 
 -Original Message-
 From: Jim Jagielski [mailto:j...@jagunet.com]
 Sent: Wednesday, January 14, 2015 4:54 AM
 To: dev@httpd.apache.org
 Subject: Re: Time for 2.4.11
 
 Get your backports into STATUS now, and test and vote on the existing (and 
 to-be-entered) proposals asap!
 
 On Jan 13, 2015, at 12:05 PM, Jim Jagielski j...@jagunet.com wrote:
 
 Okey dokey... the idea is a TR on Thurs with a release next Mon/Tues.
 
 On Jan 8, 2015, at 6:11 AM, Jim Jagielski j...@jagunet.com wrote:
 
 Let's shoot for a TR next week. The work will keep me warm :)
 
 
 RE Time for 2.4.11.msg



RE: Time for 2.4.11

2015-01-14 Thread Lu, Yingqi
I just want to check what the status of the SO_REUSEPORT patch is. Do you see 
any issues backporting it? Please let me know.

Attached is the email I sent last week on the same topic in case you missed 
that.

Thanks,
Yingqi

-Original Message-
From: Jim Jagielski [mailto:j...@jagunet.com] 
Sent: Wednesday, January 14, 2015 4:54 AM
To: dev@httpd.apache.org
Subject: Re: Time for 2.4.11

Get your backports into STATUS now, and test and vote on the existing (and 
to-be-entered) proposals asap!

 On Jan 13, 2015, at 12:05 PM, Jim Jagielski j...@jagunet.com wrote:
 
 Okey dokey... the idea is a TR on Thurs with a release next Mon/Tues.
 
 On Jan 8, 2015, at 6:11 AM, Jim Jagielski j...@jagunet.com wrote:
 
 Let's shoot for a TR next week. The work will keep me warm :)
 



RE Time for 2.4.11.msg
Description: RE Time for 2.4.11.msg


RE: Time for 2.4.11

2015-01-12 Thread Lu, Yingqi
Hi All,

I just want to ping again to see if there is any updates on this?

Thanks,
Yingqi

-Original Message-
From: Lu, Yingqi [mailto:yingqi...@intel.com] 
Sent: Friday, January 09, 2015 9:57 AM
To: dev@httpd.apache.org
Subject: RE: Time for 2.4.11

Hi Jim,

Thanks for your email. I think it should not be very hard to backport. After 
you trunked the original patch last June, I worked with Yann Ylavic last 
November to fix some minor issues. With the current trunk code, there is no 
major API change relative to 2.4, and we have tested multiple workloads and 
use cases on all 4 existing MPMs. It looks good to us.

Please note that with the current code there is a new configuration directive 
called ListenCoresBucketsRatio. The default value is 0, which means 
SO_REUSEPORT is disabled. This is different from the original patch. Yann 
chose the opt-in approach because he finds it safer, especially for backports 
to stable. That said, I think it would be a good idea to add some 
documentation introducing the feature and the directive itself, so users can 
take advantage of it.
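For reference, the opt-in configuration described here would look roughly as 
follows. This is an illustrative sketch only: the ListenCoresBucketsRatio 
directive name and the meaning of 0 (disabled) come from this thread, while 
the Listen line and the chosen ratio are example values.

```apache
# httpd.conf fragment (hypothetical example)
# One listener bucket per 8 CPU cores; the default of 0 keeps
# SO_REUSEPORT bucketing disabled.
ListenCoresBucketsRatio 8

Listen 80
```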

Please let me know if you have any questions. Again, thanks very much for the 
help, really appreciated!

The whole work can be followed in three threads with name:
1. [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT support
2. svn commit: r1599531 - in /httpd/httpd/trunk: CHANGES include/ap_listen.h 
server/listen.c server/mpm/event/event.c server/mpm/prefork/prefork.c 
server/mpm/worker/worker.c server/mpm_unix.c
3. Listeners buckets and duplication w/ and w/o SO_REUSEPORT on trunk

Thanks,
Yingqi

-Original Message-
From: Jim Jagielski [mailto:j...@jagunet.com] 
Sent: Friday, January 09, 2015 5:47 AM
To: dev@httpd.apache.org
Subject: Re: Time for 2.4.11

Let me look... how easy is the backport?
 On Jan 8, 2015, at 12:22 PM, Lu, Yingqi yingqi...@intel.com wrote:
 
 Hi All,
 
 Can we get the SO_REUSEPORT support into this new stable version? The 
 first version of the patch was trunked last June. After tests and 
 modifications, I think it is ready to go.
 
 Please let me know what you think.
 
 Thanks,
 Yingqi
 
 -Original Message-
 From: Jim Jagielski [mailto:j...@jagunet.com] 
 Sent: Thursday, January 08, 2015 3:12 AM
 To: httpd
 Subject: Time for 2.4.11
 
 Let's shoot for a TR next week. The work will keep me warm :)



RE: Time for 2.4.11

2015-01-09 Thread Lu, Yingqi
Hi Jim,

Thanks for your email. I think it should not be very hard to backport. After 
you trunked the original patch last June, I worked with Yann Ylavic last 
November to fix some minor issues. With the current trunk code, there is no 
major API change relative to 2.4, and we have tested multiple workloads and 
use cases on all 4 existing MPMs. It looks good to us.

Please note that with the current code there is a new configuration directive 
called ListenCoresBucketsRatio. The default value is 0, which means 
SO_REUSEPORT is disabled. This is different from the original patch. Yann 
chose the opt-in approach because he finds it safer, especially for backports 
to stable. That said, I think it would be a good idea to add some 
documentation introducing the feature and the directive itself, so users can 
take advantage of it.

Please let me know if you have any questions. Again, thanks very much for the 
help, really appreciated!

The whole work can be followed in three threads with name:
1. [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT support
2. svn commit: r1599531 - in /httpd/httpd/trunk: CHANGES include/ap_listen.h 
server/listen.c server/mpm/event/event.c server/mpm/prefork/prefork.c 
server/mpm/worker/worker.c server/mpm_unix.c
3. Listeners buckets and duplication w/ and w/o SO_REUSEPORT on trunk

Thanks,
Yingqi

-Original Message-
From: Jim Jagielski [mailto:j...@jagunet.com] 
Sent: Friday, January 09, 2015 5:47 AM
To: dev@httpd.apache.org
Subject: Re: Time for 2.4.11

Let me look... how easy is the backport?
 On Jan 8, 2015, at 12:22 PM, Lu, Yingqi yingqi...@intel.com wrote:
 
 Hi All,
 
 Can we get the SO_REUSEPORT support into this new stable version? The 
 first version of the patch was trunked last June. After tests and 
 modifications, I think it is ready to go.
 
 Please let me know what you think.
 
 Thanks,
 Yingqi
 
 -Original Message-
 From: Jim Jagielski [mailto:j...@jagunet.com] 
 Sent: Thursday, January 08, 2015 3:12 AM
 To: httpd
 Subject: Time for 2.4.11
 
 Let's shoot for a TR next week. The work will keep me warm :)



RE: Time for 2.4.11

2015-01-08 Thread Lu, Yingqi
Hi All,

Can we get the SO_REUSEPORT support into this new stable version? The first 
version of the patch was trunked last June. After tests and modifications, I 
think it is ready to go.

Please let me know what you think.

Thanks,
Yingqi

-Original Message-
From: Jim Jagielski [mailto:j...@jagunet.com] 
Sent: Thursday, January 08, 2015 3:12 AM
To: httpd
Subject: Time for 2.4.11

Let's shoot for a TR next week. The work will keep me warm :)


RE: Listeners buckets and duplication w/ and w/o SO_REUSEPORT on trunk

2014-11-07 Thread Lu, Yingqi
Hi Yann,

Thanks for your quick email.

Yes, with the current implementation the accept mutex is not removed, just cut 
into smaller ones. My point was that a smaller system also has fewer hardware 
resources, so the maximum traffic it can drive is lower than on big systems. 
In that sense, the per-bucket child-process contention may not be much higher 
than on a big system. Running at peak performance, the total number of child 
processes should scale with the size of the system if there are no other 
hardware resource limitations. The child-processes-per-bucket ratio should 
then stay about the same regardless of system size, given a reasonable 
ListenCoresBucketsRatio.

Regarding the timeout issue, I think I did not state it clearly in my last 
email. Testing the trunk version with ServerLimit=Number_buckets=StartServers, 
I did not see any connection timeouts or connection losses. I only saw 
performance regressions.

The timeout and connection-loss issues only occur when I test the approach 
that creates the listen socket inside the child process. In that case, the 
master process no longer controls any listen sockets; each child handles its 
own. If I remember correctly, that was your quick prototype from a while back, 
after I posted the original patch. In the original discussion thread, I 
mentioned the connection issues and the performance degradation as well. 

Again, thank you very much for your help!

Yingqi


-Original Message-
From: Yann Ylavic [mailto:ylavic@gmail.com] 
Sent: Friday, November 07, 2014 7:49 AM
To: httpd
Subject: Re: Listeners buckets and duplication w/ and w/o SO_REUSEPORT on trunk

Hi Yingqi,

thanks for sharing your results.

On Thu, Nov 6, 2014 at 9:12 PM, Lu, Yingqi yingqi...@intel.com wrote:
 I do not see any documentation for this new configurable flag 
 ListenCoresBucketsRatio (maybe I missed it)

Will do it when possible, good point.

 Regarding how to make small systems take advantage of this patch, I 
 actually did some testing on systems with fewer cores. The data show that 
 when a system has fewer than 16 cores, more than 1 bucket does not bring any 
 throughput or response time benefit. The patch is mainly for big 
 systems, to resolve the scalability issue. That is the reason why we 
 previously hard-coded the ratio to 8 (affecting only systems with 16 cores 
 or more).

 The accept_mutex is not much of a bottleneck anymore with the current patch 
 implementation. The current implementation already cuts 1 big mutex into 
 multiple smaller mutexes in the multiple-listen-statements case (each bucket 
 has its dedicated accept_mutex). To prove this, our data show performance 
 parity between 1 listen statement (listen 80, no accept_mutex) and 2 listen 
 statements (listen 192.168.1.1 80, listen 192.168.1.2 80, with accept_mutex) 
 with the current trunk version. Compared with the version without the 
 SO_REUSEPORT patch, we see a 28% performance gain with 1 listen statement 
 and a 69% gain with 2 listen statements.

With the current implementation and a reasonable number of servers
(children) started, this is surely true, your numbers prove it.
However, the fewer buckets (CPU cores), the more contention on each bucket (i.e. 
listeners waiting on the same socket(s)/mutex).
So the results with fewer cores are quite expected IMHO.

But we can't remove the accept mutex since there will always be more servers 
than the number of buckets.
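As background, the kernel feature the bucket design relies on can be shown in 
a few lines. This is a standalone sketch, not httpd code; it assumes a Linux 
kernel with SO_REUSEPORT support (3.9+) and Python with socket.SO_REUSEPORT 
exposed.

```python
import socket

def reuseport_pair(host="127.0.0.1"):
    """Bind two listening sockets to the same address:port.

    Without SO_REUSEPORT the second bind() would fail with EADDRINUSE;
    with it, the kernel load-balances incoming connections between the
    sockets -- the idea behind one listener "bucket" per group of
    child processes.
    """
    s1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s1.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    s1.bind((host, 0))            # let the kernel pick a free port
    port = s1.getsockname()[1]
    s1.listen(128)

    s2 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s2.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    s2.bind((host, port))         # same port: only legal with SO_REUSEPORT
    s2.listen(128)
    return s1, s2
```

The option must be set on every socket before bind(); that is why httpd's 
buckets are created up front by the parent rather than per-child at arbitrary 
times.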


 Regarding the approach where each child has its own listen socket, I did 
 some testing with the current trunk version, increasing the number of 
 buckets to equal a reasonable ServerLimit (this avoids changes in the number 
 of child processes). I also verified that MaxClients and ThreadsPerChild 
 were set properly. I used a single listen statement so that the accept_mutex 
 was disabled. Compared with the current approach, this has ~25% less 
 throughput with significantly higher response time.

 In addition, implementing a listen socket per child performs worse and has 
 connection loss/timeout issues with the current Linux kernel. Below are more 
 data we collected with the each-child-has-its-own-listen-socket approach:
 1. During the run, we noticed tons of “read timed out” errors. These errors 
 happen not only when the system is highly utilized; they even happen when 
 the system is only 10% utilized. The response time was high.
 2. Compared to the current trunk implementation, the approach where each 
 child has its own listen socket results in significantly higher (up to 10X) 
 response time at different CPU utilization levels. At peak performance, it 
 has 20+% less throughput with tons of “connection reset” errors in addition 
 to “read timed out” errors. The current trunk implementation has no errors.
 3. During graceful restart, there are tons of connection losses.

Did you also set StartServers = ServerLimit?
One

RE: Listeners buckets and duplication w/ and w/o SO_REUSEPORT on trunk

2014-11-06 Thread Lu, Yingqi
Hi Yann,

I don't mind at all. I will keep discussion following your reply there.

Thanks,
Yingqi

-Original Message-
From: Yann Ylavic [mailto:ylavic@gmail.com] 
Sent: Thursday, November 06, 2014 5:00 AM
To: httpd
Subject: Re: Listeners buckets and duplication w/ and w/o SO_REUSEPORT on trunk

Hi Yingqi,

let's continue discussing this on the original thread if you don't mind, I made 
an update there.

Thanks,
Yann.

On Thu, Nov 6, 2014 at 6:52 AM, Lu, Yingqi yingqi...@intel.com wrote:
 Hi Yann,

 I just ran some tests on the most recent trunk version. I found that 
 num_buckets defaults to 1 (ListenCoresBucketsRatio defaults to 0). Adding 
 ListenCoresBucketsRatio is great since users have control over this. 
 However, I think it may be better to default it to 8. This would make 
 SO_REUSEPORT support enabled by default (8 buckets). In case users are not 
 aware of this new ListenCoresBucketsRatio configuration directive, they can 
 still enjoy the performance benefits.

 Please let me know what you think.

 Thanks,
 Yingqi

 -Original Message-
 From: Lu, Yingqi [mailto:yingqi...@intel.com]
 Sent: Thursday, October 30, 2014 9:10 AM
 To: dev@httpd.apache.org
 Subject: RE: Listeners buckets and duplication w/ and w/o SO_REUSEPORT 
 on trunk

 Hi Yann,

 Thank you very much for your help!

 As this is getting better, I am wondering if you have plans to put this 
 SO_REUSEPORT patch into the stable version. If yes, do you have a rough 
 timeline?

 The performance gain from the patch is great; I just want more people to 
 be able to take advantage of it.

 Thanks,
 Lucy

 -Original Message-
 From: Yann Ylavic [mailto:ylavic@gmail.com]
 Sent: Thursday, October 30, 2014 8:29 AM
 To: httpd
 Subject: Re: Listeners buckets and duplication w/ and w/o SO_REUSEPORT 
 on trunk

 Hi Yingqi,

 committed in r1635521, with some changes (please see commit log).
 These are not big changes, and your work on removing the global variables and 
 restoring some previous behaviour is great, thanks for the good work.

 Regards,
 Yann.


 On Wed, Oct 29, 2014 at 6:36 PM, Lu, Yingqi yingqi...@intel.com wrote:
 Thank you very much for your help!

 Thanks,
 Yingqi

 -Original Message-
 From: Yann Ylavic [mailto:ylavic@gmail.com]
 Sent: Wednesday, October 29, 2014 10:34 AM
 To: httpd
 Subject: Re: Listeners buckets and duplication w/ and w/o 
 SO_REUSEPORT on trunk

 Hi Yingqi,

 I'm working on it currently, will commit soon.

 Regards,
 Yann.

 On Wed, Oct 29, 2014 at 6:20 PM, Lu, Yingqi yingqi...@intel.com wrote:
 Hi All,

 I just want to check if there is any feedback/comments on this?

 For details, please refer to Yann Ylavic's notes and my responses below.

 Thanks,
 Yingqi

 -Original Message-
 From: Lu, Yingqi [mailto:yingqi...@intel.com]
 Sent: Friday, October 10, 2014 4:56 PM
 To: dev@httpd.apache.org
 Subject: RE: Listeners buckets and duplication w/ and w/o 
 SO_REUSEPORT on trunk

 Dear All,

 Attached patch is generated based on current trunk. It covers the 
 prefork/worker/event/eventopt MPMs. It is supposed to address the following 
 issues regarding SO_REUSEPORT support vs. the current trunk version:

 1. Same as current trunk version implementation, when active_CPU_num = 8 
 or when so_reuseport is not supported by the kernel, ap_num_buckets is set 
 to 1. In any case, there is 1 dedicated listener per bucket.

 2. Remove global variables (mpm_listen, enable_default_listeners and 
 num_buckets). mpm_listen is changed to MPM-local. enable_default_listeners 
 is completely removed. num_buckets is changed to MPM-local 
 (ap_num_buckets). I renamed have_so_reuseport to ap_have_so_reuseport. The 
 reason for keeping that global is that it is used by 
 ap_log_common(). Based on the feedback here, I think it may not be a good 
 idea to change the function interface.

 3. Change ap_duplicated_listener to take more parameters. This function is 
 called from MPM-local code (prefork.c/worker.c/event.c/eventopt.c). In 
 this function, the prefork_listener (or worker_listener/event_listener/etc.) 
 array is initialized and assigned. ap_num_buckets is also 
 calculated inside this function. In addition, this version fixes the 
 one_process case (the current trunk version has an issue with one_process 
 enabled).

 4. Change ap_close_listener() back to previous (2.4.X version).

 5. Change dummy_connection back to previous (2.4.X version).

 6. Add ap_close_duplicated_listeners(). This is called from mpms when 
 stopping httpd.

 7. Add ap_close_child_listener(). When listener_thread (child process in 
 prefork) exit, only the dedicated listener needs to be closed (the rest are 
 already being closed in child_main when the child process starts).

 8. Remove duplication of listener when ap_num_buckets = 1 or without 
 SO_REUSEPORT support (ap_num_buckets = 1). With so_reuseport, only 
 duplicated (ap_num_buckets - 1) listeners (1

RE: Listeners buckets and duplication w/ and w/o SO_REUSEPORT on trunk

2014-11-06 Thread Lu, Yingqi
Hi Yann,

I do not see any documents regarding to this new configurable flag 
ListenCoresBucketsRatio (maybe I missed it) and also users may not be familiar 
with it, I still think maybe it is better to keep the default to 8 at least in 
the trunk. 

Regarding how to make small systems take advantage of this patch, I actually 
did some testing on systems with fewer cores. The data show that when a system 
has fewer than 16 cores, more than 1 bucket does not bring any throughput or 
response time benefit. The patch is mainly for big systems, to resolve the 
scalability issue. That is the reason why we previously hard-coded the ratio 
to 8 (affecting only systems with 16 cores or more). 
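The core-to-bucket mapping described here can be sketched as follows. This is 
a hypothetical reading of the behavior discussed in this thread (ratio 8, 
more than one bucket only from 16 cores up, 0 disables the feature); the 
actual logic in httpd's listener code may differ.

```python
def num_buckets(active_cpus: int, ratio: int) -> int:
    """One listener bucket per `ratio` active CPU cores, minimum 1.

    Hypothetical sketch: a ratio of 0 (the trunk default) disables the
    feature, i.e. a single bucket; with ratio 8, only systems with 16+
    cores get more than one bucket.
    """
    if ratio <= 0:
        return 1
    return max(active_cpus // ratio, 1)
```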

The accept_mutex is not much of a bottleneck anymore with the current patch 
implementation. The current implementation already cuts 1 big mutex into 
multiple smaller mutexes in the multiple-listen-statements case (each bucket 
has its dedicated accept_mutex). To prove this, our data show performance 
parity between 1 listen statement (listen 80, no accept_mutex) and 2 listen 
statements (listen 192.168.1.1 80, listen 192.168.1.2 80, with accept_mutex) 
with the current trunk version. Compared with the version without the 
SO_REUSEPORT patch, we see a 28% performance gain with 1 listen statement and 
a 69% gain with 2 listen statements. 

Regarding the approach where each child has its own listen socket, I did some 
testing with the current trunk version, increasing the number of buckets to 
equal a reasonable ServerLimit (this avoids changes in the number of child 
processes). I also verified that MaxClients and ThreadsPerChild were set 
properly. I used a single listen statement so that the accept_mutex was 
disabled. Compared with the current approach, this has ~25% less throughput 
with significantly higher response time.

In addition, implementing a listen socket per child performs worse and has 
connection loss/timeout issues with the current Linux kernel. Below are more 
data we collected with the each-child-has-its-own-listen-socket approach:
1. During the run, we noticed tons of “read timed out” errors. These errors 
happen not only when the system is highly utilized; they even happen when the 
system is only 10% utilized. The response time was high.
2. Compared to the current trunk implementation, the approach where each child 
has its own listen socket results in significantly higher (up to 10X) response 
time at different CPU utilization levels. At peak performance, it has 20+% 
less throughput with tons of “connection reset” errors in addition to “read 
timed out” errors. The current trunk implementation has no errors.
3. During graceful restart, there are tons of connection losses. 

Based on the above findings, I think we may want to keep the current approach. 
It is a clean, working and better performing one :-)

Thanks,
Yingqi


-Original Message-
From: Yann Ylavic [mailto:ylavic@gmail.com]
Sent: Thursday, November 06, 2014 4:59 AM
To: httpd
Subject: Re: Listeners buckets and duplication w/ and w/o SO_REUSEPORT on trunk

Rebasing discussion here since this thread seems to be referenced in PR55897, 
and the discussion has somehow been forked and continued in [1].

[1]. 
http://mail-archives.apache.org/mod_mbox/httpd-dev/201410.mbox/%3c9acd5b67aac5594cb6268234cf29cf9aa37e9...@orsmsx113.amr.corp.intel.com%3E

On Sat, Oct 11, 2014 at 1:55 AM, Lu, Yingqi yingqi...@intel.com wrote:
 Attached patch is generated based on current trunk. It covers for 
 prefork/worker/event/eventopt MPM.

The patch (modified) has now been applied to trunk with r1635521.

On Thu, Oct 30, 2014 at 5:10 PM, Lu, Yingqi yingqi...@intel.com wrote:
 As this is getting better, I am wondering if you guys have plan to put this 
 SO_REUSEPORT patch into the stable version.
 If yes, do you have a rough timeline?

The whole feature could certainly be proposed for 2.4.x since there is no 
(MAJOR) API change.

On Thu, Nov 6, 2014 at 6:52 AM, Lu, Yingqi yingqi...@intel.com wrote:
 I just ran some tests on the most recent trunk version.
 I found that num_buckets defaults to 1 (ListenCoresBucketsRatio is 
 default to 0).
 Adding ListenCoresBucketsRatio is great since users have control over this.
 However, I think it may be better to default it to 8. 
 This would make SO_REUSEPORT support enabled by default (8 buckets).
(8 buckets with 64 CPU cores, lucky you...).

Yes, this change wrt your original patch is documented in the commit message, 
including how to change it to an opt-out.
I chose the opt-in way because I almost always find it safer, especially for 
backports to stable.
I have no strong opinion on this regarding trunk, though; it could (easily) be 
an opt-out there.

Let's see what others say on this and the backport to 2.4.x.
Anyone?

 In case users are not aware of this new ListenCoresBucketsRatio 
 configurable flag, they can still enjoy

RE: Listeners buckets and duplication w/ and w/o SO_REUSEPORT on trunk

2014-11-05 Thread Lu, Yingqi
Hi Yann,

I just ran some tests on the most recent trunk version. I found that 
num_buckets defaults to 1 (ListenCoresBucketsRatio defaults to 0). Adding 
ListenCoresBucketsRatio is great since users have control over this. However, 
I think it may be better to default it to 8. This would make SO_REUSEPORT 
support enabled by default (8 buckets). In case users are not aware of this 
new ListenCoresBucketsRatio configuration directive, they can still enjoy the 
performance benefits.

Please let me know what you think.

Thanks,
Yingqi

-Original Message-
From: Lu, Yingqi [mailto:yingqi...@intel.com] 
Sent: Thursday, October 30, 2014 9:10 AM
To: dev@httpd.apache.org
Subject: RE: Listeners buckets and duplication w/ and w/o SO_REUSEPORT on trunk

Hi Yann,

Thank you very much for your help! 

As this is getting better, I am wondering if you have plans to put this 
SO_REUSEPORT patch into the stable version. If yes, do you have a rough 
timeline?

The performance gain from the patch is great; I just want more people to be 
able to take advantage of it.

Thanks,
Lucy

-Original Message-
From: Yann Ylavic [mailto:ylavic@gmail.com]
Sent: Thursday, October 30, 2014 8:29 AM
To: httpd
Subject: Re: Listeners buckets and duplication w/ and w/o SO_REUSEPORT on trunk

Hi Yingqi,

committed in r1635521, with some changes (please see commit log).
These are not big changes, and your work on removing the global variables and 
restoring some previous behaviour is great, thanks for the good work.

Regards,
Yann.


On Wed, Oct 29, 2014 at 6:36 PM, Lu, Yingqi yingqi...@intel.com wrote:
 Thank you very much for your help!

 Thanks,
 Yingqi

 -Original Message-
 From: Yann Ylavic [mailto:ylavic@gmail.com]
 Sent: Wednesday, October 29, 2014 10:34 AM
 To: httpd
 Subject: Re: Listeners buckets and duplication w/ and w/o SO_REUSEPORT 
 on trunk

 Hi Yingqi,

 I'm working on it currently, will commit soon.

 Regards,
 Yann.

 On Wed, Oct 29, 2014 at 6:20 PM, Lu, Yingqi yingqi...@intel.com wrote:
 Hi All,

 I just want to check if there is any feedback/comments on this?

 For details, please refer to Yann Ylavic's notes and my responses below.

 Thanks,
 Yingqi

 -Original Message-
 From: Lu, Yingqi [mailto:yingqi...@intel.com]
 Sent: Friday, October 10, 2014 4:56 PM
 To: dev@httpd.apache.org
 Subject: RE: Listeners buckets and duplication w/ and w/o 
 SO_REUSEPORT on trunk

 Dear All,

 Attached patch is generated based on current trunk. It covers the 
 prefork/worker/event/eventopt MPMs. It is supposed to address the following 
 issues regarding SO_REUSEPORT support vs. the current trunk version:

 1. Same as current trunk version implementation, when active_CPU_num = 8 or 
 when so_reuseport is not supported by the kernel, ap_num_buckets is set to 
 1. In any case, there is 1 dedicated listener per bucket.

 2. Remove global variables (mpm_listen, enable_default_listeners and 
 num_buckets). mpm_listen is changed to MPM-local. enable_default_listeners 
 is completely removed. num_buckets is changed to MPM-local (ap_num_buckets). 
 I renamed have_so_reuseport to ap_have_so_reuseport. The reason for keeping 
 that global is that it is used by ap_log_common(). Based on the feedback 
 here, I think it may not be a good idea to change the function interface.

 3. Change ap_duplicated_listener to have more parameters. This function is 
 called from MPM-local code (prefork.c/worker.c/event.c/eventopt.c). In this 
 function, the prefork_listener (or worker_listener/event_listener/etc.) array 
 is initialized and assigned. ap_num_buckets is also calculated inside 
 this function. In addition, this version solves the issue with the one_process 
 case (the current trunk version has an issue when one_process is enabled).

 4. Change ap_close_listener() back to previous (2.4.X version).

 5. Change dummy_connection back to previous (2.4.X version).

 6. Add ap_close_duplicated_listeners(). This is called from mpms when 
 stopping httpd.

 7. Add ap_close_child_listener(). When listener_thread (child process in 
 prefork) exit, only the dedicated listener needs to be closed (the rest are 
 already being closed in child_main when the child process starts).

 8. Remove listener duplication when ap_num_buckets = 1 or without 
 SO_REUSEPORT support (ap_num_buckets = 1). With so_reuseport, only 
 (ap_num_buckets - 1) listeners are duplicated (one duplication fewer than 
 the current trunk implementation).

 9. Inside each mpm, move child_bucket, child_pod and child_mutex 
 (worker/prefork only) to a struct. Also, add member bucket to the same 
 struct.

 Please review and let me know your feedback.

 Thanks,
 Yingqi

 -Original Message-
 From: Yann Ylavic [mailto:ylavic@gmail.com]
 Sent: Tuesday, October 07, 2014 5:26 PM
 To: httpd
 Subject: Re: Listeners buckets and duplication w/ and w/o 
 SO_REUSEPORT on trunk

 On Wed, Oct 8, 2014 at 2:03 AM, Yann Ylavic ylavic

RE: Listeners buckets and duplication w/ and w/o SO_REUSEPORT on trunk

2014-10-30 Thread Lu, Yingqi
Hi Yann,

Thank you very much for your help! 

As this is getting better, I am wondering whether you have a plan to put this 
SO_REUSEPORT patch into the stable version. If yes, do you have a rough 
timeline?

The performance gain from the patch is great; I just want more people to be 
able to take advantage of it.

Thanks,
Lucy

-Original Message-
From: Yann Ylavic [mailto:ylavic@gmail.com] 
Sent: Thursday, October 30, 2014 8:29 AM
To: httpd
Subject: Re: Listeners buckets and duplication w/ and w/o SO_REUSEPORT on trunk

Hi Yingqi,

committed in r1635521, with some changes (please see the commit log).
These are not big changes; your work on removing the global variables and 
restoring some previous behaviour is great, thanks for the good work.

Regards,
Yann.


On Wed, Oct 29, 2014 at 6:36 PM, Lu, Yingqi yingqi...@intel.com wrote:
 Thank you very much for your help!

 Thanks,
 Yingqi

 -Original Message-
 From: Yann Ylavic [mailto:ylavic@gmail.com]
 Sent: Wednesday, October 29, 2014 10:34 AM
 To: httpd
 Subject: Re: Listeners buckets and duplication w/ and w/o SO_REUSEPORT 
 on trunk

 Hi Yingqi,

 I'm working on it currently, will commit soon.

 Regards,
 Yann.

 On Wed, Oct 29, 2014 at 6:20 PM, Lu, Yingqi yingqi...@intel.com wrote:
 Hi All,

 I just want to check if there is any feedback/comments on this?

 For details, please refer to Yann Ylavic's notes and my responses below.

 Thanks,
 Yingqi

 -Original Message-
 From: Lu, Yingqi [mailto:yingqi...@intel.com]
 Sent: Friday, October 10, 2014 4:56 PM
 To: dev@httpd.apache.org
 Subject: RE: Listeners buckets and duplication w/ and w/o 
 SO_REUSEPORT on trunk


 -Original Message-
 From: Yann Ylavic [mailto:ylavic@gmail.com]
 Sent: Tuesday, October 07, 2014 5:26 PM
 To: httpd
 Subject: Re: Listeners buckets and duplication w/ and w/o 
 SO_REUSEPORT on trunk

 On Wed, Oct 8, 2014 at 2:03 AM, Yann Ylavic ylavic@gmail.com wrote:
 On Wed, Oct 8, 2014 at 1:50 AM, Lu, Yingqi yingqi...@intel.com wrote:
 3. Yes, I did use some extern variables. I can rename them to 
 better follow the variable naming convention. Should we make them 
 ap_-prefixed? Is there anything else I should consider when 
 I rename the variables?

 Maybe defining new functions with more arguments (to be used by the 
 existing ones with NULL or default values) is a better alternative.

 For example, ap_duplicate_listeners could be modified to provide mpm_listen 
 and even do the computation of num_buckets and provide it (this is not an 
 API change since it is trunk only for now).

 ap_close_listeners() could be then restored as before (work on ap_listeners 
 only

RE: Listeners buckets and duplication w/ and w/o SO_REUSEPORT on trunk

2014-10-29 Thread Lu, Yingqi
Hi All,

I just want to check if there is any feedback/comments on this? 

For details, please refer to Yann Ylavic's notes and my responses below.

Thanks,
Yingqi

-Original Message-
From: Lu, Yingqi [mailto:yingqi...@intel.com]
Sent: Friday, October 10, 2014 4:56 PM
To: dev@httpd.apache.org
Subject: RE: Listeners buckets and duplication w/ and w/o SO_REUSEPORT on trunk


-Original Message-
From: Yann Ylavic [mailto:ylavic@gmail.com]
Sent: Tuesday, October 07, 2014 5:26 PM
To: httpd
Subject: Re: Listeners buckets and duplication w/ and w/o SO_REUSEPORT on trunk

On Wed, Oct 8, 2014 at 2:03 AM, Yann Ylavic ylavic@gmail.com wrote:
 On Wed, Oct 8, 2014 at 1:50 AM, Lu, Yingqi yingqi...@intel.com wrote:
 3. Yes, I did use some extern variables. I can rename them to 
 better follow the variable naming convention. Should we make them 
 ap_-prefixed? Is there anything else I should consider when I 
 rename the variables?

 Maybe defining new functions with more arguments (to be used by the 
 existing ones with NULL or default values) is a better alternative.

For example, ap_duplicate_listeners could be modified to provide mpm_listen and 
even do the computation of num_buckets and provide it (this is not an API 
change since it is trunk only for now).

ap_close_listeners() could be then restored as before (work on ap_listeners 
only) and ap_close_duplicated_listeners(mpm_listen) be introduced and used in 
the MPMs instead.

Hence ap_listen_rec *mpm_listeners could be MPM local, which would then call 
ap_duplicate_listeners(..., mpm_listeners, num_buckets) and 
ap_close_duplicated_listeners(mpm_listeners)

That's just a quick thought...


 Please be aware that existing AP_DECLAREd functions API must not change 
 though.

 Regards,
 Yann.


 Thanks,
 Yingqi


 -Original Message-
 From: Yann Ylavic [mailto:ylavic@gmail.com]
 Sent: Tuesday, October 07, 2014 4:19 PM
 To: httpd
 Subject: Listeners buckets and duplication w/ and w/o SO_REUSEPORT on 
 trunk

 Hi,

 some notes about the current implementation of this (trunk only).

 First, whether or not SO_REUSEPORT is available, we do duplicate the 
 listeners.
 This, I think, is not the intention of Yingqi Lu's original proposal, and 
 probably my fault since I asked for the patch to be split in two for a 
 better understanding (in the end only the SO_REUSEPORT patch was committed).
 The fact is that without SO_REUSEPORT this serves no purpose, and we'd better 
 use one listener per bucket (as originally proposed), or a single bucket 
 with no duplication (as before) if the performance gains are not worth it.
 WDYT?

 Also, there is no opt-in/out for this functionality, nor a way to 
 configure the number of buckets

RE: Listeners buckets and duplication w/ and w/o SO_REUSEPORT on trunk

2014-10-29 Thread Lu, Yingqi
Thank you very much for your help!

Thanks,
Yingqi

-Original Message-
From: Yann Ylavic [mailto:ylavic@gmail.com] 
Sent: Wednesday, October 29, 2014 10:34 AM
To: httpd
Subject: Re: Listeners buckets and duplication w/ and w/o SO_REUSEPORT on trunk

Hi Yingqi,

I'm working on it currently, will commit soon.

Regards,
Yann.

On Wed, Oct 29, 2014 at 6:20 PM, Lu, Yingqi yingqi...@intel.com wrote:
 Hi All,

 I just want to check if there is any feedback/comments on this?

 For details, please refer to Yann Ylavic's notes and my responses below.

 Thanks,
 Yingqi

 -Original Message-
 From: Lu, Yingqi [mailto:yingqi...@intel.com]
 Sent: Friday, October 10, 2014 4:56 PM
 To: dev@httpd.apache.org
 Subject: RE: Listeners buckets and duplication w/ and w/o SO_REUSEPORT 
 on trunk


 -Original Message-
 From: Yann Ylavic [mailto:ylavic@gmail.com]
 Sent: Tuesday, October 07, 2014 5:26 PM
 To: httpd
 Subject: Re: Listeners buckets and duplication w/ and w/o SO_REUSEPORT 
 on trunk

 On Wed, Oct 8, 2014 at 2:03 AM, Yann Ylavic ylavic@gmail.com wrote:
 On Wed, Oct 8, 2014 at 1:50 AM, Lu, Yingqi yingqi...@intel.com wrote:
 3. Yes, I did use some extern variables. I can rename them to 
 better follow the variable naming convention. Should we make them 
 ap_-prefixed? Is there anything else I should consider when I 
 rename the variables?

 Maybe defining new functions with more arguments (to be used by the 
 existing ones with NULL or default values) is a better alternative.

 For example, ap_duplicate_listeners could be modified to provide mpm_listen 
 and even do the computation of num_buckets and provide it (this is not an API 
 change since it is trunk only for now).

 ap_close_listeners() could be then restored as before (work on ap_listeners 
 only) and ap_close_duplicated_listeners(mpm_listen) be introduced and used in 
 the MPMs instead.

 Hence ap_listen_rec *mpm_listeners could be MPM local, which would 
 then call ap_duplicate_listeners(..., mpm_listeners, num_buckets) 
 and ap_close_duplicated_listeners(mpm_listeners)

 That's just a quick thought...


 Please be aware that existing AP_DECLAREd functions API must not change 
 though.

 Regards,
 Yann.


 Thanks,
 Yingqi


 -Original Message-
 From: Yann Ylavic [mailto:ylavic@gmail.com]
 Sent: Tuesday, October 07, 2014 4:19 PM
 To: httpd
 Subject: Listeners buckets and duplication w/ and w/o SO_REUSEPORT 
 on trunk

 Hi,

 some notes about the current implementation of this (trunk only).

 First, whether or not SO_REUSEPORT is available, we do duplicate the 
 listeners.
 This, I think, is not the intention of Yingqi Lu's original proposal, and 
 probably my fault since

RE: Listeners buckets and duplication w/ and w/o SO_REUSEPORT on trunk

2014-10-18 Thread Lu, Yingqi
Hi All,

I just want to check if there is any feedback on this? Generated based on trunk 
version, this is to remove some code duplications/global variables. This also 
removes listener duplication when SO_REUSEPORT is not being used. 

For details, please refer to Yann Ylavic's notes and my responses below.

I also attached the code changes here again in case you missed it in the 
original email I sent last Friday.

Thanks very much,
Yingqi

-Original Message-
From: Lu, Yingqi [mailto:yingqi...@intel.com]
Sent: Friday, October 10, 2014 4:56 PM
To: dev@httpd.apache.org
Subject: RE: Listeners buckets and duplication w/ and w/o SO_REUSEPORT on trunk


-Original Message-
From: Yann Ylavic [mailto:ylavic@gmail.com]
Sent: Tuesday, October 07, 2014 5:26 PM
To: httpd
Subject: Re: Listeners buckets and duplication w/ and w/o SO_REUSEPORT on trunk

On Wed, Oct 8, 2014 at 2:03 AM, Yann Ylavic ylavic@gmail.com wrote:
 On Wed, Oct 8, 2014 at 1:50 AM, Lu, Yingqi yingqi...@intel.com wrote:
 3. Yes, I did use some extern variables. I can rename them to 
 better follow the variable naming convention. Should we make them 
 ap_-prefixed? Is there anything else I should consider when I 
 rename the variables?

 Maybe defining new functions with more arguments (to be used by the 
 existing ones with NULL or default values) is a better alternative.

For example, ap_duplicate_listeners could be modified to provide mpm_listen and 
even do the computation of num_buckets and provide it (this is not an API 
change since it is trunk only for now).

ap_close_listeners() could be then restored as before (work on ap_listeners 
only) and ap_close_duplicated_listeners(mpm_listen) be introduced and used in 
the MPMs instead.

Hence ap_listen_rec *mpm_listeners could be MPM local, which would then call 
ap_duplicate_listeners(..., mpm_listeners, num_buckets) and 
ap_close_duplicated_listeners(mpm_listeners)

That's just a quick thought...


 Please be aware that existing AP_DECLAREd functions API must not change 
 though.

 Regards,
 Yann.


 Thanks,
 Yingqi


 -Original Message-
 From: Yann Ylavic [mailto:ylavic@gmail.com]
 Sent: Tuesday, October 07, 2014 4:19 PM
 To: httpd
 Subject: Listeners buckets and duplication w/ and w/o SO_REUSEPORT on 
 trunk

 Hi,

 some notes about the current implementation of this (trunk only).

 First, whether or not SO_REUSEPORT is available, we do duplicate the 
 listeners.
 This, I think, is not the intention of Yingqi Lu's original proposal, and 
 probably my fault since I asked for the patch to be split in two for a 
 better understanding (in the end only the SO_REUSEPORT patch was committed).
 The fact is that without SO_REUSEPORT, this serves

RE: Listeners buckets and duplication w/ and w/o SO_REUSEPORT on trunk

2014-10-10 Thread Lu, Yingqi
Dear All,

The attached patch is generated against current trunk. It covers the 
prefork/worker/event/eventopt MPMs and is meant to address the following issues 
with the SO_REUSEPORT support in the current trunk version:

1. As in the current trunk implementation, when active_CPU_num < 8 or 
when so_reuseport is not supported by the kernel, ap_num_buckets is set to 1. 
In any case, there is 1 dedicated listener per bucket.

2. Remove global variables (mpm_listen, enable_default_listeners and 
num_buckets). mpm_listen is changed to MPM local. enable_default_listeners is 
completely removed. num_buckets is changed to MPM local (ap_num_buckets). I 
renamed have_so_reuseport to ap_have_so_reuseport. The reason for keeping that 
global is that it is used by ap_log_common(); based on the 
feedback here, I think it would not be a good idea to change that function's 
interface.

3. Change ap_duplicated_listener to have more parameters. This function is 
called from MPM-local code (prefork.c/worker.c/event.c/eventopt.c). In this 
function, the prefork_listener (or worker_listener/event_listener/etc.) array 
is initialized and assigned. ap_num_buckets is also calculated inside this 
function. In addition, this version solves the issue with the one_process case 
(the current trunk version has an issue when one_process is enabled).

4. Change ap_close_listener() back to previous (2.4.X version). 

5. Change dummy_connection back to previous (2.4.X version).

6. Add ap_close_duplicated_listeners(). This is called from mpms when stopping 
httpd.

7. Add ap_close_child_listener(). When listener_thread (child process in 
prefork) exit, only the dedicated listener needs to be closed (the rest are 
already being closed in child_main when the child process starts).

8. Remove listener duplication when ap_num_buckets = 1 or without 
SO_REUSEPORT support (ap_num_buckets = 1). With so_reuseport, only 
(ap_num_buckets - 1) listeners are duplicated (one duplication fewer than the 
current trunk implementation).

9. Inside each mpm, move child_bucket, child_pod and child_mutex 
(worker/prefork only) to a struct. Also, add member bucket to the same struct. 

Please review and let me know your feedback. 

Thanks,
Yingqi

-Original Message-
From: Yann Ylavic [mailto:ylavic@gmail.com] 
Sent: Tuesday, October 07, 2014 5:26 PM
To: httpd
Subject: Re: Listeners buckets and duplication w/ and w/o SO_REUSEPORT on trunk

On Wed, Oct 8, 2014 at 2:03 AM, Yann Ylavic ylavic@gmail.com wrote:
 On Wed, Oct 8, 2014 at 1:50 AM, Lu, Yingqi yingqi...@intel.com wrote:
 3. Yes, I did use some extern variables. I can rename them to 
 better follow the variable naming convention. Should we make them 
 ap_-prefixed? Is there anything else I should consider when I 
 rename the variables?

 Maybe defining new functions with more arguments (to be used by the 
 existing ones with NULL or default values) is a better alternative.

For example, ap_duplicate_listeners could be modified to provide mpm_listen and 
even do the computation of num_buckets and provide it (this is not an API 
change since it is trunk only for now).

ap_close_listeners() could be then restored as before (work on ap_listeners 
only) and ap_close_duplicated_listeners(mpm_listen) be introduced and used in 
the MPMs instead.

Hence ap_listen_rec *mpm_listeners could be MPM local, which would then call 
ap_duplicate_listeners(..., mpm_listeners, num_buckets) and 
ap_close_duplicated_listeners(mpm_listeners)

That's just a quick thought...


 Please be aware that existing AP_DECLAREd functions API must not change 
 though.

 Regards,
 Yann.


 Thanks,
 Yingqi


 -Original Message-
 From: Yann Ylavic [mailto:ylavic@gmail.com]
 Sent: Tuesday, October 07, 2014 4:19 PM
 To: httpd
 Subject: Listeners buckets and duplication w/ and w/o SO_REUSEPORT on 
 trunk

 Hi,

 some notes about the current implementation of this (trunk only).

 First, whether or not SO_REUSEPORT is available, we do duplicate the 
 listeners.
 This, I think, is not the intention of Yingqi Lu's original proposal, and 
 probably my fault since I asked for the patch to be split in two for a 
 better understanding (in the end only the SO_REUSEPORT patch was committed).
 The fact is that without SO_REUSEPORT this serves no purpose, and we'd better 
 use one listener per bucket (as originally proposed), or a single bucket 
 with no duplication (as before) if the performance gains are not worth it.
 WDYT?

 Also, there is no opt-in/out for this functionality, nor a way to 
 configure the number-of-buckets ratio with respect to the number of CPUs (cores).
 For example, SO_REUSEPORT also exists on *BSD, but I doubt it would work as 
 expected since AFAICT it is not the same thing as on Linux (DragonFly's 
 implementation seems to be close to Linux's, though).
 Yet, the dynamic setsockopt() check will also succeed on BSD, and the 
 functionality will be enabled.
 So opt in (my preference) or out?

 Finally

RE: svn commit: r1599531 - in /httpd/httpd/trunk: CHANGES include/ap_listen.h server/listen.c server/mpm/event/event.c server/mpm/prefork/prefork.c server/mpm/worker/worker.c server/mpm_unix.c

2014-10-07 Thread Lu, Yingqi
In the last version, I forgot to change _SC_NPROCESSORS_ONLN to 
_SC_NPROCESSORS_CONF for worker mpm. 

Please use this version to review. Sorry for the duplication.

Thanks very much for your help!
Yingqi

-Original Message-
From: Lu, Yingqi [mailto:yingqi...@intel.com] 
Sent: Monday, October 06, 2014 6:29 PM
To: dev@httpd.apache.org
Subject: RE: svn commit: r1599531 - in /httpd/httpd/trunk: CHANGES 
include/ap_listen.h server/listen.c server/mpm/event/event.c 
server/mpm/prefork/prefork.c server/mpm/worker/worker.c server/mpm_unix.c

Hi Yann,

Thank you very much for your help. Here is another update on the fix. In this 
update, I changed:

1. Address the restart/graceful restart issues with ap_daemons_limit changes 
(malloc/realloc/free approach). Thank you for your help!

2. I still think we should use _SC_NPROCESSORS_CONF instead of 
_SC_NPROCESSORS_ONLN to calculate num_buckets. The reason is that the number of 
duplicated listeners is calculated from num_buckets (basically, one 
dedicated listener per bucket). Therefore, to keep the number of listeners 
constant across restarts, I think we want to use 
_SC_NPROCESSORS_CONF.

3. In addition to the restart issue, I guard server_limit and 
ap_daemons_limit to be >= num_buckets.

I briefly ran valgrind with --tool=memcheck on httpd 
start/stop/restart/graceful restart; the summary reports 0 errors. I am not 
sure whether this is sufficient.

Please let me know if this version works.

Thanks very much,
Yingqi

-Original Message-
From: Lu, Yingqi [mailto:yingqi...@intel.com] 
Sent: Monday, October 06, 2014 7:46 AM
To: dev@httpd.apache.org
Subject: RE: svn commit: r1599531 - in /httpd/httpd/trunk: CHANGES 
include/ap_listen.h server/listen.c server/mpm/event/event.c 
server/mpm/prefork/prefork.c server/mpm/worker/worker.c server/mpm_unix.c

Hi Yann,

Thanks very much for your feedback.

I will send another update soon to address the restart issues.

Also, inactive CPUs will not be scheduled for httpd. I will change 
_SC_NPROCESSORS_CONF back to _SC_NPROCESSORS_ONLN.

Thanks,
Lucy

-Original Message-
From: Yann Ylavic [mailto:ylavic@gmail.com] 
Sent: Monday, October 06, 2014 1:12 AM
To: httpd
Subject: Re: svn commit: r1599531 - in /httpd/httpd/trunk: CHANGES 
include/ap_listen.h server/listen.c server/mpm/event/event.c 
server/mpm/prefork/prefork.c server/mpm/worker/worker.c server/mpm_unix.c

Hi Yingqi,

On Sun, Oct 5, 2014 at 11:36 PM, Lu, Yingqi yingqi...@intel.com wrote:
 To address your first comment, the issue with pconf pool is that bucket array 
 value needs to be retained via restart and graceful restart. Based on your 
 comments, I now put bucket array into the retained_data struct for all the 
 mpms. Hope this works.

The problem IMHO is that ap_daemons_limit (used to compute the size of the 
bucket array) may not be constant across restarts (depending on the new 
configuration).
Maybe you could use a retained bucket array to copy the current values before 
graceful restart and restore them after in the pconf allocated array (the one 
really used by the parent process and the new generation of children).
To address the memory leak, since the size may change, I think the retained 
array would have to be malloc()ed instead, and possibly realloc()ed on restarts 
(cleared when non graceful) if there is not enough space to handle the new 
generation (with a process pool cleanup registered the first time to free() the 
whole thing on stop, and make valgrind happy).

Also, since the number of listeners (children) needs to remain constant (IIRC, 
or connections may be reset), maybe you'll have to make sure on graceful 
restart that the previous generation of children has really stopped listening 
before starting new children. Maybe this is always the case already, but the 
race condition seems more problematic when SO_REUSEPORT is used.

 Regarding to your second question, based on previous patch code, num_buckets 
 is calculated based on the active CPU threads count. I am thinking maybe it 
 is better to do the calculation based on total number of CPU threads instead. 
 This keeps num_buckets constant as long as the system is 
 running. That is the reason I changed the CPU thread count check from 
 _SC_NPROCESSORS_ONLN to _SC_NPROCESSORS_CONF.

I must have missed the point here, will inactive CPUs be scheduled for httpd?
Otherwise, I don't see why they should be taken into account for the number of 
buckets...

Regards,
Yann.


httpd_trunk_SO_REUSEPORT_fix.patch
Description: httpd_trunk_SO_REUSEPORT_fix.patch


RE: svn commit: r1599531 - in /httpd/httpd/trunk: CHANGES include/ap_listen.h server/listen.c server/mpm/event/event.c server/mpm/prefork/prefork.c server/mpm/worker/worker.c server/mpm_unix.c

2014-10-07 Thread Lu, Yingqi
Thanks very much for your quick help!

I will test it today and let you know.

Thanks,
Yingqi

-Original Message-
From: Yann Ylavic [mailto:ylavic@gmail.com] 
Sent: Tuesday, October 07, 2014 8:21 AM
To: httpd
Subject: Re: svn commit: r1599531 - in /httpd/httpd/trunk: CHANGES 
include/ap_listen.h server/listen.c server/mpm/event/event.c 
server/mpm/prefork/prefork.c server/mpm/worker/worker.c server/mpm_unix.c

Hi Yingqi,

thanks for your help.

I finally commited a version (http://svn.apache.org/r1629909) where each 
child's bucket number is stored in the process scoreboard, so there is no need 
for each mpm to handle its own array.

Can you please check that it works for you?

Regards,
Yann.

On Tue, Oct 7, 2014 at 11:01 AM, Lu, Yingqi yingqi...@intel.com wrote:
 In the last version, I forgot to change _SC_NPROCESSORS_ONLN to 
 _SC_NPROCESSORS_CONF for worker mpm.

 Please use this version to review. Sorry for the duplication.

 Thanks very much for your help!
 Yingqi

 -Original Message-
 From: Lu, Yingqi [mailto:yingqi...@intel.com]
 Sent: Monday, October 06, 2014 6:29 PM
 To: dev@httpd.apache.org
 Subject: RE: svn commit: r1599531 - in /httpd/httpd/trunk: CHANGES 
 include/ap_listen.h server/listen.c server/mpm/event/event.c 
 server/mpm/prefork/prefork.c server/mpm/worker/worker.c 
 server/mpm_unix.c

 Hi Yann,

 Thank you very much for your help. Here is another update on the fix. In this 
 update, I changed:

 1. Address the restart/graceful restart issues with ap_daemons_limit changes 
 (malloc/realloc/free approach). Thank you for your help!

 2. I still think we should use _SC_NPROCESSORS_CONF instead of 
 _SC_NPROCESSORS_ONLN to calculate num_buckets. The reason is: the number of 
 duplicated listeners is calculated based on num_buckets, basically one 
 dedicated listener per bucket. Therefore, to keep the number of listeners 
 constant across restarts, I think we may want to use _SC_NPROCESSORS_CONF.

 3. In addition to the restart issue, I guard the server_limit and 
 ap_daemons_limit to be >= num_buckets.

 I briefly ran valgrind with --tool=memcheck on httpd 
 start/stop/restart/graceful restart. The summary says 0 errors. I am not sure 
 if this is sufficient.

 Please let me know if this version works.

 Thanks very much,
 Yingqi

 -Original Message-
 From: Lu, Yingqi [mailto:yingqi...@intel.com]
 Sent: Monday, October 06, 2014 7:46 AM
 To: dev@httpd.apache.org
 Subject: RE: svn commit: r1599531 - in /httpd/httpd/trunk: CHANGES 
 include/ap_listen.h server/listen.c server/mpm/event/event.c 
 server/mpm/prefork/prefork.c server/mpm/worker/worker.c 
 server/mpm_unix.c

 Hi Yann,

 Thanks very much for your feedback.

 I will send another update soon to address the restart issues.

 Also, inactive CPUs will not be scheduled for httpd. I will change back 
 _SC_NPROCESSORS_CONF to _SC_NPROCESSORS_ONLN.

 Thanks,
 Lucy

 -Original Message-
 From: Yann Ylavic [mailto:ylavic@gmail.com]
 Sent: Monday, October 06, 2014 1:12 AM
 To: httpd
 Subject: Re: svn commit: r1599531 - in /httpd/httpd/trunk: CHANGES 
 include/ap_listen.h server/listen.c server/mpm/event/event.c 
 server/mpm/prefork/prefork.c server/mpm/worker/worker.c 
 server/mpm_unix.c

 Hi Yingqi,

 On Sun, Oct 5, 2014 at 11:36 PM, Lu, Yingqi yingqi...@intel.com wrote:
 To address your first comment, the issue with the pconf pool is that the 
 bucket array value needs to be retained across restarts and graceful restarts. 
 Based on your comments, I now put the bucket array into the retained_data 
 struct for all the MPMs. Hope this works.

 The problem IMHO is that ap_daemons_limit (used to compute the size of the 
 bucket array) may not be constant across restarts (depending on the new 
 configuration).
 Maybe you could use a retained bucket array to copy the current values before 
 graceful restart and restore them after in the pconf allocated array (the one 
 really used by the parent process and the new generation of children).
 To address the memory leak, since the size may change, I think the retained 
 array would have to be malloc()ed instead, and possibly realloc()ed on 
 restarts (cleared when non graceful) if there is not enough space to handle 
 the new generation (with a process pool cleanup registered the first time to 
 free() the whole thing on stop, and make valgrind happy).

 Also, since the number of listeners (children) needs to remain constant 
 (IIRC, or connections may be reset), maybe you'll have to make sure on 
 graceful restart that the previous generation of children has really stopped 
 listening before starting new children. Maybe this is always the case 
 already, but the race condition seems more problematic when SO_REUSEPORT is 
 used.

 Regarding your second question, based on the previous patch code, num_buckets 
 is calculated from the active CPU thread count. I am thinking maybe it 
 is better to do the calculation based on the total number of CPU threads 
 instead

RE: svn commit: r1629909 - in /httpd/httpd/trunk: include/ap_mmn.h include/scoreboard.h server/mpm/event/event.c server/mpm/eventopt/eventopt.c server/mpm/prefork/prefork.c server/mpm/worker/worker.c

2014-10-07 Thread Lu, Yingqi
Hi Yann,

I am still testing the fix. It is half way through. I already modified 
min_spare_threads to min_spare_threads/num_buckets for both worker and event 
MPM in my test bed, so I am testing the most recent version anyway (thought I 
would mention this together with the testing results).

Regarding your following comments,

Wouldn't it be better, though more thread/process consuming, to always 
multiply the values with the number of buckets? This concerns 
ap_daemons_to_start, ap_daemons_limit, [min|max]_spare_threads (for unixes 
threaded MPMs), ap_daemons_[min|max]_free (for prefork)

I think it is better to keep the current way (not multiplying). If an 
administrator already uses a huge number for these settings and we multiply on 
top of that, it would be bad. That is my personal thought.

Thanks,
Yingqi

-Original Message-
From: Yann Ylavic [mailto:ylavic@gmail.com] 
Sent: Tuesday, October 07, 2014 3:02 PM
To: httpd
Subject: Re: svn commit: r1629909 - in /httpd/httpd/trunk: include/ap_mmn.h 
include/scoreboard.h server/mpm/event/event.c server/mpm/eventopt/eventopt.c 
server/mpm/prefork/prefork.c server/mpm/worker/worker.c

On Tue, Oct 7, 2014 at 8:59 PM, Ruediger Pluem rpl...@apache.org wrote:

 On 10/07/2014 05:16 PM, yla...@apache.org wrote:
 URL: 
 http://svn.apache.org/viewvc/httpd/httpd/trunk/server/mpm/event/event.c?rev=1629909&r1=1629908&r2=1629909&view=diff
 ==============================================================================
 --- httpd/httpd/trunk/server/mpm/event/event.c (original)
 +++ httpd/httpd/trunk/server/mpm/event/event.c Tue Oct  7 15:16:02 2014
 if (all_dead_threads) {
 @@ -2801,12 +2800,12 @@ static void perform_idle_server_maintena

  retained->max_daemons_limit = last_non_dead + 1;

 -if (idle_thread_count > max_spare_threads/num_buckets) {
 +if (idle_thread_count > max_spare_threads / num_buckets) {
  /* Kill off one child */
  ap_mpm_podx_signal(pod[child_bucket], AP_MPM_PODX_GRACEFUL);
  retained->idle_spawn_rate[child_bucket] = 1;
  }
 -else if (idle_thread_count < min_spare_threads/num_buckets) {
 +else if (idle_thread_count < min_spare_threads) {

 Why this?

My bad, thanks, fixed in r1629990.

However I was, and still am, confused about the way for the administrator to 
configure these directives.
Should (s)he take into account the number of active CPU cores or should the MPM 
do that?
What about existing configurations?

Currently (r1629990, as per the original commit and Yingqi's proposed fixes to 
avoid division by 0), this is the administrator's job, but we silently raise 
the values if they don't fit well; this is not consistent IMHO.

Wouldn't it be better, though more thread/process consuming, to always multiply 
the values with the number of buckets? This concerns ap_daemons_to_start, 
ap_daemons_limit, [min|max]_spare_threads (for unixes threaded MPMs), 
ap_daemons_[min|max]_free (for prefork).


RE: Listeners buckets and duplication w/ and w/o SO_REUSEPORT on trunk

2014-10-07 Thread Lu, Yingqi
Here is what I think. Currently (trunk version as well as my original patch),

1. Without SO_REUSEPORT, or when the available CPU number is < 8, num_bucket = 1 
anyway. It duplicates 1 listener and uses that for this single bucket. If folks 
think we should not duplicate in this case, I can modify the code to do that.

2. num_buckets is calculated as available_CPU_num / 8. When the available CPU 
number is less than 8, num_buckets = 1. It checks if SO_REUSEPORT is enabled in 
the kernel. If yes, it will enable it. I guess that is opt-in? Maybe I 
misunderstood you here, Yann. Please correct me if I do.

3. Yes, I did use some extern variables. I can change their names to better 
follow the variable naming convention. Should they be ap_ prefixed? Is there 
anything else I should consider when I rename the variables?

Thanks,
Yingqi


-Original Message-
From: Yann Ylavic [mailto:ylavic@gmail.com] 
Sent: Tuesday, October 07, 2014 4:19 PM
To: httpd
Subject: Listeners buckets and duplication w/ and w/o SO_REUSEPORT on trunk

Hi,

some notes about the current implementation of this (trunk only).

First, whether or not SO_REUSEPORT is available, we do duplicate the listeners.
This, I think, is not the intention of Yingqi Lu's original proposal, and 
probably my fault since I asked for the patch to be split in two for a 
better understanding (finally only the SO_REUSEPORT patch has been committed).
The fact is that without SO_REUSEPORT this serves nothing, and we'd better use 
one listener per bucket (as originally proposed), or a single bucket with no 
duplication (as before) if the performance gains are not worth it.
WDYT?

Also, there is no opt-in/out for this functionality, nor a way to configure 
the ratio of the number of buckets to the number of CPUs (cores).
For example, SO_REUSEPORT also exists on *BSD, but I doubt it would work as 
expected since AFAICT it is not the same thing as on Linux (DragonFly's 
implementation seems to be close to Linux's one though).
Yet, the dynamic setsockopt() check will also succeed on BSD, and the 
functionality will be enabled.
So opt in (my preference) or out?

Finally, some global variables (not even ap_ prefixed) are used to communicate 
between listen.c and the MPM. This helps avoid breaking the API, but this is 
trunk...
I guess we can fix it, this is just a (self or anyone's) reminder :)

Regards,
Yann.


RE: svn commit: r1629909 - in /httpd/httpd/trunk: include/ap_mmn.h include/scoreboard.h server/mpm/event/event.c server/mpm/eventopt/eventopt.c server/mpm/prefork/prefork.c server/mpm/worker/worker.c

2014-10-07 Thread Lu, Yingqi
I tested it and it works.

Thanks,
Yingqi

-Original Message-
From: Lu, Yingqi [mailto:yingqi...@intel.com] 
Sent: Tuesday, October 07, 2014 3:13 PM
To: dev@httpd.apache.org
Subject: RE: svn commit: r1629909 - in /httpd/httpd/trunk: include/ap_mmn.h 
include/scoreboard.h server/mpm/event/event.c server/mpm/eventopt/eventopt.c 
server/mpm/prefork/prefork.c server/mpm/worker/worker.c

Hi Yann,

I am still testing the fix. It is half way through. I already modified 
min_spare_threads to min_spare_threads/num_buckets for both worker and event 
MPM in my test bed, so I am testing the most recent version anyway (thought I 
would mention this together with the testing results).

Regarding your following comments,

Wouldn't it be better, though more thread/process consuming, to always 
multiply the values with the number of buckets? This concerns 
ap_daemons_to_start, ap_daemons_limit, [min|max]_spare_threads (for unixes 
threaded MPMs), ap_daemons_[min|max]_free (for prefork)

I think it is better to keep the current way (not multiplying). If an 
administrator already uses a huge number for these settings and we multiply on 
top of that, it would be bad. That is my personal thought.

Thanks,
Yingqi

-Original Message-
From: Yann Ylavic [mailto:ylavic@gmail.com] 
Sent: Tuesday, October 07, 2014 3:02 PM
To: httpd
Subject: Re: svn commit: r1629909 - in /httpd/httpd/trunk: include/ap_mmn.h 
include/scoreboard.h server/mpm/event/event.c server/mpm/eventopt/eventopt.c 
server/mpm/prefork/prefork.c server/mpm/worker/worker.c

On Tue, Oct 7, 2014 at 8:59 PM, Ruediger Pluem rpl...@apache.org wrote:

 On 10/07/2014 05:16 PM, yla...@apache.org wrote:
 URL: 
 http://svn.apache.org/viewvc/httpd/httpd/trunk/server/mpm/event/event.c?rev=1629909&r1=1629908&r2=1629909&view=diff
 ==============================================================================
 --- httpd/httpd/trunk/server/mpm/event/event.c (original)
 +++ httpd/httpd/trunk/server/mpm/event/event.c Tue Oct  7 15:16:02 2014
 if (all_dead_threads) {
 @@ -2801,12 +2800,12 @@ static void perform_idle_server_maintena

  retained->max_daemons_limit = last_non_dead + 1;

 -if (idle_thread_count > max_spare_threads/num_buckets) {
 +if (idle_thread_count > max_spare_threads / num_buckets) {
  /* Kill off one child */
  ap_mpm_podx_signal(pod[child_bucket], AP_MPM_PODX_GRACEFUL);
  retained->idle_spawn_rate[child_bucket] = 1;
  }
 -else if (idle_thread_count < min_spare_threads/num_buckets) {
 +else if (idle_thread_count < min_spare_threads) {

 Why this?

My bad, thanks, fixed in r1629990.

However I was, and still am, confused about the way for the administrator to 
configure these directives.
Should (s)he take into account the number of active CPU cores or should the MPM 
do that?
What about existing configurations?

Currently (r1629990, as per the original commit and Yingqi's proposed fixes to 
avoid division by 0), this is the administrator's job, but we silently raise 
the values if they don't fit well; this is not consistent IMHO.

Wouldn't it be better, though more thread/process consuming, to always multiply 
the values with the number of buckets? This concerns ap_daemons_to_start, 
ap_daemons_limit, [min|max]_spare_threads (for unixes threaded MPMs), 
ap_daemons_[min|max]_free (for prefork).


RE: Listeners buckets and duplication w/ and w/o SO_REUSEPORT on trunk

2014-10-07 Thread Lu, Yingqi
Regarding your comment #2, we tested on a 16-thread system and it did not 
bring any performance value. That is the reason I calculate it this way.

Thanks for the comments below. I will try to send out a fix soon.

Thanks,
Yingqi

-Original Message-
From: Yann Ylavic [mailto:ylavic@gmail.com] 
Sent: Tuesday, October 07, 2014 5:04 PM
To: httpd
Subject: Re: Listeners buckets and duplication w/ and w/o SO_REUSEPORT on trunk

On Wed, Oct 8, 2014 at 1:50 AM, Lu, Yingqi yingqi...@intel.com wrote:
 Here is what I think. Currently (trunk version as well as my original 
 patch),

 1. Without SO_REUSEPORT, or when the available CPU number is < 8, num_bucket = 1 
 anyway. It duplicates 1 listener and uses that for this single bucket. If 
 folks think we should not duplicate in this case, I can modify the code to do 
 that.

Yes I think the duplication should be avoided.

But is one listener per bucket an interesting alternative to num_buckets = 1?


 2. num_buckets is calculated as available_CPU_num / 8. When the available CPU 
 number is less than 8, num_buckets = 1. It checks if SO_REUSEPORT is enabled 
 in the kernel. If yes, it will enable it. I guess that is opt-in? Maybe I 
 misunderstood you here, Yann. Please correct me if I do.

Why a fixed 8? Wouldn't someone with fewer than 16 cores want the feature?


 3. Yes, I did use some extern variables. I can change their names to better 
 follow the variable naming convention. Should they be ap_ prefixed? Is there 
 anything else I should consider when I rename the variables?

Maybe defining new functions with more arguments (to be used by the existing 
ones with NULL or default values) is a better alternative.

Please be aware that existing AP_DECLAREd functions API must not change though.

Regards,
Yann.


 Thanks,
 Yingqi


 -Original Message-
 From: Yann Ylavic [mailto:ylavic@gmail.com]
 Sent: Tuesday, October 07, 2014 4:19 PM
 To: httpd
 Subject: Listeners buckets and duplication w/ and w/o SO_REUSEPORT on 
 trunk

 Hi,

 some notes about the current implementation of this (trunk only).

 First, whether or not SO_REUSEPORT is available, we do duplicate the 
 listeners.
 This, I think, is not the intention of Yingqi Lu's original proposal, and 
 probably my fault since I asked for the patch to be split in two for a 
 better understanding (finally only the SO_REUSEPORT patch has been committed).
 The fact is that without SO_REUSEPORT this serves nothing, and we'd better 
 use one listener per bucket (as originally proposed), or a single bucket with 
 no duplication (as before) if the performance gains are not worth it.
 WDYT?

 Also, there is no opt-in/out for this functionality, nor a way to configure 
 the ratio of the number of buckets to the number of CPUs (cores).
 For example, SO_REUSEPORT also exists on *BSD, but I doubt it would work as 
 expected since AFAICT it is not the same thing as on Linux (DragonFly's 
 implementation seems to be close to Linux's one though).
 Yet, the dynamic setsockopt() check will also succeed on BSD, and the 
 functionality will be enabled.
 So opt in (my preference) or out?

 Finally, some global variables (not even ap_ prefixed) are used to 
 communicate between listen.c and the MPM. This helps avoid breaking the API, 
 but this is trunk...
 I guess we can fix it, this is just a (self or anyone's) reminder :)

 Regards,
 Yann.


RE: svn commit: r1599531 - in /httpd/httpd/trunk: CHANGES include/ap_listen.h server/listen.c server/mpm/event/event.c server/mpm/prefork/prefork.c server/mpm/worker/worker.c server/mpm_unix.c

2014-10-06 Thread Lu, Yingqi
Hi Yann,

Thanks very much for your feedback.

I will send another update soon to address the restart issues.

Also, inactive CPUs will not be scheduled for httpd. I will change back 
_SC_NPROCESSORS_CONF to _SC_NPROCESSORS_ONLN.

Thanks,
Lucy

-Original Message-
From: Yann Ylavic [mailto:ylavic@gmail.com] 
Sent: Monday, October 06, 2014 1:12 AM
To: httpd
Subject: Re: svn commit: r1599531 - in /httpd/httpd/trunk: CHANGES 
include/ap_listen.h server/listen.c server/mpm/event/event.c 
server/mpm/prefork/prefork.c server/mpm/worker/worker.c server/mpm_unix.c

Hi Yingqi,

On Sun, Oct 5, 2014 at 11:36 PM, Lu, Yingqi yingqi...@intel.com wrote:
 To address your first comment, the issue with the pconf pool is that the 
 bucket array value needs to be retained across restarts and graceful restarts. 
 Based on your comments, I now put the bucket array into the retained_data 
 struct for all the MPMs. Hope this works.

The problem IMHO is that ap_daemons_limit (used to compute the size of the 
bucket array) may not be constant across restarts (depending on the new 
configuration).
Maybe you could use a retained bucket array to copy the current values before 
graceful restart and restore them after in the pconf allocated array (the one 
really used by the parent process and the new generation of children).
To address the memory leak, since the size may change, I think the retained 
array would have to be malloc()ed instead, and possibly realloc()ed on restarts 
(cleared when non graceful) if there is not enough space to handle the new 
generation (with a process pool cleanup registered the first time to free() the 
whole thing on stop, and make valgrind happy).

Also, since the number of listeners (children) needs to remain constant (IIRC, 
or connections may be reset), maybe you'll have to make sure on graceful 
restart that the previous generation of children has really stopped listening 
before starting new children. Maybe this is always the case already, but the 
race condition seems more problematic when SO_REUSEPORT is used.

 Regarding your second question, based on the previous patch code, num_buckets 
 is calculated from the active CPU thread count. I am thinking maybe it 
 is better to do the calculation based on the total number of CPU threads instead. 
 This keeps num_buckets constant as long as the system is 
 running. That is the reason I now change the CPU thread count check from 
 _SC_NPROCESSORS_ONLN to _SC_NPROCESSORS_CONF.

I must have missed the point here, will inactive CPUs be scheduled for httpd?
Otherwise, I don't see why they should be taken into account for the number of 
buckets...

Regards,
Yann.


RE: svn commit: r1599531 - in /httpd/httpd/trunk: CHANGES include/ap_listen.h server/listen.c server/mpm/event/event.c server/mpm/prefork/prefork.c server/mpm/worker/worker.c server/mpm_unix.c

2014-10-06 Thread Lu, Yingqi
Hi Yann,

I am coding the fix now and I have a question regarding your comments. I would 
like some clarification to make sure I totally understand.

You mentioned: "with a process pool cleanup registered the first time to free() 
the whole thing on stop".

My question is: if we do malloc() and realloc(), I think it would just be in 
memory. I can free() the whole thing on httpd stop. Do you mean that, or do you 
actually mean we need to create a new memory pool, allocate/realloc the memory 
there, and register the cleanup on stop?

Thanks,
Lucy

-Original Message-
From: Lu, Yingqi [mailto:yingqi...@intel.com] 
Sent: Monday, October 06, 2014 7:46 AM
To: dev@httpd.apache.org
Subject: RE: svn commit: r1599531 - in /httpd/httpd/trunk: CHANGES 
include/ap_listen.h server/listen.c server/mpm/event/event.c 
server/mpm/prefork/prefork.c server/mpm/worker/worker.c server/mpm_unix.c

Hi Yann,

Thanks very much for your feedback.

I will send another update soon to address the restart issues.

Also, inactive CPUs will not be scheduled for httpd. I will change back 
_SC_NPROCESSORS_CONF to _SC_NPROCESSORS_ONLN.

Thanks,
Lucy

-Original Message-
From: Yann Ylavic [mailto:ylavic@gmail.com] 
Sent: Monday, October 06, 2014 1:12 AM
To: httpd
Subject: Re: svn commit: r1599531 - in /httpd/httpd/trunk: CHANGES 
include/ap_listen.h server/listen.c server/mpm/event/event.c 
server/mpm/prefork/prefork.c server/mpm/worker/worker.c server/mpm_unix.c

Hi Yingqi,

On Sun, Oct 5, 2014 at 11:36 PM, Lu, Yingqi yingqi...@intel.com wrote:
 To address your first comment, the issue with the pconf pool is that the 
 bucket array value needs to be retained across restarts and graceful restarts. 
 Based on your comments, I now put the bucket array into the retained_data 
 struct for all the MPMs. Hope this works.

The problem IMHO is that ap_daemons_limit (used to compute the size of the 
bucket array) may not be constant across restarts (depending on the new 
configuration).
Maybe you could use a retained bucket array to copy the current values before 
graceful restart and restore them after in the pconf allocated array (the one 
really used by the parent process and the new generation of children).
To address the memory leak, since the size may change, I think the retained 
array would have to be malloc()ed instead, and possibly realloc()ed on restarts 
(cleared when non graceful) if there is not enough space to handle the new 
generation (with a process pool cleanup registered the first time to free() the 
whole thing on stop, and make valgrind happy).

Also, since the number of listeners (children) needs to remain constant (IIRC, 
or connections may be reset), maybe you'll have to make sure on graceful 
restart that the previous generation of children has really stopped listening 
before starting new children. Maybe this is always the case already, but the 
race condition seems more problematic when SO_REUSEPORT is used.

 Regarding your second question, based on the previous patch code, num_buckets 
 is calculated from the active CPU thread count. I am thinking maybe it 
 is better to do the calculation based on the total number of CPU threads instead. 
 This keeps num_buckets constant as long as the system is 
 running. That is the reason I now change the CPU thread count check from 
 _SC_NPROCESSORS_ONLN to _SC_NPROCESSORS_CONF.

I must have missed the point here, will inactive CPUs be scheduled for httpd?
Otherwise, I don't see why they should be taken into account for the number of 
buckets...

Regards,
Yann.


RE: svn commit: r1599531 - in /httpd/httpd/trunk: CHANGES include/ap_listen.h server/listen.c server/mpm/event/event.c server/mpm/prefork/prefork.c server/mpm/worker/worker.c server/mpm_unix.c

2014-10-06 Thread Lu, Yingqi
Hi Yann,

Sorry that I just saw your messages; I was too focused on coding the fix :-) 
Almost done.

I will take a look at your suggested code and try to incorporate it into the 
fix.

Hopefully, I can send something out tonight.

Thanks,
Yingqi



-Original Message-
From: Yann Ylavic [mailto:ylavic@gmail.com] 
Sent: Monday, October 06, 2014 2:59 PM
To: httpd
Subject: Re: svn commit: r1599531 - in /httpd/httpd/trunk: CHANGES 
include/ap_listen.h server/listen.c server/mpm/event/event.c 
server/mpm/prefork/prefork.c server/mpm/worker/worker.c server/mpm_unix.c

Sorry this won't work.

On Mon, Oct 6, 2014 at 9:52 PM, Yann Ylavic ylavic@gmail.com wrote:
 Oups, I forgot the cleanup :p

 On Mon, Oct 6, 2014 at 9:45 PM, Yann Ylavic ylavic@gmail.com wrote:
 Index: server/mpm/prefork/prefork.c
 ===================================================================
 --- server/mpm/prefork/prefork.c    (revision 1629482)
 +++ server/mpm/prefork/prefork.c    (working copy)
 [...]
 @@ -1232,6 +1250,16 @@ static int prefork_run(apr_pool_t *_pconf, apr_poo
          return DONE;
      }

 +    if (!retained->bucket) {
 +        retained->daemons_limit = ap_daemons_limit;
 +        retained->bucket = ap_malloc(sizeof(int) * ap_daemons_limit);
 +        if (!retained->bucket) {
 +            ap_log_error(APLOG_MARK, APLOG_CRIT, 0, ap_server_conf, APLOGNO()
 +                         "could not allocate buckets");
 +        }

Here:
 +        apr_pool_cleanup_register(s->process->pool, retained->bucket,
 +                                  free_bucket, apr_pool_cleanup_null);

Here we have to use:
+        apr_pool_cleanup_register(s->process->pool, &retained->bucket,
+                                  free_bucket, apr_pool_cleanup_null);

 +    }
 +    memcpy(retained->bucket, bucket, sizeof(int) * ap_daemons_limit);
 +
      /* advance to the next generation */
      /* XXX: we really need to make sure this new generation number isn't in
       * use by any of the children.
 [END]

 With:

 static apr_status_t free_bucket(void *bucket) {
     free(bucket);

And here:
     free(*(int **)bucket);

     return APR_SUCCESS;
 }


RE: svn commit: r1599531 - in /httpd/httpd/trunk: CHANGES include/ap_listen.h server/listen.c server/mpm/event/event.c server/mpm/prefork/prefork.c server/mpm/worker/worker.c server/mpm_unix.c

2014-10-06 Thread Lu, Yingqi
Hi Yann,

Thank you very much for your help. Here is another update on the fix. In this 
update, I changed:

1. Address the restart/graceful restart issues with ap_daemons_limit changes 
(malloc/realloc/free approach). Thank you for your help!

2. I still think we should use _SC_NPROCESSORS_CONF instead of 
_SC_NPROCESSORS_ONLN to calculate num_buckets. The reason is: the number of 
duplicated listeners is calculated based on num_buckets, basically one 
dedicated listener per bucket. Therefore, to keep the number of listeners 
constant across restarts, I think we may want to use _SC_NPROCESSORS_CONF.

3. In addition to the restart issue, I guard the server_limit and 
ap_daemons_limit to be >= num_buckets.

I briefly ran valgrind with --tool=memcheck on httpd 
start/stop/restart/graceful restart. The summary says 0 errors. I am not sure 
if this is sufficient.

Please let me know if this version works.

Thanks very much,
Yingqi

-Original Message-
From: Lu, Yingqi [mailto:yingqi...@intel.com] 
Sent: Monday, October 06, 2014 7:46 AM
To: dev@httpd.apache.org
Subject: RE: svn commit: r1599531 - in /httpd/httpd/trunk: CHANGES 
include/ap_listen.h server/listen.c server/mpm/event/event.c 
server/mpm/prefork/prefork.c server/mpm/worker/worker.c server/mpm_unix.c

Hi Yann,

Thanks very much for your feedback.

I will send another update soon to address the restart issues.

Also, inactive CPUs will not be scheduled for httpd. I will change back 
_SC_NPROCESSORS_CONF to _SC_NPROCESSORS_ONLN.

Thanks,
Lucy

-Original Message-
From: Yann Ylavic [mailto:ylavic@gmail.com] 
Sent: Monday, October 06, 2014 1:12 AM
To: httpd
Subject: Re: svn commit: r1599531 - in /httpd/httpd/trunk: CHANGES 
include/ap_listen.h server/listen.c server/mpm/event/event.c 
server/mpm/prefork/prefork.c server/mpm/worker/worker.c server/mpm_unix.c

Hi Yingqi,

On Sun, Oct 5, 2014 at 11:36 PM, Lu, Yingqi yingqi...@intel.com wrote:
 To address your first comment, the issue with the pconf pool is that the 
 bucket array value needs to be retained across restarts and graceful restarts. 
 Based on your comments, I now put the bucket array into the retained_data 
 struct for all the MPMs. Hope this works.

The problem IMHO is that ap_daemons_limit (used to compute the size of the 
bucket array) may not be constant across restarts (depending on the new 
configuration).
Maybe you could use a retained bucket array to copy the current values before 
graceful restart and restore them after in the pconf allocated array (the one 
really used by the parent process and the new generation of children).
To address the memory leak, since the size may change, I think the retained 
array would have to be malloc()ed instead, and possibly realloc()ed on restarts 
(cleared when non graceful) if there is not enough space to handle the new 
generation (with a process pool cleanup registered the first time to free() the 
whole thing on stop, and make valgrind happy).

Also, since the number of listeners (children) needs to remain constant (IIRC, 
or connections may be reset), maybe you'll have to make sure on graceful 
restart that the previous generation of children has really stopped listening 
before starting new children. Maybe this is always the case already, but the 
race condition seems more problematic when SO_REUSEPORT is used.

 Regarding your second question, based on the previous patch code, num_buckets 
 is calculated from the active CPU thread count. I am thinking maybe it 
 is better to do the calculation based on the total number of CPU threads instead. 
 This keeps num_buckets constant as long as the system is 
 running. That is the reason I now change the CPU thread count check from 
 _SC_NPROCESSORS_ONLN to _SC_NPROCESSORS_CONF.

I must have missed the point here, will inactive CPUs be scheduled for httpd?
Otherwise, I don't see why they should be taken into account for the number of 
buckets...

Regards,
Yann.


httpd_trunk_SO_REUSEPORT_fix.patch
Description: httpd_trunk_SO_REUSEPORT_fix.patch


RE: svn commit: r1599531 - in /httpd/httpd/trunk: CHANGES include/ap_listen.h server/listen.c server/mpm/event/event.c server/mpm/prefork/prefork.c server/mpm/worker/worker.c server/mpm_unix.c

2014-10-05 Thread Lu, Yingqi
Here (attachment) is the most recent version of the fix. It fixes a small issue 
for the event MPM in the version I sent out yesterday. Please use this one as 
the final fix. I have already updated the Bugzilla database for Bug 55897 - 
[PATCH] patch with SO_REUSEPORT support

Jim/Jeff, can you please help review it and add it into the trunk?

Thanks very much for your help,
Yingqi

-Original Message-
From: Lu, Yingqi [mailto:yingqi...@intel.com] 
Sent: Saturday, October 04, 2014 10:54 PM
To: dev@httpd.apache.org
Subject: RE: svn commit: r1599531 - in /httpd/httpd/trunk: CHANGES 
include/ap_listen.h server/listen.c server/mpm/event/event.c 
server/mpm/prefork/prefork.c server/mpm/worker/worker.c server/mpm_unix.c

Hi Kaspar,

Thanks very much for testing the fixes; I am glad it works.

I tested it on prefork/worker/event as well. It was a universal issue affecting 
all the MPMs. It should all work now. I really appreciate your help.

I will update the Bugzilla database for Bug 55897 - [PATCH] patch with 
SO_REUSEPORT support.

Thanks!
Yingqi

-Original Message-
From: Kaspar Brand [mailto:httpd-dev.2...@velox.ch]
Sent: Saturday, October 04, 2014 10:48 PM
To: dev@httpd.apache.org
Subject: Re: svn commit: r1599531 - in /httpd/httpd/trunk: CHANGES 
include/ap_listen.h server/listen.c server/mpm/event/event.c 
server/mpm/prefork/prefork.c server/mpm/worker/worker.c server/mpm_unix.c

On 05.10.2014 02:27, Lu, Yingqi wrote:
 Kaspar, can you please test the patch and let us know if that resolves 
 your issue?

Yes, makes the restart issues disappear for me (only tested with the worker 
MPM, and not very extensively). Thanks.

 In the meantime, can someone please review the patch and help add it into 
 trunk?

I'll defer to Jeff or Jim (I'm definitely not familiar enough with anything 
below server/mpm/...).

Kaspar


httpd_trunk_SO_REUSEPORT_fix.patch
Description: httpd_trunk_SO_REUSEPORT_fix.patch


RE: svn commit: r1599531 - in /httpd/httpd/trunk: CHANGES include/ap_listen.h server/listen.c server/mpm/event/event.c server/mpm/prefork/prefork.c server/mpm/worker/worker.c server/mpm_unix.c

2014-10-05 Thread Lu, Yingqi
Hi Yann,

Thanks very much for your feedback. Attached is another update. Please review.

To address your first comment: the issue with the pconf pool is that the 
bucket array value needs to be retained across both restart and graceful 
restart. Based on your comments, I have now put the bucket array into the 
retained_data struct for all the MPMs. Hope this works.

Regarding your second question: in the previous patch code, num_buckets is 
calculated from the active CPU thread count. I am thinking it may be better to 
base the calculation on the total number of CPU threads instead. This keeps 
num_buckets a constant number for as long as the system is running. That is 
the reason I have now changed the CPU thread count check from 
_SC_NPROCESSORS_ONLN to _SC_NPROCESSORS_CONF. 

Please review and let me know your feedback.

Thanks,
Yingqi


-Original Message-
From: Yann Ylavic [mailto:ylavic@gmail.com] 
Sent: Sunday, October 05, 2014 12:20 PM
To: httpd
Subject: Re: svn commit: r1599531 - in /httpd/httpd/trunk: CHANGES 
include/ap_listen.h server/listen.c server/mpm/event/event.c 
server/mpm/prefork/prefork.c server/mpm/worker/worker.c server/mpm_unix.c

Hi Yingqi,

On Sun, Oct 5, 2014 at 8:38 PM, Lu, Yingqi yingqi...@intel.com wrote:
 Here (attachment) is the most recent version of the fix. It fixes a 
 small issue for event mpm in the version I sent out yesterday. Please 
 use this one as the final fix. I have already updated the Bugzilla 
 database for Bug 55897 - [PATCH]patch with SO_REUSEPORT support


I don't think we can use the process pool for allocations on restart, even if 
it does not occur on graceful restart, there is still non-graceful restarts 
that will cause leaks.

What exactly was the issue with using pconf?

 4. Change CPU thread count check from _SC_NPROCESSORS_ONLN to 
 _SC_NPROCESSORS_CONF. This makes sure num_buckets to be a constant as long as 
 the system is running. This change addresses the use case like: A user 
 offline some of the CPU threads and then restart httpd. In this case, I think 
 we need to make sure num_buckets does not change during the restart.

Why would httpd always ignore offline CPUs and not take that into account on 
restarts (at least non-graceful ones)?

Regards,
Yann.


httpd_trunk_SO_REUSEPORT_fix.patch
Description: httpd_trunk_SO_REUSEPORT_fix.patch


RE: svn commit: r1599531 - in /httpd/httpd/trunk: CHANGES include/ap_listen.h server/listen.c server/mpm/event/event.c server/mpm/prefork/prefork.c server/mpm/worker/worker.c server/mpm_unix.c

2014-10-05 Thread Lu, Yingqi
Hi Kaspar,

I just saw your comments on the bugzilla site. Thank you very much for your 
help there!

By the way, can you please try the most recent version of the fix (the one I 
sent out this afternoon) in your environment to see if that solves your restart 
issues?

Thanks very much!
Yingqi

-Original Message-
From: Lu, Yingqi [mailto:yingqi...@intel.com] 
Sent: Sunday, October 05, 2014 2:37 PM
To: dev@httpd.apache.org
Subject: RE: svn commit: r1599531 - in /httpd/httpd/trunk: CHANGES 
include/ap_listen.h server/listen.c server/mpm/event/event.c 
server/mpm/prefork/prefork.c server/mpm/worker/worker.c server/mpm_unix.c

Hi Yann,

Thanks very much for your feedback. Attached is another update. Please review.

To address your first comment: the issue with the pconf pool is that the 
bucket array value needs to be retained across both restart and graceful 
restart. Based on your comments, I have now put the bucket array into the 
retained_data struct for all the MPMs. Hope this works.

Regarding your second question: in the previous patch code, num_buckets is 
calculated from the active CPU thread count. I am thinking it may be better to 
base the calculation on the total number of CPU threads instead. This keeps 
num_buckets a constant number for as long as the system is running. That is 
the reason I have now changed the CPU thread count check from 
_SC_NPROCESSORS_ONLN to _SC_NPROCESSORS_CONF. 

Please review and let me know your feedback.

Thanks,
Yingqi


-Original Message-
From: Yann Ylavic [mailto:ylavic@gmail.com]
Sent: Sunday, October 05, 2014 12:20 PM
To: httpd
Subject: Re: svn commit: r1599531 - in /httpd/httpd/trunk: CHANGES 
include/ap_listen.h server/listen.c server/mpm/event/event.c 
server/mpm/prefork/prefork.c server/mpm/worker/worker.c server/mpm_unix.c

Hi Yingqi,

On Sun, Oct 5, 2014 at 8:38 PM, Lu, Yingqi yingqi...@intel.com wrote:
 Here (attachment) is the most recent version of the fix. It fixes a 
 small issue for event mpm in the version I sent out yesterday. Please 
 use this one as the final fix. I have already updated the Bugzilla 
 database for Bug 55897 - [PATCH]patch with SO_REUSEPORT support


I don't think we can use the process pool for allocations on restart, even if 
it does not occur on graceful restart, there is still non-graceful restarts 
that will cause leaks.

What exactly was the issue with using pconf?

 4. Change CPU thread count check from _SC_NPROCESSORS_ONLN to 
 _SC_NPROCESSORS_CONF. This makes sure num_buckets to be a constant as long as 
 the system is running. This change addresses the use case like: A user 
 offline some of the CPU threads and then restart httpd. In this case, I think 
 we need to make sure num_buckets does not change during the restart.

Why would httpd always ignore offline CPUs and not take that into account on 
restarts (at least non-graceful ones)?

Regards,
Yann.


RE: svn commit: r1599531 - in /httpd/httpd/trunk: CHANGES include/ap_listen.h server/listen.c server/mpm/event/event.c server/mpm/prefork/prefork.c server/mpm/worker/worker.c server/mpm_unix.c

2014-10-04 Thread Lu, Yingqi
Hi Kaspar,

Thanks for the email. I will try to duplicate your case and find a solution for 
it. 

Thanks,
Yingqi

-Original Message-
From: Kaspar Brand [mailto:httpd-dev.2...@velox.ch] 
Sent: Saturday, October 04, 2014 4:08 AM
To: dev@httpd.apache.org
Subject: Re: svn commit: r1599531 - in /httpd/httpd/trunk: CHANGES 
include/ap_listen.h server/listen.c server/mpm/event/event.c 
server/mpm/prefork/prefork.c server/mpm/worker/worker.c server/mpm_unix.c

 Author: jim
 Date: Tue Jun  3 13:07:29 2014
 New Revision: 1599531
 
 URL: http://svn.apache.org/r1599531
 Log:
 Optimize w/ duplicated listeners and use of SO_REUSEPORT where 
 available.
 
 Modified:
 httpd/httpd/trunk/CHANGES
 httpd/httpd/trunk/include/ap_listen.h
 httpd/httpd/trunk/server/listen.c
 httpd/httpd/trunk/server/mpm/event/event.c
 httpd/httpd/trunk/server/mpm/prefork/prefork.c
 httpd/httpd/trunk/server/mpm/worker/worker.c
 httpd/httpd/trunk/server/mpm_unix.c


With these changes, I'm getting segfaults with the worker MPM from current 
trunk (r1629257) when trying to gracefully restart, i.e. with SIGUSR1. 
Standard restarts (SIGHUP) seem to work better, though I'm getting "server 
seems busy" and "scoreboard is full" log entries and other errors in this case. 
A sample stack (CentOS 6 / x86_64) is shown below, in case it helps in tracking 
down the issue.

Kaspar


(gdb) bt f
#0  make_child (s=0x7f8b447c26f8, slot=0) at worker.c:1410
pid = <value optimized out>
#1  0x7f8b43342037 in server_main_loop (_pconf=<value optimized out>, 
plog=<value optimized out>,
s=<value optimized out>) at worker.c:1742
status = 0
pid = {pid = 2188, in = 0x7f8b447c26f8, out = 0x7f8b44791138, err = 
0x7f8b42811993}
i = <value optimized out>
old_gen = 0
child_slot = 0
exitwhy = APR_PROC_EXIT
processed_status = 0
#2  worker_run (_pconf=<value optimized out>, plog=<value optimized out>, 
s=<value optimized out>) at worker.c:1872
remaining_children_to_start = 3
rv = <value optimized out>
i = <value optimized out>
#3  0x7f8b432ffe7e in ap_run_mpm (pconf=0x7f8b44791138, 
plog=0x7f8b447be498, s=0x7f8b447c26f8)
at mpm_common.c:100
pHook = <value optimized out>
n = <value optimized out>
rv = -1
#4  0x7f8b432f940e in main (argc=1, argv=0x7bee4618) at main.c:799
c = 0 '\000'
showcompile = 0
showdirectives = 1148776984
confname = 0x7f8b433442ea "conf/httpd.conf"
def_server_root = 0x7f8b433442d1 "/home/apache-httpd/trunk"
temp_error_log = 0x0
error = <value optimized out>
process = 0x7f8b4478f218
pconf = 0x7f8b44791138
plog = 0x7f8b447be498
ptemp = 0x7f8b447bc348
pcommands = 0x7f8b447b3248
opt = 0x7f8b447b3338
rv = <value optimized out>
mod = <value optimized out>
opt_arg = 0x0
signal_server = <value optimized out>


RE: svn commit: r1599531 - in /httpd/httpd/trunk: CHANGES include/ap_listen.h server/listen.c server/mpm/event/event.c server/mpm/prefork/prefork.c server/mpm/worker/worker.c server/mpm_unix.c

2014-10-04 Thread Lu, Yingqi
Yes, I was able to duplicate both issues and attached is a patch which should 
fix them. Thanks again very much for the feedback, really appreciated!

The attached patch is based on httpd trunk r1629441. In this patch, the 
changes are:

1. Fix the graceful restart issue for the prefork/worker/event MPMs. 
2. Fix the "server seems busy" and "scoreboard is full" issue on restart for 
both the worker and event MPMs. Prefork does not have this issue.
3. Guard ap_daemons_to_start >= num_buckets. I mentioned this in a separate 
mail thread a couple of days ago; I merged the change here.
4. Change the CPU thread count check from _SC_NPROCESSORS_ONLN to 
_SC_NPROCESSORS_CONF. This makes sure num_buckets is a constant as long as 
the system is running. This change addresses a use case like: a user offlines 
some of the CPU threads and then restarts httpd. In this case, I think we need 
to make sure num_buckets does not change during the restart. 

Kaspar, can you please test the patch and let us know if that resolves your 
issue? 

In the meantime, can someone please review the patch and help add it into trunk?

Thanks,
Yingqi


-Original Message-
From: Lu, Yingqi [mailto:yingqi...@intel.com] 
Sent: Saturday, October 04, 2014 7:52 AM
To: dev@httpd.apache.org
Subject: RE: svn commit: r1599531 - in /httpd/httpd/trunk: CHANGES 
include/ap_listen.h server/listen.c server/mpm/event/event.c 
server/mpm/prefork/prefork.c server/mpm/worker/worker.c server/mpm_unix.c

Hi Kaspar,

Thanks for the email. I will try to duplicate your case and find a solution for 
it. 

Thanks,
Yingqi

-Original Message-
From: Kaspar Brand [mailto:httpd-dev.2...@velox.ch]
Sent: Saturday, October 04, 2014 4:08 AM
To: dev@httpd.apache.org
Subject: Re: svn commit: r1599531 - in /httpd/httpd/trunk: CHANGES 
include/ap_listen.h server/listen.c server/mpm/event/event.c 
server/mpm/prefork/prefork.c server/mpm/worker/worker.c server/mpm_unix.c

 Author: jim
 Date: Tue Jun  3 13:07:29 2014
 New Revision: 1599531
 
 URL: http://svn.apache.org/r1599531
 Log:
 Optimize w/ duplicated listeners and use of SO_REUSEPORT where 
 available.
 
 Modified:
 httpd/httpd/trunk/CHANGES
 httpd/httpd/trunk/include/ap_listen.h
 httpd/httpd/trunk/server/listen.c
 httpd/httpd/trunk/server/mpm/event/event.c
 httpd/httpd/trunk/server/mpm/prefork/prefork.c
 httpd/httpd/trunk/server/mpm/worker/worker.c
 httpd/httpd/trunk/server/mpm_unix.c


With these changes, I'm getting segfaults with the worker MPM from current 
trunk (r1629257) when trying to gracefully restart, i.e. with SIGUSR1. 
Standard restarts (SIGHUP) seem to work better, though I'm getting "server 
seems busy" and "scoreboard is full" log entries and other errors in this case. 
A sample stack (CentOS 6 / x86_64) is shown below, in case it helps in tracking 
down the issue.

Kaspar


(gdb) bt f
#0  make_child (s=0x7f8b447c26f8, slot=0) at worker.c:1410
pid = <value optimized out>
#1  0x7f8b43342037 in server_main_loop (_pconf=<value optimized out>, 
plog=<value optimized out>,
s=<value optimized out>) at worker.c:1742
status = 0
pid = {pid = 2188, in = 0x7f8b447c26f8, out = 0x7f8b44791138, err = 
0x7f8b42811993}
i = <value optimized out>
old_gen = 0
child_slot = 0
exitwhy = APR_PROC_EXIT
processed_status = 0
#2  worker_run (_pconf=<value optimized out>, plog=<value optimized out>, 
s=<value optimized out>) at worker.c:1872
remaining_children_to_start = 3
rv = <value optimized out>
i = <value optimized out>
#3  0x7f8b432ffe7e in ap_run_mpm (pconf=0x7f8b44791138, 
plog=0x7f8b447be498, s=0x7f8b447c26f8)
at mpm_common.c:100
pHook = <value optimized out>
n = <value optimized out>
rv = -1
#4  0x7f8b432f940e in main (argc=1, argv=0x7bee4618) at main.c:799
c = 0 '\000'
showcompile = 0
showdirectives = 1148776984
confname = 0x7f8b433442ea "conf/httpd.conf"
def_server_root = 0x7f8b433442d1 "/home/apache-httpd/trunk"
temp_error_log = 0x0
error = <value optimized out>
process = 0x7f8b4478f218
pconf = 0x7f8b44791138
plog = 0x7f8b447be498
ptemp = 0x7f8b447bc348
pcommands = 0x7f8b447b3248
opt = 0x7f8b447b3338
rv = <value optimized out>
mod = <value optimized out>
opt_arg = 0x0
signal_server = <value optimized out>


httpd_trunk_SO_REUSEPORT_fix.patch
Description: httpd_trunk_SO_REUSEPORT_fix.patch


RE: svn commit: r1599531 - in /httpd/httpd/trunk: CHANGES include/ap_listen.h server/listen.c server/mpm/event/event.c server/mpm/prefork/prefork.c server/mpm/worker/worker.c server/mpm_unix.c

2014-10-04 Thread Lu, Yingqi
Hi Kaspar,

Thanks very much for testing the fixes and I am glad it works. 

I tested it on prefork/worker/event as well. It was a universal issue across 
all the MPMs. They should all work now. I really appreciate your help.

I will update the Bugzilla database for Bug 55897 - [PATCH]patch with 
SO_REUSEPORT support. 

Thanks!
Yingqi

-Original Message-
From: Kaspar Brand [mailto:httpd-dev.2...@velox.ch] 
Sent: Saturday, October 04, 2014 10:48 PM
To: dev@httpd.apache.org
Subject: Re: svn commit: r1599531 - in /httpd/httpd/trunk: CHANGES 
include/ap_listen.h server/listen.c server/mpm/event/event.c 
server/mpm/prefork/prefork.c server/mpm/worker/worker.c server/mpm_unix.c

On 05.10.2014 02:27, Lu, Yingqi wrote:
 Kaspar, can you please test the patch and let us know if that resolves 
 your issue?

Yes, makes the restart issues disappear for me (only tested with the worker 
MPM, and not very extensively). Thanks.

 In the meantime, can someone please review the patch and help add it into 
 trunk?

I'll defer to Jeff or Jim (I'm definitely not familiar enough with anything 
below server/mpm/...).

Kaspar


RE: [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT support

2014-10-02 Thread Lu, Yingqi
Hi Jim,

I found that the original patch code does not guard that ap_daemons_to_start 
is equal to or bigger than num_buckets. Since each bucket has a dedicated 
listener assigned to it, I think we need to make sure that each bucket has at 
least 1 child process to start with.

I created this small patch to fix the issue. Can you please help to add the 
changes into the trunk?

Thanks,
Yingqi

-Original Message-
From: Lu, Yingqi [mailto:yingqi...@intel.com] 
Sent: Thursday, June 05, 2014 9:12 AM
To: dev@httpd.apache.org
Subject: RE: [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT 
support

Thanks very much for your help!

-Original Message-
From: Jim Jagielski [mailto:j...@jagunet.com]
Sent: Thursday, June 05, 2014 6:38 AM
To: dev@httpd.apache.org
Subject: Re: [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT 
support

Committed r1600656

Thx
On Jun 4, 2014, at 3:39 PM, Lu, Yingqi yingqi...@intel.com wrote:

 Hi Jim,
 
 I just found that prefork and worker have an issue with restart. The event 
 MPM code is good. 
 
 I created this small patch to fix the issue for both prefork and worker. The 
 patch is based on rev #1600451.
 
 Can you please help add the changes in the trunk?
 
 Thanks,
 Yingqi
 
 -Original Message-
 From: Lu, Yingqi [mailto:yingqi...@intel.com]
 Sent: Tuesday, June 03, 2014 8:50 AM
 To: dev@httpd.apache.org
 Subject: RE: [PATCH ASF bugzilla# 55897]prefork_mpm patch with 
 SO_REUSEPORT support
 
 Thank you very much for your help!
 
 Thanks,
 Yingqi
 
 -Original Message-
 From: Jim Jagielski [mailto:j...@jagunet.com]
 Sent: Tuesday, June 03, 2014 8:31 AM
 To: dev@httpd.apache.org
 Subject: Re: [PATCH ASF bugzilla# 55897]prefork_mpm patch with 
 SO_REUSEPORT support
 
 Next on the agenda is to push into eventopt
 
 On Jun 3, 2014, at 10:21 AM, Jim Jagielski j...@jagunet.com wrote:
 
 FTR: I saw no reason to try to handle both patches... I used the 
 so_reuseport patch as the primary patch to focus on.
 
 I have some minor changes coming up to follow-up post the initial 
 commit
 
 On Jun 3, 2014, at 8:51 AM, Jim Jagielski j...@jagunet.com wrote:
 
 I have folded this into trunk and am currently fixing some compile 
 warnings and errors...
 
 On Jun 2, 2014, at 4:22 AM, Lu, Yingqi yingqi...@intel.com wrote:
 
 Hi Jim,
 
 Personally, I think the second approach is better, it keeps 
 ap_mpm_pod_signal () and ap_mpm_pod_killpg () exactly as the original 
 ones, only modifies dummy_connection (). Please let me know if you have 
 different opinions.
 
 Attached is the latest version of the two patches. They were both 
 generated against trunk rev. 1598561. Please review them and let me know 
 if there is anything missing.
 
 I already updated the Bugzilla database for the item 55897 and item 56279.
 
 Thanks,
 Yingqi
 
 -Original Message-
 From: Lu, Yingqi [mailto:yingqi...@intel.com]
 Sent: Saturday, May 31, 2014 11:48 PM
 To: dev@httpd.apache.org
 Subject: RE: [PATCH ASF bugzilla# 55897]prefork_mpm patch with 
 SO_REUSEPORT support
 
 Hi Jim,
 
 Regarding your comment #2: yes, you are right, it should be 
 ap_mpm_pod_killpg(pod, retained->max_daemons_limit, i). Thanks very much 
 for catching this.
 
 Regarding your comment #1: the patch modifies 
 dummy_connection(ap_pod_t *pod) to be dummy_connection(ap_pod_t *pod, int 
 child_bucket). Inside the function, the referenced listen statement is 
 mpm_listen[child_bucket], and ap_mpm_pod_signal() calls 
 dummy_connection(). 
 
 Can we just modify the return of ap_mpm_pod_signal() from 
 dummy_connection(pod) to dummy_connection(pod, 0) and add 
 ap_mpm_pod_signal_ex()? 
 
 Or, if we need to keep ap_mpm_pod_signal() exactly as the original, I can 
 modify dummy_connection() to send dummy data via all the duplicated listen 
 statements. Then, we do not need child_bucket as the input parameter for 
 dummy_connection(). In this case, we do not need adding 
 ap_mpm_pod_signal_ex() too.
 
 I already tested the code for the above approaches and they both work. 
 
 Please let me know which way you think is better. I can quickly send you 
 an update for review.
 
 Thanks,
 Yingqi
 
 -Original Message-
 From: Lu, Yingqi
 Sent: Saturday, May 31, 2014 3:28 PM
 To: dev@httpd.apache.org
 Subject: Re: [PATCH ASF bugzilla# 55897]prefork_mpm patch with 
 SO_REUSEPORT support
 
 Hi Jim,
 
 Thanks very much for your email! I will look into both of them and send an 
 update tonight!
 
 Thanks,
 Yingqi
 
 On May 31, 2014, at 9:43 AM, Jim Jagielski j...@jagunet.com wrote:
 
 I also see:
 
 /* kill off the idle ones */
 -ap_mpm_pod_killpg(pod, retained->max_daemons_limit);
 +for (i = 0; i < num_buckets; i++) {
 +ap_mpm_pod_killpg(pod[i], i, retained->max_daemons_limit);
 +}
 
 
 Is that right? Why isn't it: ap_mpm_pod_killpg(pod, 
 retained->max_daemons_limit, i); ??
 
 /**
 * Write data to the pipe-of-death, signalling that all child 
 process
 * should die

RE: [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT support

2014-06-05 Thread Lu, Yingqi
Thanks very much for your help!

-Original Message-
From: Jim Jagielski [mailto:j...@jagunet.com] 
Sent: Thursday, June 05, 2014 6:38 AM
To: dev@httpd.apache.org
Subject: Re: [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT 
support

Committed r1600656

Thx
On Jun 4, 2014, at 3:39 PM, Lu, Yingqi yingqi...@intel.com wrote:

 Hi Jim,
 
 I just found that prefork and worker have an issue with restart. The event 
 MPM code is good. 
 
 I created this small patch to fix the issue for both prefork and worker. The 
 patch is based on rev #1600451.
 
 Can you please help add the changes in the trunk?
 
 Thanks,
 Yingqi
 
 -Original Message-
 From: Lu, Yingqi [mailto:yingqi...@intel.com]
 Sent: Tuesday, June 03, 2014 8:50 AM
 To: dev@httpd.apache.org
 Subject: RE: [PATCH ASF bugzilla# 55897]prefork_mpm patch with 
 SO_REUSEPORT support
 
 Thank you very much for your help!
 
 Thanks,
 Yingqi
 
 -Original Message-
 From: Jim Jagielski [mailto:j...@jagunet.com]
 Sent: Tuesday, June 03, 2014 8:31 AM
 To: dev@httpd.apache.org
 Subject: Re: [PATCH ASF bugzilla# 55897]prefork_mpm patch with 
 SO_REUSEPORT support
 
 Next on the agenda is to push into eventopt
 
 On Jun 3, 2014, at 10:21 AM, Jim Jagielski j...@jagunet.com wrote:
 
 FTR: I saw no reason to try to handle both patches... I used the 
 so_reuseport patch as the primary patch to focus on.
 
 I have some minor changes coming up to follow-up post the initial 
 commit
 
 On Jun 3, 2014, at 8:51 AM, Jim Jagielski j...@jagunet.com wrote:
 
 I have folded this into trunk and am currently fixing some compile 
 warnings and errors...
 
 On Jun 2, 2014, at 4:22 AM, Lu, Yingqi yingqi...@intel.com wrote:
 
 Hi Jim,
 
 Personally, I think the second approach is better, it keeps 
 ap_mpm_pod_signal () and ap_mpm_pod_killpg () exactly as the original 
 ones, only modifies dummy_connection (). Please let me know if you have 
 different opinions.
 
 Attached is the latest version of the two patches. They were both 
 generated against trunk rev. 1598561. Please review them and let me know 
 if there is anything missing.
 
 I already updated the Bugzilla database for the item 55897 and item 56279.
 
 Thanks,
 Yingqi
 
 -Original Message-
 From: Lu, Yingqi [mailto:yingqi...@intel.com]
 Sent: Saturday, May 31, 2014 11:48 PM
 To: dev@httpd.apache.org
 Subject: RE: [PATCH ASF bugzilla# 55897]prefork_mpm patch with 
 SO_REUSEPORT support
 
 Hi Jim,
 
 Regarding your comment #2: yes, you are right, it should be 
 ap_mpm_pod_killpg(pod, retained->max_daemons_limit, i). Thanks very much 
 for catching this.
 
 Regarding your comment #1: the patch modifies 
 dummy_connection(ap_pod_t *pod) to be dummy_connection(ap_pod_t *pod, int 
 child_bucket). Inside the function, the referenced listen statement is 
 mpm_listen[child_bucket], and ap_mpm_pod_signal() calls 
 dummy_connection(). 
 
 Can we just modify the return of ap_mpm_pod_signal() from 
 dummy_connection(pod) to dummy_connection(pod, 0) and add 
 ap_mpm_pod_signal_ex()? 
 
 Or, if we need to keep ap_mpm_pod_signal() exactly as the original, I can 
 modify dummy_connection() to send dummy data via all the duplicated listen 
 statements. Then, we do not need child_bucket as the input parameter for 
 dummy_connection(). In this case, we do not need adding 
 ap_mpm_pod_signal_ex() too.
 
 I already tested the code for the above approaches and they both work. 
 
 Please let me know which way you think is better. I can quickly send you 
 an update for review.
 
 Thanks,
 Yingqi
 
 -Original Message-
 From: Lu, Yingqi
 Sent: Saturday, May 31, 2014 3:28 PM
 To: dev@httpd.apache.org
 Subject: Re: [PATCH ASF bugzilla# 55897]prefork_mpm patch with 
 SO_REUSEPORT support
 
 Hi Jim,
 
 Thanks very much for your email! I will look into both of them and send an 
 update tonight!
 
 Thanks,
 Yingqi
 
 On May 31, 2014, at 9:43 AM, Jim Jagielski j...@jagunet.com wrote:
 
 I also see:
 
  /* kill off the idle ones */
 -ap_mpm_pod_killpg(pod, retained->max_daemons_limit);
 +for (i = 0; i < num_buckets; i++) {
 +ap_mpm_pod_killpg(pod[i], i, retained->max_daemons_limit);
 +}
 
 
 Is that right? Why isn't it: ap_mpm_pod_killpg(pod, 
 retained->max_daemons_limit, i); ??
 
 /**
 * Write data to the pipe-of-death, signalling that all child 
 process
 * should die.
 * @param pod The pipe-of-death to write to.
 * @param num The number of child processes to kill
 + * @param my_bucket the bucket that holds the dying child process.
 */
 -AP_DECLARE(void) ap_mpm_pod_killpg(ap_pod_t *pod, int num);
 +AP_DECLARE(void) ap_mpm_pod_killpg(ap_pod_t *pod, int num, int 
 +child_bucket);
 
 Isn't 'num' the same in both implementation??
 
 On May 31, 2014, at 12:03 PM, Jim Jagielski j...@jagunet.com wrote:
 
 Sorry I didn't catch this earlier:
 
 I see
 
 +++ httpd-trunk.new/include/mpm_common.h2014-05-16 
 13:07:03.892987491 -0400
 @@ -267,16 +267,18 @@
 * Write data to the pipe

RE: [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT support

2014-06-04 Thread Lu, Yingqi
Hi Jim,

I just found that prefork and worker have an issue with restart. The event MPM 
code is good. 

I created this small patch to fix the issue for both prefork and worker. The 
patch is based on rev #1600451.

Can you please help add the changes in the trunk?

Thanks,
Yingqi

-Original Message-
From: Lu, Yingqi [mailto:yingqi...@intel.com] 
Sent: Tuesday, June 03, 2014 8:50 AM
To: dev@httpd.apache.org
Subject: RE: [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT 
support

Thank you very much for your help!

Thanks,
Yingqi

-Original Message-
From: Jim Jagielski [mailto:j...@jagunet.com]
Sent: Tuesday, June 03, 2014 8:31 AM
To: dev@httpd.apache.org
Subject: Re: [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT 
support

Next on the agenda is to push into eventopt

On Jun 3, 2014, at 10:21 AM, Jim Jagielski j...@jagunet.com wrote:

 FTR: I saw no reason to try to handle both patches... I used the 
 so_reuseport patch as the primary patch to focus on.
 
 I have some minor changes coming up to follow-up post the initial 
 commit
 
 On Jun 3, 2014, at 8:51 AM, Jim Jagielski j...@jagunet.com wrote:
 
 I have folded this into trunk and am currently fixing some compile 
 warnings and errors...
 
 On Jun 2, 2014, at 4:22 AM, Lu, Yingqi yingqi...@intel.com wrote:
 
 Hi Jim,
 
 Personally, I think the second approach is better, it keeps 
 ap_mpm_pod_signal () and ap_mpm_pod_killpg () exactly as the original ones, 
 only modifies dummy_connection (). Please let me know if you have different 
 opinions.
 
 Attached is the latest version of the two patches. They were both generated 
 against trunk rev. 1598561. Please review them and let me know if there is 
 anything missing.
 
 I already updated the Bugzilla database for the item 55897 and item 56279.
 
 Thanks,
 Yingqi
 
 -Original Message-
 From: Lu, Yingqi [mailto:yingqi...@intel.com]
 Sent: Saturday, May 31, 2014 11:48 PM
 To: dev@httpd.apache.org
 Subject: RE: [PATCH ASF bugzilla# 55897]prefork_mpm patch with 
 SO_REUSEPORT support
 
 Hi Jim,
 
 Regarding your comment #2: yes, you are right, it should be 
 ap_mpm_pod_killpg(pod, retained->max_daemons_limit, i). Thanks very much 
 for catching this.
 
 Regarding your comment #1: the patch modifies 
 dummy_connection(ap_pod_t *pod) to be dummy_connection(ap_pod_t *pod, int 
 child_bucket). Inside the function, the referenced listen statement is 
 mpm_listen[child_bucket], and ap_mpm_pod_signal() calls 
 dummy_connection(). 
 
 Can we just modify the return of ap_mpm_pod_signal() from 
 dummy_connection(pod) to dummy_connection(pod, 0) and add 
 ap_mpm_pod_signal_ex()? 
 
 Or, if we need to keep ap_mpm_pod_signal() exactly as the original, I can 
 modify dummy_connection() to send dummy data via all the duplicated listen 
 statements. Then, we do not need child_bucket as the input parameter for 
 dummy_connection(). In this case, we do not need adding 
 ap_mpm_pod_signal_ex() too.
 
 I already tested the code for the above approaches and they both work. 
 
 Please let me know which way you think is better. I can quickly send you an 
 update for review.
 
 Thanks,
 Yingqi
 
 -Original Message-
 From: Lu, Yingqi
 Sent: Saturday, May 31, 2014 3:28 PM
 To: dev@httpd.apache.org
 Subject: Re: [PATCH ASF bugzilla# 55897]prefork_mpm patch with 
 SO_REUSEPORT support
 
 Hi Jim,
 
 Thanks very much for your email! I will look into both of them and send an 
 update tonight!
 
 Thanks,
 Yingqi
 
 On May 31, 2014, at 9:43 AM, Jim Jagielski j...@jagunet.com wrote:
 
 I also see:
 
  /* kill off the idle ones */
 -ap_mpm_pod_killpg(pod, retained->max_daemons_limit);
 +for (i = 0; i < num_buckets; i++) {
 +ap_mpm_pod_killpg(pod[i], i, retained->max_daemons_limit);
 +}
 
 
 Is that right? Why isn't it: ap_mpm_pod_killpg(pod, 
 retained->max_daemons_limit, i); ??
 
 /**
 * Write data to the pipe-of-death, signalling that all child 
 process
 * should die.
 * @param pod The pipe-of-death to write to.
 * @param num The number of child processes to kill
 + * @param my_bucket the bucket that holds the dying child process.
 */
 -AP_DECLARE(void) ap_mpm_pod_killpg(ap_pod_t *pod, int num);
 +AP_DECLARE(void) ap_mpm_pod_killpg(ap_pod_t *pod, int num, int 
 +child_bucket);
 
 Isn't 'num' the same in both implementation??
 
 On May 31, 2014, at 12:03 PM, Jim Jagielski j...@jagunet.com wrote:
 
 Sorry I didn't catch this earlier:
 
 I see
 
 +++ httpd-trunk.new/include/mpm_common.h2014-05-16 13:07:03.892987491 
 -0400
 @@ -267,16 +267,18 @@
 * Write data to the pipe-of-death, signalling that one child 
 process
 * should die.
 * @param pod the pipe-of-death to write to.
 + * @param my_bucket the bucket that holds the dying child process.
 */
 -AP_DECLARE(apr_status_t) ap_mpm_pod_signal(ap_pod_t *pod);
 +AP_DECLARE(apr_status_t) ap_mpm_pod_signal(ap_pod_t *pod, int 
 +child_bucket);
 
 We can't change the API at this point. We could add

RE: [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT support

2014-06-03 Thread Lu, Yingqi
Thank you very much for your help!

Thanks,
Yingqi

-Original Message-
From: Jim Jagielski [mailto:j...@jagunet.com] 
Sent: Tuesday, June 03, 2014 8:31 AM
To: dev@httpd.apache.org
Subject: Re: [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT 
support

Next on the agenda is to push into eventopt

On Jun 3, 2014, at 10:21 AM, Jim Jagielski j...@jagunet.com wrote:

 FTR: I saw no reason to try to handle both patches... I used the 
 so_reuseport patch as the primary patch to focus on.
 
 I have some minor changes coming up to follow-up post the initial 
 commit
 
 On Jun 3, 2014, at 8:51 AM, Jim Jagielski j...@jagunet.com wrote:
 
 I have folded this into trunk and am currently fixing some compile 
 warnings and errors...
 
 On Jun 2, 2014, at 4:22 AM, Lu, Yingqi yingqi...@intel.com wrote:
 
 Hi Jim,
 
 Personally, I think the second approach is better, it keeps 
 ap_mpm_pod_signal () and ap_mpm_pod_killpg () exactly as the original ones, 
 only modifies dummy_connection (). Please let me know if you have different 
 opinions.
 
 Attached is the latest version of the two patches. They were both generated 
 against trunk rev. 1598561. Please review them and let me know if there is 
 anything missing.
 
 I already updated the Bugzilla database for the item 55897 and item 56279.
 
 Thanks,
 Yingqi
 
 -Original Message-
 From: Lu, Yingqi [mailto:yingqi...@intel.com]
 Sent: Saturday, May 31, 2014 11:48 PM
 To: dev@httpd.apache.org
 Subject: RE: [PATCH ASF bugzilla# 55897]prefork_mpm patch with 
 SO_REUSEPORT support
 
 Hi Jim,
 
 Regarding your comment #2: yes, you are right, it should be 
 ap_mpm_pod_killpg(pod, retained->max_daemons_limit, i). Thanks very much 
 for catching this.
 
 Regarding your comment #1: the patch modifies 
 dummy_connection(ap_pod_t *pod) to be dummy_connection(ap_pod_t *pod, int 
 child_bucket). Inside the function, the referenced listen statement is 
 mpm_listen[child_bucket], and ap_mpm_pod_signal() calls 
 dummy_connection(). 
 
 Can we just modify the return of ap_mpm_pod_signal() from 
 dummy_connection(pod) to dummy_connection(pod, 0) and add 
 ap_mpm_pod_signal_ex()? 
 
 Or, if we need to keep ap_mpm_pod_signal() exactly as the original, I can 
 modify dummy_connection() to send dummy data via all the duplicated listen 
 statements. Then, we do not need child_bucket as the input parameter for 
 dummy_connection(). In this case, we do not need adding 
 ap_mpm_pod_signal_ex() too.
 
 I already tested the code for the above approaches and they both work. 
 
 Please let me know which way you think is better. I can quickly send you an 
 update for review.
 
 Thanks,
 Yingqi
 
 -Original Message-
 From: Lu, Yingqi
 Sent: Saturday, May 31, 2014 3:28 PM
 To: dev@httpd.apache.org
 Subject: Re: [PATCH ASF bugzilla# 55897]prefork_mpm patch with 
 SO_REUSEPORT support
 
 Hi Jim,
 
 Thanks very much for your email! I will look into both of them and send an 
 update tonight!
 
 Thanks,
 Yingqi
 
 On May 31, 2014, at 9:43 AM, Jim Jagielski j...@jagunet.com wrote:
 
 I also see:
 
  /* kill off the idle ones */
 -ap_mpm_pod_killpg(pod, retained->max_daemons_limit);
 +for (i = 0; i < num_buckets; i++) {
 +    ap_mpm_pod_killpg(pod[i], i, retained->max_daemons_limit);
 +}
 
 
 Is that right? Why isn't it: ap_mpm_pod_killpg(pod, 
 retained->max_daemons_limit, i); ??
 
 /**
 * Write data to the pipe-of-death, signalling that all child 
 process
 * should die.
 * @param pod The pipe-of-death to write to.
 * @param num The number of child processes to kill
 + * @param my_bucket the bucket that holds the dying child process.
 */
 -AP_DECLARE(void) ap_mpm_pod_killpg(ap_pod_t *pod, int num);
 +AP_DECLARE(void) ap_mpm_pod_killpg(ap_pod_t *pod, int num, int child_bucket);
 
 Isn't 'num' the same in both implementations?
 
 On May 31, 2014, at 12:03 PM, Jim Jagielski j...@jagunet.com wrote:
 
 Sorry I didn't catch this earlier:
 
 I see
 
 +++ httpd-trunk.new/include/mpm_common.h  2014-05-16 13:07:03.892987491 -0400
 @@ -267,16 +267,18 @@
 * Write data to the pipe-of-death, signalling that one child 
 process
 * should die.
 * @param pod the pipe-of-death to write to.
 + * @param my_bucket the bucket that holds the dying child process.
 */
 -AP_DECLARE(apr_status_t) ap_mpm_pod_signal(ap_pod_t *pod);
 +AP_DECLARE(apr_status_t) ap_mpm_pod_signal(ap_pod_t *pod, int child_bucket);
 
 We can't change the API at this point. We could add another 
 function, e.g. ap_mpm_pod_signal_ex(), which takes the int param, but 
 we can't modify ap_mpm_pod_signal() itself.
 
 On May 30, 2014, at 11:15 AM, Lu, Yingqi yingqi...@intel.com wrote:
 
 Thank you very much!
 
 Thanks,
 Yingqi
 
 -Original Message-
 From: Jim Jagielski [mailto:j...@jagunet.com]
 Sent: Friday, May 30, 2014 7:07 AM
 To: dev@httpd.apache.org
 Subject: Re: [PATCH ASF bugzilla# 55897]prefork_mpm patch with 
 SO_REUSEPORT support
 
 Thx! Let me review. My plan

RE: [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT support

2014-06-01 Thread Lu, Yingqi
Hi Jim,

Regarding your comment #2, yes, you are right, it should be 
ap_mpm_pod_killpg(pod, retained->max_daemons_limit, i). Thanks very much for 
catching this.

Regarding your comment #1, the patch modifies dummy_connection(ap_pod_t 
*pod) to be dummy_connection(ap_pod_t *pod, int child_bucket). Inside the 
function, the referenced listen statement is mpm_listen[child_bucket], and 
ap_mpm_pod_signal() calls dummy_connection(). 

Can we just modify the return of ap_mpm_pod_signal() from dummy_connection(pod) 
to dummy_connection(pod, 0) and add ap_mpm_pod_signal_ex()? 

Or, if we need to keep ap_mpm_pod_signal() exactly as the original, I can 
modify dummy_connection() to send dummy data via all the duplicated listen 
statements. Then, we do not need child_bucket as the input parameter for 
dummy_connection(). In this case, we do not need adding ap_mpm_pod_signal_ex() 
too.

I already tested the code for the above approaches and they both work. 

Please let me know which way you think is better. I can quickly send you an 
update for review.

Thanks,
Yingqi

-Original Message-
From: Lu, Yingqi 
Sent: Saturday, May 31, 2014 3:28 PM
To: dev@httpd.apache.org
Subject: Re: [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT 
support

Hi Jim,

Thanks very much for your email! I will look into both of them and send an 
update tonight!

Thanks,
Yingqi

 On May 31, 2014, at 9:43 AM, Jim Jagielski j...@jagunet.com wrote:
 
 I also see:
 
 /* kill off the idle ones */
 -ap_mpm_pod_killpg(pod, retained->max_daemons_limit);
 +for (i = 0; i < num_buckets; i++) {
 +    ap_mpm_pod_killpg(pod[i], i, retained->max_daemons_limit);
 +}
 
 
 Is that right? Why isn't it: ap_mpm_pod_killpg(pod, 
 retained->max_daemons_limit, i); ??
 
 /**
  * Write data to the pipe-of-death, signalling that all child process
  * should die.
  * @param pod The pipe-of-death to write to.
  * @param num The number of child processes to kill
 + * @param my_bucket the bucket that holds the dying child process.
  */
 -AP_DECLARE(void) ap_mpm_pod_killpg(ap_pod_t *pod, int num);
 +AP_DECLARE(void) ap_mpm_pod_killpg(ap_pod_t *pod, int num, int child_bucket);
 
 Isn't 'num' the same in both implementations?
 
 On May 31, 2014, at 12:03 PM, Jim Jagielski j...@jagunet.com wrote:
 
 Sorry I didn't catch this earlier:
 
 I see
 
 +++ httpd-trunk.new/include/mpm_common.h  2014-05-16 13:07:03.892987491 -0400
 @@ -267,16 +267,18 @@
 * Write data to the pipe-of-death, signalling that one child process
 * should die.
 * @param pod the pipe-of-death to write to.
 + * @param my_bucket the bucket that holds the dying child process.
 */
 -AP_DECLARE(apr_status_t) ap_mpm_pod_signal(ap_pod_t *pod);
 +AP_DECLARE(apr_status_t) ap_mpm_pod_signal(ap_pod_t *pod, int child_bucket);
 
 We can't change the API at this point. We could add another 
 function, e.g. ap_mpm_pod_signal_ex(), which takes the int param, but 
 we can't modify ap_mpm_pod_signal() itself.
 
 On May 30, 2014, at 11:15 AM, Lu, Yingqi yingqi...@intel.com wrote:
 
 Thank you very much!
 
 Thanks,
 Yingqi
 
 -Original Message-
 From: Jim Jagielski [mailto:j...@jagunet.com]
 Sent: Friday, May 30, 2014 7:07 AM
 To: dev@httpd.apache.org
 Subject: Re: [PATCH ASF bugzilla# 55897]prefork_mpm patch with 
 SO_REUSEPORT support
 
 Thx! Let me review. My plan is to fold into trunk this weekend.
 
 On May 16, 2014, at 2:53 PM, Lu, Yingqi yingqi...@intel.com wrote:
 
 Hi Jim,
 
 Thanks very much for clarifying this with me. I added #ifdef in the code 
 to check _SC_NPROCESSORS_ONLN in the so_reuseport patch. Bucket patch does 
 not use this parameter so that it remains the same.
 
 Attached are the two most recent patches. I already updated the bugzilla 
 #55897 as well.
 
 Thanks,
 Yingqi
 
 -Original Message-
 From: Jim Jagielski [mailto:j...@jagunet.com]
 Sent: Thursday, May 15, 2014 7:53 AM
 To: dev@httpd.apache.org
 Subject: Re: [PATCH ASF bugzilla# 55897]prefork_mpm patch with 
 SO_REUSEPORT support
 
 I was thinking more about the sysconf(_SC_NPROCESSORS_ONLN) stuff...
 We could either check for that during config/build or protect it with a 
 #ifdef in the code (and create some logging so the admin knows if it was 
 found or not).
 
 On May 14, 2014, at 11:59 AM, Lu, Yingqi yingqi...@intel.com wrote:
 
 Hi Jim,
 
 Thanks very much for your email.
 
 In the SO_REUSEPORT patch, SO_REUSEPORT support is checked inside 
 listen.c file. If the feature is not supported on the OS (for example, 
 Linux kernel < 3.9), it will fall back to the original behavior. 
 In the bucket patch, there is no need to check the params. With single 
 listen statement, it is just the default behavior. 
 
 Please let me know if this answers your question.
 
 Thanks,
 Yingqi
 
 -Original Message-
 From: Jim Jagielski [mailto:j...@jagunet.com]
 Sent: Wednesday, May 14, 2014 6:57 AM
 To: dev@httpd.apache.org
 Subject: Re: [PATCH ASF bugzilla# 55897

Re: [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT support

2014-05-31 Thread Lu, Yingqi
Hi Jim,

Thanks very much for your email! I will look into both of them and send an 
update tonight!

Thanks,
Yingqi

 On May 31, 2014, at 9:43 AM, Jim Jagielski j...@jagunet.com wrote:
 
 I also see:
 
 /* kill off the idle ones */
 -ap_mpm_pod_killpg(pod, retained->max_daemons_limit);
 +for (i = 0; i < num_buckets; i++) {
 +    ap_mpm_pod_killpg(pod[i], i, retained->max_daemons_limit);
 +}
 
 
 Is that right? Why isn't it: ap_mpm_pod_killpg(pod, 
 retained->max_daemons_limit, i); ??
 
 /**
  * Write data to the pipe-of-death, signalling that all child process
  * should die.
  * @param pod The pipe-of-death to write to.
  * @param num The number of child processes to kill
 + * @param my_bucket the bucket that holds the dying child process.
  */
 -AP_DECLARE(void) ap_mpm_pod_killpg(ap_pod_t *pod, int num);
 +AP_DECLARE(void) ap_mpm_pod_killpg(ap_pod_t *pod, int num, int child_bucket);
 
 Isn't 'num' the same in both implementations?
 
 On May 31, 2014, at 12:03 PM, Jim Jagielski j...@jagunet.com wrote:
 
 Sorry I didn't catch this earlier:
 
 I see
 
 +++ httpd-trunk.new/include/mpm_common.h  2014-05-16 13:07:03.892987491 -0400
 @@ -267,16 +267,18 @@
 * Write data to the pipe-of-death, signalling that one child process
 * should die.
 * @param pod the pipe-of-death to write to.
 + * @param my_bucket the bucket that holds the dying child process.
 */
 -AP_DECLARE(apr_status_t) ap_mpm_pod_signal(ap_pod_t *pod);
 +AP_DECLARE(apr_status_t) ap_mpm_pod_signal(ap_pod_t *pod, int child_bucket);
 
 We can't change the API at this point. We could
 add another function, e.g. ap_mpm_pod_signal_ex(), which
 takes the int param, but we can't modify ap_mpm_pod_signal()
 itself.
 
 On May 30, 2014, at 11:15 AM, Lu, Yingqi yingqi...@intel.com wrote:
 
 Thank you very much!
 
 Thanks,
 Yingqi
 
 -Original Message-
 From: Jim Jagielski [mailto:j...@jagunet.com] 
 Sent: Friday, May 30, 2014 7:07 AM
 To: dev@httpd.apache.org
 Subject: Re: [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT 
 support
 
 Thx! Let me review. My plan is to fold into trunk this weekend.
 
 On May 16, 2014, at 2:53 PM, Lu, Yingqi yingqi...@intel.com wrote:
 
 Hi Jim,
 
 Thanks very much for clarifying this with me. I added #ifdef in the code 
 to check _SC_NPROCESSORS_ONLN in the so_reuseport patch. Bucket patch does 
 not use this parameter so that it remains the same.
 
 Attached are the two most recent patches. I already updated the bugzilla 
 #55897 as well.
 
 Thanks,
 Yingqi
 
 -Original Message-
 From: Jim Jagielski [mailto:j...@jagunet.com]
 Sent: Thursday, May 15, 2014 7:53 AM
 To: dev@httpd.apache.org
 Subject: Re: [PATCH ASF bugzilla# 55897]prefork_mpm patch with 
 SO_REUSEPORT support
 
 I was thinking more about the sysconf(_SC_NPROCESSORS_ONLN) stuff...
 We could either check for that during config/build or protect it with a 
 #ifdef in the code (and create some logging so the admin knows if it was 
 found or not).
 
 On May 14, 2014, at 11:59 AM, Lu, Yingqi yingqi...@intel.com wrote:
 
 Hi Jim,
 
 Thanks very much for your email.
 
 In the SO_REUSEPORT patch, SO_REUSEPORT support is checked inside 
 listen.c file. If the feature is not supported on the OS (for example, 
 Linux kernel < 3.9), it will fall back to the original behavior. 
 
 In the bucket patch, there is no need to check the params. With single 
 listen statement, it is just the default behavior. 
 
 Please let me know if this answers your question.
 
 Thanks,
 Yingqi
 
 -Original Message-
 From: Jim Jagielski [mailto:j...@jagunet.com]
 Sent: Wednesday, May 14, 2014 6:57 AM
 To: dev@httpd.apache.org
 Subject: Re: [PATCH ASF bugzilla# 55897]prefork_mpm patch with 
 SO_REUSEPORT support
 
 This is very cool!
 
 mod_status assumes that sysconf() exists, but do we need to do a config 
 check on the params we use in these patches?
 We look OK on Linux, FreeBSD and OSX...
 
 I'm +1 on folding into trunk.
 
 On May 13, 2014, at 7:55 PM, Lu, Yingqi yingqi...@intel.com wrote:
 
 Dear All,
 
 During the last couple weeks, I spent some time extending the original 
 two patches from prefork MPM only to all three Linux MPMs (prefork, 
 worker and event). Attached is the latest version of the two patches. 
 Bugzilla database has also been updated already. The ID for the two 
 patches are #55897 and #56279. Please refer to messages below for 
 details on both of the patches.
 
 Quick test result on modern dual socket Intel platform (Linux Kernel
 3.13.9) SO_REUSEPORT patch (bugzilla #55897)
 1.   Prefork MPM: 1 listen statement: 2.16X throughput improvement; 
 2 listen statements: 2.33X throughput improvement
 2.   Worker MPM: 1 listen statement: 10% throughput improvement; 2 
 listen statements: 35% throughput improvement
 3.   Event MPM: 1 listen statement: 13% throughput improvement; 2 
 listen statements: throughput parity, but 62% response time reduction 
 (with patch, 38% response time as original

RE: [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT support

2014-05-30 Thread Lu, Yingqi
Thank you very much!

Thanks,
Yingqi

-Original Message-
From: Jim Jagielski [mailto:j...@jagunet.com] 
Sent: Friday, May 30, 2014 7:07 AM
To: dev@httpd.apache.org
Subject: Re: [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT 
support

Thx! Let me review. My plan is to fold into trunk this weekend.

On May 16, 2014, at 2:53 PM, Lu, Yingqi yingqi...@intel.com wrote:

 Hi Jim,
 
 Thanks very much for clarifying this with me. I added #ifdef in the code to 
 check _SC_NPROCESSORS_ONLN in the so_reuseport patch. Bucket patch does not 
 use this parameter so that it remains the same.
 
 Attached are the two most recent patches. I already updated the bugzilla 
 #55897 as well.
 
 Thanks,
 Yingqi
 
 -Original Message-
 From: Jim Jagielski [mailto:j...@jagunet.com]
 Sent: Thursday, May 15, 2014 7:53 AM
 To: dev@httpd.apache.org
 Subject: Re: [PATCH ASF bugzilla# 55897]prefork_mpm patch with 
 SO_REUSEPORT support
 
 I was thinking more about the sysconf(_SC_NPROCESSORS_ONLN) stuff...
 We could either check for that during config/build or protect it with a 
 #ifdef in the code (and create some logging so the admin knows if it was found 
 or not).
 
 On May 14, 2014, at 11:59 AM, Lu, Yingqi yingqi...@intel.com wrote:
 
 Hi Jim,
 
 Thanks very much for your email.
 
 In the SO_REUSEPORT patch, SO_REUSEPORT support is checked inside listen.c 
 file. If the feature is not supported on the OS (for example, Linux kernel < 
 3.9), it will fall back to the original behavior. 
 
 In the bucket patch, there is no need to check the params. With single 
 listen statement, it is just the default behavior. 
 
 Please let me know if this answers your question.
 
 Thanks,
 Yingqi
 
 -Original Message-
 From: Jim Jagielski [mailto:j...@jagunet.com]
 Sent: Wednesday, May 14, 2014 6:57 AM
 To: dev@httpd.apache.org
 Subject: Re: [PATCH ASF bugzilla# 55897]prefork_mpm patch with 
 SO_REUSEPORT support
 
 This is very cool!
 
 mod_status assumes that sysconf() exists, but do we need to do a config 
 check on the params we use in these patches?
 We look OK on Linux, FreeBSD and OSX...
 
 I'm +1 on folding into trunk.
 
 On May 13, 2014, at 7:55 PM, Lu, Yingqi yingqi...@intel.com wrote:
 
 Dear All,
 
 During the last couple weeks, I spent some time extending the original two 
 patches from prefork MPM only to all three Linux MPMs (prefork, worker and 
 event). Attached is the latest version of the two patches. Bugzilla 
 database has also been updated already. The ID for the two patches are 
 #55897 and #56279. Please refer to messages below for details on both of 
 the patches.
 
 Quick test result on modern dual socket Intel platform (Linux Kernel
 3.13.9) SO_REUSEPORT patch (bugzilla #55897)
 1.   Prefork MPM: 1 listen statement: 2.16X throughput improvement; 2 
 listen statements: 2.33X throughput improvement
 2.   Worker MPM: 1 listen statement: 10% throughput improvement; 2 
 listen statements: 35% throughput improvement
 3.   Event MPM: 1 listen statement: 13% throughput improvement; 2 
 listen statements: throughput parity, but 62% response time reduction (with 
 patch, 38% response time as original SW)
 
 Bucket patch (bugzilla #56279, only impact multiple listen statement case)
 1.   Prefork MPM: 2 listen statements: 42% throughput improvement
 2.   Worker MPM: 2 listen statements: 7% throughput improvement
 
 In all the above testing cases, significant response time reductions are 
 observed, even with throughput improvements.
 
 Please let me know your feedback and comments.
 
 Thanks,
 Yingqi
 Software and workloads used in performance tests may have been optimized 
 for performance only on Intel microprocessors. Performance tests, such as 
 SYSmark and MobileMark, are measured using specific computer systems, 
 components, software, operations and functions. Any change to any of those 
 factors may cause the results to vary. You should consult other information 
 and performance tests to assist you in fully evaluating your contemplated 
 purchases, including the performance of that product when combined with 
 other products.
 
 From: Lu, Yingqi [mailto:yingqi...@intel.com]
 Sent: Monday, March 17, 2014 1:41 PM
 To: dev@httpd.apache.org
 Subject: RE: FW: [PATCH ASF bugzilla# 55897]prefork_mpm patch with 
 SO_REUSEPORT support
 
 Dear all,
 
 Based on the feedback we received, we modified this patch. Here is the most 
 recent version. We also modified the Bugzilla database (Bugzilla# 55897 for 
 SO_REUSEPORT patch; Bugzilla# 56279 for bucket patch).
 
 Below are the changes we made into this new version:
 
 According to Yann Ylavic's and other people's comments, we separated the 
 original patch into two patches: one with and one without SO_REUSEPORT. 
 The SO_REUSEPORT patch does not change the original listen socket; it just 
 duplicates it into multiple ones. Since the 
 listen sockets are identical, there is no need to change
RE: [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT support

2014-05-28 Thread Lu, Yingqi
Hi All,

I just want to ping again on these two patches. 

Thanks,
Yingqi

-Original Message-
From: Lu, Yingqi [mailto:yingqi...@intel.com] 
Sent: Friday, May 23, 2014 9:03 AM
To: dev@httpd.apache.org
Subject: RE: [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT 
support

Dear All,

These two patches are the modified version based on the original single patch I 
sent out early January. Since I sent out the original patch, I have gotten a lot 
of positive feedback and suggestions from this community. Thanks very much for 
your support and help!

The two most recent patches have addressed all the suggestions I got from you 
so far and they include changes for all three Linux MPMs. They have also been 
fully tested. Please refer to the change details and test results in the email 
below.

If there are no other comments/suggestions, should we fold them into the trunk?

Thanks,
Yingqi

-Original Message-
From: Lu, Yingqi [mailto:yingqi...@intel.com]
Sent: Friday, May 16, 2014 11:53 AM
To: dev@httpd.apache.org
Subject: RE: [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT 
support

Hi Jim,

Thanks very much for clarifying this with me. I added #ifdef in the code to 
check _SC_NPROCESSORS_ONLN in the so_reuseport patch. Bucket patch does not use 
this parameter so that it remains the same.

Attached are the two most recent patches. I already updated the bugzilla #55897 
as well.

Thanks,
Yingqi

-Original Message-
From: Jim Jagielski [mailto:j...@jagunet.com]
Sent: Thursday, May 15, 2014 7:53 AM
To: dev@httpd.apache.org
Subject: Re: [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT 
support

I was thinking more about the sysconf(_SC_NPROCESSORS_ONLN) stuff...
We could either check for that during config/build or protect it with a #ifdef 
in the code (and create some logging so the admin knows if it was found or not).

On May 14, 2014, at 11:59 AM, Lu, Yingqi yingqi...@intel.com wrote:

 Hi Jim,
 
 Thanks very much for your email.
 
 In the SO_REUSEPORT patch, SO_REUSEPORT support is checked inside listen.c 
 file. If the feature is not supported on the OS (for example, Linux kernel < 
 3.9), it will fall back to the original behavior. 
 
 In the bucket patch, there is no need to check the params. With single listen 
 statement, it is just the default behavior. 
 
 Please let me know if this answers your question.
 
 Thanks,
 Yingqi
 
 -Original Message-
 From: Jim Jagielski [mailto:j...@jagunet.com]
 Sent: Wednesday, May 14, 2014 6:57 AM
 To: dev@httpd.apache.org
 Subject: Re: [PATCH ASF bugzilla# 55897]prefork_mpm patch with 
 SO_REUSEPORT support
 
 This is very cool!
 
 mod_status assumes that sysconf() exists, but do we need to do a config check 
 on the params we use in these patches?
 We look OK on Linux, FreeBSD and OSX...
 
 I'm +1 on folding into trunk.
 
 On May 13, 2014, at 7:55 PM, Lu, Yingqi yingqi...@intel.com wrote:
 
 Dear All,
 
 During the last couple weeks, I spent some time extending the original two 
 patches from prefork MPM only to all three Linux MPMs (prefork, worker and 
 event). Attached is the latest version of the two patches. Bugzilla database 
 has also been updated already. The ID for the two patches are #55897 and 
 #56279. Please refer to messages below for details on both of the patches.
 
 Quick test result on modern dual socket Intel platform (Linux Kernel
 3.13.9) SO_REUSEPORT patch (bugzilla #55897)
 1.   Prefork MPM: 1 listen statement: 2.16X throughput improvement; 2 
 listen statements: 2.33X throughput improvement
 2.   Worker MPM: 1 listen statement: 10% throughput improvement; 2 
 listen statements: 35% throughput improvement
 3.   Event MPM: 1 listen statement: 13% throughput improvement; 2 listen 
 statements: throughput parity, but 62% response time reduction (with patch, 
 38% response time as original SW)
 
 Bucket patch (bugzilla #56279, only impact multiple listen statement case)
 1.   Prefork MPM: 2 listen statements: 42% throughput improvement
 2.   Worker MPM: 2 listen statements: 7% throughput improvement
 
 In all the above testing cases, significant response time reductions are 
 observed, even with throughput improvements.
 
 Please let me know your feedback and comments.
 
 Thanks,
 Yingqi
 Software and workloads used in performance tests may have been optimized for 
 performance only on Intel microprocessors. Performance tests, such as 
 SYSmark and MobileMark, are measured using specific computer systems, 
 components, software, operations and functions. Any change to any of those 
 factors may cause the results to vary. You should consult other information 
 and performance tests to assist you in fully evaluating your contemplated 
 purchases, including the performance of that product when combined with 
 other products.
 
 From: Lu, Yingqi [mailto:yingqi...@intel.com]
 Sent: Monday, March 17, 2014 1:41 PM
 To: dev@httpd.apache.org
 Subject: RE: FW: [PATCH ASF

RE: [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT support

2014-05-23 Thread Lu, Yingqi
Dear All,

These two patches are the modified version based on the original single patch I 
sent out early January. Since I sent out the original patch, I have gotten a lot 
of positive feedback and suggestions from this community. Thanks very much for 
your support and help!

The two most recent patches have addressed all the suggestions I got from you 
so far and they include changes for all three Linux MPMs. They have also been 
fully tested. Please refer to the change details and test results in the email 
below.

If there are no other comments/suggestions, should we fold them into the trunk?

Thanks,
Yingqi

-Original Message-
From: Lu, Yingqi [mailto:yingqi...@intel.com] 
Sent: Friday, May 16, 2014 11:53 AM
To: dev@httpd.apache.org
Subject: RE: [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT 
support

Hi Jim,

Thanks very much for clarifying this with me. I added #ifdef in the code to 
check _SC_NPROCESSORS_ONLN in the so_reuseport patch. Bucket patch does not use 
this parameter so that it remains the same.

Attached are the two most recent patches. I already updated the bugzilla #55897 
as well.

Thanks,
Yingqi

-Original Message-
From: Jim Jagielski [mailto:j...@jagunet.com]
Sent: Thursday, May 15, 2014 7:53 AM
To: dev@httpd.apache.org
Subject: Re: [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT 
support

I was thinking more about the sysconf(_SC_NPROCESSORS_ONLN) stuff...
We could either check for that during config/build or protect it with a #ifdef 
in the code (and create some logging so the admin knows if it was found or not).

On May 14, 2014, at 11:59 AM, Lu, Yingqi yingqi...@intel.com wrote:

 Hi Jim,
 
 Thanks very much for your email.
 
 In the SO_REUSEPORT patch, SO_REUSEPORT support is checked inside listen.c 
 file. If the feature is not supported on the OS (for example, Linux kernel < 
 3.9), it will fall back to the original behavior. 
 
 In the bucket patch, there is no need to check the params. With single listen 
 statement, it is just the default behavior. 
 
 Please let me know if this answers your question.
 
 Thanks,
 Yingqi
 
 -Original Message-
 From: Jim Jagielski [mailto:j...@jagunet.com]
 Sent: Wednesday, May 14, 2014 6:57 AM
 To: dev@httpd.apache.org
 Subject: Re: [PATCH ASF bugzilla# 55897]prefork_mpm patch with 
 SO_REUSEPORT support
 
 This is very cool!
 
 mod_status assumes that sysconf() exists, but do we need to do a config check 
 on the params we use in these patches?
 We look OK on Linux, FreeBSD and OSX...
 
 I'm +1 on folding into trunk.
 
 On May 13, 2014, at 7:55 PM, Lu, Yingqi yingqi...@intel.com wrote:
 
 Dear All,
 
 During the last couple weeks, I spent some time extending the original two 
 patches from prefork MPM only to all three Linux MPMs (prefork, worker and 
 event). Attached is the latest version of the two patches. Bugzilla database 
 has also been updated already. The ID for the two patches are #55897 and 
 #56279. Please refer to messages below for details on both of the patches.
 
 Quick test result on modern dual socket Intel platform (Linux Kernel
 3.13.9) SO_REUSEPORT patch (bugzilla #55897)
 1.   Prefork MPM: 1 listen statement: 2.16X throughput improvement; 2 
 listen statements: 2.33X throughput improvement
 2.   Worker MPM: 1 listen statement: 10% throughput improvement; 2 
 listen statements: 35% throughput improvement
 3.   Event MPM: 1 listen statement: 13% throughput improvement; 2 listen 
 statements: throughput parity, but 62% response time reduction (with patch, 
 38% response time as original SW)
 
 Bucket patch (bugzilla #56279, only impact multiple listen statement case)
 1.   Prefork MPM: 2 listen statements: 42% throughput improvement
 2.   Worker MPM: 2 listen statements: 7% throughput improvement
 
 In all the above testing cases, significant response time reductions are 
 observed, even with throughput improvements.
 
 Please let me know your feedback and comments.
 
 Thanks,
 Yingqi
 Software and workloads used in performance tests may have been optimized for 
 performance only on Intel microprocessors. Performance tests, such as 
 SYSmark and MobileMark, are measured using specific computer systems, 
 components, software, operations and functions. Any change to any of those 
 factors may cause the results to vary. You should consult other information 
 and performance tests to assist you in fully evaluating your contemplated 
 purchases, including the performance of that product when combined with 
 other products.
 
 From: Lu, Yingqi [mailto:yingqi...@intel.com]
 Sent: Monday, March 17, 2014 1:41 PM
 To: dev@httpd.apache.org
 Subject: RE: FW: [PATCH ASF bugzilla# 55897]prefork_mpm patch with 
 SO_REUSEPORT support
 
 Dear all,
 
 Based on the feedback we received, we modified this patch. Here is the most 
 recent version. We also modified the Bugzilla database (Bugzilla# 55897 for 
 SO_REUSEPORT patch; Bugzilla# 56279 for bucket patch

RE: [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT support

2014-05-20 Thread Lu, Yingqi
Dear All,

I am checking if there are any questions/comments on both of the patches? Also, 
I am wondering what the process of patch acceptance is.

Please let me know if there is anything I can do to help.

Thanks,
Yingqi

-Original Message-
From: Lu, Yingqi [mailto:yingqi...@intel.com] 
Sent: Friday, May 16, 2014 11:53 AM
To: dev@httpd.apache.org
Subject: RE: [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT 
support

Hi Jim,

Thanks very much for clarifying this with me. I added #ifdef in the code to 
check _SC_NPROCESSORS_ONLN in the so_reuseport patch. Bucket patch does not use 
this parameter so that it remains the same.

Attached are the two most recent patches. I already updated the bugzilla #55897 
as well.

Thanks,
Yingqi

-Original Message-
From: Jim Jagielski [mailto:j...@jagunet.com]
Sent: Thursday, May 15, 2014 7:53 AM
To: dev@httpd.apache.org
Subject: Re: [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT 
support

I was thinking more about the sysconf(_SC_NPROCESSORS_ONLN) stuff...
We could either check for that during config/build or protect it with a #ifdef 
in the code (and create some logging so the admin knows if it was found or not).

On May 14, 2014, at 11:59 AM, Lu, Yingqi yingqi...@intel.com wrote:

 Hi Jim,
 
 Thanks very much for your email.
 
 In the SO_REUSEPORT patch, SO_REUSEPORT support is checked inside listen.c 
 file. If the feature is not supported on the OS (for example, Linux kernel < 
 3.9), it will fall back to the original behavior. 
 
 In the bucket patch, there is no need to check the params. With single listen 
 statement, it is just the default behavior. 
 
 Please let me know if this answers your question.
 
 Thanks,
 Yingqi
 
 -Original Message-
 From: Jim Jagielski [mailto:j...@jagunet.com]
 Sent: Wednesday, May 14, 2014 6:57 AM
 To: dev@httpd.apache.org
 Subject: Re: [PATCH ASF bugzilla# 55897]prefork_mpm patch with 
 SO_REUSEPORT support
 
 This is very cool!
 
 mod_status assumes that sysconf() exists, but do we need to do a config check 
 on the params we use in these patches?
 We look OK on Linux, FreeBSD and OSX...
 
 I'm +1 on folding into trunk.
 
 On May 13, 2014, at 7:55 PM, Lu, Yingqi yingqi...@intel.com wrote:
 
 Dear All,
 
 During the last couple weeks, I spent some time extending the original two 
 patches from prefork MPM only to all three Linux MPMs (prefork, worker and 
 event). Attached is the latest version of the two patches. Bugzilla database 
 has also been updated already. The ID for the two patches are #55897 and 
 #56279. Please refer to messages below for details on both of the patches.
 
 Quick test result on modern dual socket Intel platform (Linux Kernel
 3.13.9) SO_REUSEPORT patch (bugzilla #55897)
 1.   Prefork MPM: 1 listen statement: 2.16X throughput improvement; 2 
 listen statements: 2.33X throughput improvement
 2.   Worker MPM: 1 listen statement: 10% throughput improvement; 2 
 listen statements: 35% throughput improvement
 3.   Event MPM: 1 listen statement: 13% throughput improvement; 2 listen 
 statements: throughput parity, but 62% response time reduction (with patch, 
 38% response time as original SW)
 
 Bucket patch (bugzilla #56279, only impact multiple listen statement case)
 1.   Prefork MPM: 2 listen statements: 42% throughput improvement
 2.   Worker MPM: 2 listen statements: 7% throughput improvement
 
 In all the above testing cases, significant response time reductions are 
 observed, even with throughput improvements.
 
 Please let me know your feedback and comments.
 
 Thanks,
 Yingqi
 Software and workloads used in performance tests may have been optimized for 
 performance only on Intel microprocessors. Performance tests, such as 
 SYSmark and MobileMark, are measured using specific computer systems, 
 components, software, operations and functions. Any change to any of those 
 factors may cause the results to vary. You should consult other information 
 and performance tests to assist you in fully evaluating your contemplated 
 purchases, including the performance of that product when combined with 
 other products.
 
 From: Lu, Yingqi [mailto:yingqi...@intel.com]
 Sent: Monday, March 17, 2014 1:41 PM
 To: dev@httpd.apache.org
 Subject: RE: FW: [PATCH ASF bugzilla# 55897]prefork_mpm patch with 
 SO_REUSEPORT support
 
 Dear all,
 
 Based on the feedback we received, we modified this patch. Here is the most 
 recent version. We also modified the Bugzilla database(Bugzilla# 55897 for 
 SO_REUSEPORT patch; Bugzilla# 56279 for bucket patch).
 
 Below are the changes we made into this new version:
 
 According to Yann Ylavic's and other people's comments, we split the original 
 patch into two separate patches: one with SO_REUSEPORT and one without. The 
 SO_REUSEPORT patch does not change the original listen sockets; it simply 
 duplicates the original socket into multiple ones. Since the listen 
 sockets are identical, there is no need

RE: [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT support

2014-05-16 Thread Lu, Yingqi
Now, I just got the message I sent out yesterday, what a long delay!

Actually, this morning I sent out several messages checking for other 
feedback/comments/questions. I guess those messages are still in the air since 
I have not received any copy of those mails.

All, please let me know your feedback/comments/questions. Thanks very much for 
your time reviewing the patches.

Thanks,
Yingqi 


RE: [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT support

2014-05-16 Thread Lu, Yingqi
Dear all,

I sent this message twice in the morning, but I did not receive anything back. 
I am resending this one more time, just trying to gather your 
feedback/comments/questions on both of the patches. 

Sorry for the duplications.

Thanks,
Yingqi

RE: [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT support

2014-05-16 Thread Lu, Yingqi
Dear all,

Any other feedback/comments/questions? 

Thanks,
Yingqi



RE: [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT support

2014-05-14 Thread Lu, Yingqi
Hi Jim,

Thanks very much for your email.

In the SO_REUSEPORT patch, SO_REUSEPORT support is checked inside the listen.c 
file. If the feature is not supported by the OS (for example, Linux kernel < 
3.9), it falls back to the original behavior. 
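The runtime check and fallback described above can be sketched as follows. This is a hypothetical illustration in Python, not httpd's actual listen.c code; the function names are invented for the sketch:

```python
import socket

def so_reuseport_supported():
    """Probe at run time whether the kernel accepts SO_REUSEPORT."""
    # Older builds may not even define the constant in their headers.
    if not hasattr(socket, "SO_REUSEPORT"):
        return False
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
        return True
    except OSError:
        # The running kernel (e.g. Linux < 3.9) rejects the option.
        return False
    finally:
        s.close()

def make_listeners(port, num_buckets):
    """Duplicate the listener num_buckets times when SO_REUSEPORT is
    available; otherwise fall back to a single listening socket."""
    n = num_buckets if so_reuseport_supported() else 1
    socks = []
    for _ in range(n):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        if n > 1:
            # Every duplicate must set the option before bind().
            s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
        s.bind(("127.0.0.1", port))
        # Pin the ephemeral port so all duplicates share one port.
        port = s.getsockname()[1]
        s.listen(8)
        socks.append(s)
    return socks
```

On a kernel without the option, `make_listeners` silently degrades to one socket, which mirrors the fallback behavior described in the mail.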

In the bucket patch, there is no need to check the params. With a single listen 
statement, it is simply the default behavior. 

Please let me know if this answers your question.

Thanks,
Yingqi


RE: [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT support

2014-04-07 Thread Lu, Yingqi
Thanks, Graham! I am looking forward to hearing your feedback.

Thanks,
Yingqi

From: Graham Leggett [mailto:minf...@sharp.fm]
Sent: Monday, April 07, 2014 12:08 PM
To: dev@httpd.apache.org
Subject: Re: [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT 
support

On 07 Apr 2014, at 6:21 PM, Lu, Yingqi yingqi...@intel.com wrote:

I just want to ping again on the modifications we made on both of the patches 
[bugzilla #55897 and bugzilla #56279]. Please let us know your comments and 
feedback.

I am reattaching the patch files here in case you missed original email.

I am very keen to review this, but have no time right now - sorry about that. 
From my side I am keen to review it soon.

Regards,
Graham

RE: FW: [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT support

2014-03-31 Thread Lu, Yingqi
Dear all,

I just want to ping again on the modifications we made on both of the patches 
[bugzilla #55897 and bugzilla #56279]. Please let us know your comments and 
feedback.

Thanks,
Yingqi

From: Lu, Yingqi [mailto:yingqi...@intel.com]
Sent: Monday, March 24, 2014 1:56 PM
To: dev@httpd.apache.org
Subject: RE: FW: [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT 
support

Dear all,

I just want to ping on both of these patches to see if there is anything we 
can do to help get them accepted.

Your feedback and comments are very much appreciated.

Thanks,
Yingqi Lu

From: Lu, Yingqi [mailto:yingqi...@intel.com]
Sent: Monday, March 17, 2014 1:41 PM
To: dev@httpd.apache.org
Subject: RE: FW: [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT 
support

Dear all,

Based on the feedback we received, we modified this patch. Here is the most 
recent version. We also modified the Bugzilla database(Bugzilla# 55897 for 
SO_REUSEPORT patch; Bugzilla# 56279 for bucket patch).

Below are the changes we made into this new version:

According to Yann Ylavic's and other people's comments, we split the original 
patch into two separate patches: one with SO_REUSEPORT and one without. The 
SO_REUSEPORT patch does not change the original listen sockets; it simply 
duplicates the original socket into multiple ones. Since the listen sockets are 
identical, there is no need to change the idle_server_maintenance function. The 
bucket patch (without SO_REUSEPORT), on the other hand, breaks down the 
original listen record (when there are multiple listen sockets) into multiple 
listen-record linked lists. In this case, idle_server_maintenance is 
implemented at the bucket level to handle imbalanced traffic among the 
different listen sockets/children buckets. In the bucket patch, the polling in 
the child process is removed, since each child listens to only one socket.
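The split described above can be sketched roughly as follows. This is a hypothetical illustration (the helper name is invented; httpd's real code operates on C listen-record linked lists, modelled here as Python lists):

```python
def split_listen_records(listen_records, num_buckets):
    """Break a flat list of listen records into one list per children
    bucket, so each child can block on a single socket without polling."""
    buckets = [[] for _ in range(num_buckets)]
    for i, record in enumerate(listen_records):
        # Round-robin assignment; with one socket per bucket this is a
        # one-to-one mapping.
        buckets[i % num_buckets].append(record)
    return buckets
```

With two listen statements and two buckets, each bucket ends up holding exactly one listen record, which is the case the bucket patch targets.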

According to Arkadiusz Miskiewicz's comment, we now detect SO_REUSEPORT 
support at run time.

According to Jeff Trawick's comments,
1. We generate the patches against the httpd trunk.
2. We tested the current patches and they do not impact the event and worker 
MPMs. If the current patches are accepted, we would be happy to extend them to 
the other Linux-based MPMs. There are not many code changes, but they require 
some time to set up the workload for testing.
3. We removed unnecessary comments and changed APLOGNO(). We also changed some 
of the parameter/variable/function names to better represent their meanings.
4. There should be no built-in limitations in the SO_REUSEPORT patch. For the 
bucket patch, the only limitation is that the number of children buckets scales 
only up to MAX_SPAWN_RATE. If more than 32 (the current default MAX_SPAWN_RATE) 
listen statements are specified in httpd.conf, the number of buckets is capped 
at 32. The reason is that we implement idle_server_maintenance at the bucket 
level, so each bucket's own max spawn rate is set to 
MAX_SPAWN_RATE/num_buckets.
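The bucket cap and per-bucket spawn-rate arithmetic described above amount to the following sketch (MAX_SPAWN_RATE = 32 is the default mentioned in the mail; the function name is invented):

```python
MAX_SPAWN_RATE = 32  # default global spawn rate mentioned in the mail

def bucket_plan(num_listen_statements):
    """Cap the number of children buckets at MAX_SPAWN_RATE and divide
    the global spawn rate evenly among buckets, as the patch does for
    its per-bucket idle_server_maintenance."""
    num_buckets = min(num_listen_statements, MAX_SPAWN_RATE)
    per_bucket_spawn_rate = MAX_SPAWN_RATE // num_buckets
    return num_buckets, per_bucket_spawn_rate
```

For example, two listen statements give two buckets with a spawn rate of 16 each, while 40 listen statements are capped at 32 buckets with a spawn rate of 1 each.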

Again, thanks very much for all the comments and feedback. Please let us know 
if any further changes are needed for the patches to be accepted.

Thanks,
Yingqi Lu



From: Lu, Yingqi
Sent: Tuesday, March 04, 2014 10:43 AM
To: dev@httpd.apache.org
Subject: RE: FW: [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT 
support

Hi Jeff,

Thanks very much for your time reviewing the patch! We will modify the patch 
according to your comments and repost it here.

Thanks,
Yingqi

From: Jeff Trawick [mailto:traw...@gmail.com]
Sent: Tuesday, March 04, 2014 10:08 AM
To: Apache HTTP Server Development List
Subject: Re: FW: [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT 
support

On Tue, Mar 4, 2014 at 10:35 AM, Lu, Yingqi yingqi...@intel.com wrote:
Hi All,

I just want to ping again on this patch to gather your feedback and comments. 
Please refer to the messages below for patch details.

If you need any additional information/supporting data, please let us know as 
well.

Yeah, it has been on my todo list, but I don't have time to give an in depth 
review at the moment.  Here are a few questions/comments.  (And you'll have to 
deal with the fact that it is unnecessarily tedious for me to evaluate 
higher-level considerations if there are a lot of distractions, such as the 
code comments below ;)  But others are of course free to chime in.)

The patch should be against httpd trunk.  It probably won't take much time for 
you to create that patch and confirm basic operation.

What is the impact to other MPMs, even if they shouldn't use or don't have the 
necessary code to use SO_REUSEPORT at this time?

Have you tried the event MPM?

Is there a way for the admin to choose this behavior?  Most won't care, but 
everyone's behavior is changed AFAICT.

Are there built-in limitations in this patch that we should be aware of?  E.g., 
the free slot/spawn rate

RE: FW: [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT support

2014-03-24 Thread Lu, Yingqi
Dear all,

I just want to ping on both of these two patches to see if there is anything we 
can do to help them get accepted.

Your feedbacks and comments are very much appreciated.

Thanks,
Yingqi Lu

From: Lu, Yingqi [mailto:yingqi...@intel.com]
Sent: Monday, March 17, 2014 1:41 PM
To: dev@httpd.apache.org
Subject: RE: FW: [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT 
support

Dear all,

Based on the feedback we received, we modified this patch. Here is the most 
recent version. We also modified the Bugzilla database(Bugzilla# 55897 for 
SO_REUSEPORT patch; Bugzilla# 56279 for bucket patch).

Below are the changes we made into this new version:

According to Yann Ylavic and other people's comments, we separate the original 
patch between with and without SO_REUSEPORT into two separated patches. The 
SO_REUSEPORT patch does not change the original listen sockets, it just 
duplicate the original one into multiple ones. Since the listen sockets are 
identical, there is no need to change the idle_server_maintenance function. The 
bucket patch (without SO_REUSEPORT), on the other hand, it breaks down the 
original listen record (if there are multiple listen socks) to multiple listen 
record linked lists. In this case, idle_server_maintenance is implemented at 
bucket level to address the situation that imbalanced traffic occurs among 
different listen sockets/children buckets. In the bucket patch, the polling in 
the child process is removed since each child only listens to 1 sock.

According to Arkadiusz Miskiewicz's comment, we make the detection of 
SO_REUSEPORT at run time.

According to Jeff Trawick's comments,
1. We generate the patches against the httpd trunk.
2. We tested the current patches and they do not impact event and worker mpms. 
If current patches can be accepted, we would be happy to extend them to other 
Linux based mpms. There are not much code changes, but require some time to 
setup the workload to test.
3. We removed unnecessary comments and changed APLOGNO(). We also changed some 
of the parameter/variable/function names to better represent their meanings.
4. There should be no built-in limitations for the SO_REUSEPORT patch. For the 
bucket patch, the only thing is that the number of children buckets only scales 
up to MAX_SPAWN_RATE. If there are more than 32 (the current default 
MAX_SPAWN_RATE) listen statements specified in httpd.conf, the number of 
buckets will be fixed at 32. The reason is that we implement 
idle_server_maintenance at the bucket level; each bucket's own max_spawn_rate 
is set to MAX_SPAWN_RATE/num_buckets.
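
The sizing rule in point 4 can be sketched as follows; the helper names are 
illustrative, not the patch's actual identifiers:

```c
/* Illustrative sketch of the bucket sizing rule described above:
 * the number of buckets is capped at MAX_SPAWN_RATE, and each
 * bucket gets an equal share of the global spawn rate. */
#define MAX_SPAWN_RATE 32

static int clamp_num_buckets(int num_listen_statements)
{
    return num_listen_statements > MAX_SPAWN_RATE
           ? MAX_SPAWN_RATE : num_listen_statements;
}

static int bucket_spawn_rate(int num_buckets)
{
    return MAX_SPAWN_RATE / num_buckets;
}
```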

Again, thanks very much for all the comments and feedback. Please let us know 
if there are more changes we need to complete to make them accepted.

Thanks,
Yingqi Lu



From: Lu, Yingqi
Sent: Tuesday, March 04, 2014 10:43 AM
To: dev@httpd.apache.org
Subject: RE: FW: [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT 
support

Hi Jeff,

Thanks very much for your time reviewing the patch! We will modify the patch 
according to your comments and repost it here.

Thanks,
Yingqi

From: Jeff Trawick [mailto:traw...@gmail.com]
Sent: Tuesday, March 04, 2014 10:08 AM
To: Apache HTTP Server Development List
Subject: Re: FW: [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT 
support

On Tue, Mar 4, 2014 at 10:35 AM, Lu, Yingqi yingqi...@intel.com wrote:
Hi All,

I just want to ping again on this patch to gather your feedback and comments. 
Please refer to the messages below for patch details.

If you need any additional information/supporting data, please let us know as 
well.

Yeah, it has been on my todo list, but I don't have time to give an in depth 
review at the moment.  Here are a few questions/comments.  (And you'll have to 
deal with the fact that it is unnecessarily tedious for me to evaluate 
higher-level considerations if there are a lot of distractions, such as the 
code comments below ;)  But others are of course free to chime in.)

The patch should be against httpd trunk.  It probably won't take much time for 
you to create that patch and confirm basic operation.

What is the impact to other MPMs, even if they shouldn't use or don't have the 
necessary code to use SO_REUSEPORT at this time?

Have you tried the event MPM?

Is there a way for the admin to choose this behavior?  Most won't care, but 
everyone's behavior is changed AFAICT.

Are there built-in limitations in this patch that we should be aware of?  E.g., 
the free slot/spawn rate changes suggest to me that there can't be more than 
1025 children???

We should assume for now that there's no reason this couldn't be committed to 
trunk after review/rework, so make sure it is as close as you can get it to 
what you think is the final form.

For the configure-time check for 3.9 kernel: I think we'd also use 
AC_TRY_COMPILE at configure time to confirm that the SO_REUSEPORT definition is 
available

RE: [PATCH ASF bugzilla# 55897] prefork_mpm patch with SO_REUSEPORT support

2014-03-17 Thread Lu, Yingqi
Hi Tim,

Thanks for your email. 

SO_REUSEPORT feature is enabled on Linux kernel 3.9 and newer. The feature is 
defined at /usr/include/asm-generic/socket.h. 

With the old kernel, the definition is there, but is commented out. 
/*#define SO_REUSEPORT  15*/

The section of code below is just to define SO_REUSEPORT if it is not already 
being defined. The code after this is to detect if SO_REUSEPORT is supported or 
not.
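
That runtime probe can be sketched as a stand-alone C function (a minimal 
illustration, not the patch's actual code; the fallback value 15 matches 
Linux's asm-generic/socket.h, and the constant may be present in the headers 
even when the running kernel rejects it, so the reliable check is to try 
setsockopt() on a scratch socket):

```c
#include <sys/socket.h>
#include <unistd.h>

#ifndef SO_REUSEPORT
#define SO_REUSEPORT 15  /* Linux value from asm-generic/socket.h */
#endif

/* Returns 1 if the running kernel accepts SO_REUSEPORT, else 0. */
static int so_reuseport_supported(void)
{
    int one = 1;
    int supported;
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    if (fd < 0)
        return 0;
    supported = setsockopt(fd, SOL_SOCKET, SO_REUSEPORT,
                           &one, sizeof(one)) == 0;
    close(fd);
    return supported;
}
```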

I am using x86_64 systems with Linux. If anyone finds something different on 
your system, please let me know.

Thanks,
Yingqi

-Original Message-
From: Tim Bannister [mailto:is...@jellybaby.net] 
Sent: Monday, March 17, 2014 2:31 PM
To: dev@httpd.apache.org
Subject: Re: [PATCH ASF bugzilla# 55897] prefork_mpm patch with SO_REUSEPORT 
support

I'm afraid I don't understand this particular part from 
httpd_trunk_so_reuseport.patch:

#ifndef SO_REUSEPORT
#define SO_REUSEPORT 15
#endif

Why 15? Is this going to be portable across different platforms?

-- 
Tim Bannister - is...@jellybaby.net



RE: SO_REUSEPORT in the children processes

2014-03-10 Thread Lu, Yingqi
Hi Yann,

As we pointed out in our original discussion thread, we dropped the child 
process implementation due to the kernel defect with changing the number of 
open sockets. Now, we quickly tested this child process implementation 
(prefork) with our workload on a modern Xeon dual-socket server and the most 
recent 3.13.6 kernel again.

1. We do not see connection reset errors during the run (ramp up and steady 
state) any more. However, we noticed that our workload cannot ramp down and 
terminate on its own with this child process implementation. This never 
happened before with either the out-of-box httpd or the parent process 
implementation. After manually forcing the workload to shut down, we saw these 
connection reset errors again.

2. During the run, we noticed that there are tons of "read timed out" errors. 
These errors not only happen when the system is highly utilized; they even 
happen when the system is only 10% utilized. The response time was high.

3. Compared to the parent process implementation, we found the child process 
implementation results in significantly higher (up to 10X) response time (the 
"read timed out" errors are not counted in the result) at different CPU 
utilization levels. At peak performance level, it has ~22% less throughput, 
with tons of connection reset errors in addition to the "read timed out" 
errors. The parent process implementation does not have errors.

We think the findings above may be caused by: 1. too many open sockets created 
by the children processes; and/or 2. the parent process not having control; 
and/or 3. the kernel defect not being fully addressed. On the other hand, the 
parent implementation keeps a minimal number of open sockets, which takes 
advantage of SO_REUSEPORT and keeps the environment more controllable.

We are currently modifying the code based on all the feedback from the 
community, using the original parent process implementation. This also 
includes separating the original patch into versions with and without 
SO_REUSEPORT support, which makes the SO_REUSEPORT patch cleaner and simpler.

Thanks,
Yingqi (people at work also call me Lucy:))


From: Yann Ylavic [mailto:ylavic@gmail.com]
Sent: Friday, March 07, 2014 9:07 AM
To: httpd
Subject: SO_REUSEPORT in the children processes

Hi all,
the patch about SO_REUSEPORT proposed by Yingqi Lu in [1] and discussed in [2] 
uses listeners buckets to address a defect [3] in the current linux 
implementation (his patch goes beyond SO_REUSEPORT though, and suggests a new 
MPM even when the option is not available).
Should this defect be addressed by linux folks, the event/worker/prefork MPMs 
could take full advantage of the option (linux-3.9+) with quite simple 
modifications of the current code.
I'm proposing here the corresponding patch.

The idea is to re-create and re-bind/listen the parent's listener sockets for 
each child process, when starting, before dropping privileges.
For this, the patch introduces a new ap_reopen_listeners() function (meant to 
be called by each child) to do the job on the inherited listeners. It does 
nothing unless HAVE_SO_REUSEPORT is defined.

The advantage of this approach is that no accept mutex is needed anymore (each 
child has its own sockets), hence the SAFE_ACCEPT macros can do nothing when 
HAVE_SO_REUSEPORT is defined.
The new (incoming) connections are evenly distributed across the children for 
all the listeners (as assured by Linux when SO_REUSEPORT is used).
I'm proposing the patch here so that everyone can figure out whether 
SO_REUSEPORT per se needs its own MPM or not (once/if the defect is fixed).
The option seems to be quite easily pluggable to existing MPMs (no ABI change), 
and I don't see an advantage to not using it when available (and working).

Also, FreeBSD has an implementation of SO_REUSEPORT, however I couldn't find 
whether it has the same scheduling guarantee or not (at least I guess the 
accept mutex can be avoided too).

Regarding the linux kernel defect, is someone aware of a fix/work on that in 
the latest versions?

Finally, about the accept mutex: mpm event seems to work well without any, so 
why would prefork and worker need one (both poll() all the listeners in a 
single thread, while other children can do the same)?
The patch follows and is attached.
It can already be tested with a workaround against the defect: don't let 
perform_idle_server_maintenance() create/stop children after startup (eg. 
StartServers, ServerLimit, Min/MaxSpareServers using the same value).

Thoughts, feedbacks welcome.

Regards,
Yann.

[1] https://issues.apache.org/bugzilla/show_bug.cgi?id=55897#c7
[2] 
http://mail-archives.apache.org/mod_mbox/httpd-bugs/201312.mbox/%3cbug-55897-7...@https.issues.apache.org/bugzilla/%3E
[3] http://lwn.net/Articles/542629/ and http://lwn.net/Articles/542738/

Index: server/mpm/event/event.c
===
--- server/mpm/event/event.c(revision 1575322)
+++ server/mpm/event/event.c(working copy)

RE: [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT support

2014-03-06 Thread Lu, Yingqi
Hi Yann,

Yes, without SO_REUSEPORT, each child accepts connections from a single 
listening socket only. To address the situation of imbalanced traffic among 
different sockets/listen statements, the patch makes each bucket do its own 
idle server maintenance. For example, if we have two listen statements 
defined, one very busy and the other almost idle, the patch creates two 
buckets, each listening to 1 IP:port. The busy bucket ends up with lots of 
children, while the idle bucket only maintains a minimum number of children 
equal to 1/2 of the min idle servers defined in the httpd.conf file.

Thanks,
Yingqi

From: Yann Ylavic [mailto:ylavic@gmail.com]
Sent: Thursday, March 06, 2014 5:49 AM
To: httpd
Subject: Re: [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT 
support

On Wed, Mar 5, 2014 at 6:38 PM, Lu, Yingqi yingqi...@intel.com wrote:

1. If I understand correctly (please correct me if not), do you suggest 
duplicating the listen socks inside the child process with SO_REUSEPORT 
enabled? Yes, I agree this would be a cleaner implementation and I actually 
tried that before. However, I encountered the connection reset error since 
the number of child processes was changing. I googled online and found it 
actually being discussed at http://lwn.net/Articles/542629/.

Actually I found that article, but expected the defect was solved since 
then...
This looks like a thorn in the side of MPMs in general,
but couldn't find any pointer to a fix, do you know if there is some progress 
on this in the latest linux kernel?

For testing purposes (until then?), you could also configure MPM prefork to not 
create/terminate children processes once started (using the same value for 
StartServers and ServerLimit, still MaxRequestsPerChild 0).
It could be interesting to see how SO_REUSEPORT scales in these optimal 
conditions (no lock, full OS round-robin on all listeners).
For this you would have to use your former patch (duplicating listeners in each 
child process), and do nothing in SAFE_ACCEPT when HAVE_SO_REUSEPORT.
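
Such a fixed-children test configuration might look like this (values are 
illustrative):

```apache
<IfModule prefork.c>
    StartServers         64
    MinSpareServers      64
    MaxSpareServers      64
    ServerLimit          64
    MaxClients           64
    MaxRequestsPerChild   0
</IfModule>
```

With all values equal, perform_idle_server_maintenance() has no reason to 
create or stop children after startup.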
Also, SO_REUSEPORT exists on (and even comes from) FreeBSD if I am not 
mistaken, but it seems that there is no round-robin guarantee for it, is there? 
Could this patch also take advantage of BSD's SO_REUSEPORT implementation?


2. Then, I decided to do the socket duplication in the parent process. The goal 
of this change is to extend the CPU thread scalability with the big thread 
count system. Therefore, I just very simply defined 
number_of_listen_buckets=total_number_active_thread/8, and each listen bucket 
has a dedicated listener. I do not want to over duplicate the socket; 
otherwise, it would create too many child processes at the beginning. One 
listen bucket should have at least one child process to start with. However, 
this is only my understanding and it may not be correct and complete. If you 
have other ideas, please share with us. Feedbacks and comments are very welcome 
here :)

The listeners buckets make sense with SO_REUSEPORT given the defect, I hope 
this is temporary.


3. I am struggling with myself as well on whether we should put with and 
without SO_REUSEPORT into two different patches. The only reason I put them 
together is that they both use the concept of listen buckets. If you think it 
would make more sense to separate them into two patches, I can certainly do 
that. Also, I am a little bit confused about your comment "On the other hand, 
each child is dedicated, won't one have to multiply the configured ServerLimit 
by the number of Listen to achieve the same (maximum theoretical) scalability 
with regard to all the listeners?". Can you please explain a little bit more on 
this? Really appreciate it.

Sorry to have not been clear enough (nay at all).

I'm referring to the following code.

In prefork.c::make_child(), each child is assigned a listener like this (before 
fork()ing) :

child_listen = mpm_listen[bucket[slot]];

and then each child will use child_listen as listeners list.

The duplicated listeners array (mpm_listen) is built by the following (new) 
function :

/* This function is added for the patch. This function duplicates
 * open_listeners, alloc_listener() and re-call make_sock() for the
 * duplicated listeners. In this function, the newly created sockets
 * will bind and listen*/
AP_DECLARE(apr_status_t) ap_post_config_listeners(server_rec *s, apr_pool_t *p,
  int num_buckets) {
mpm_listen = apr_palloc(p, sizeof(ap_listen_rec*) * num_buckets);
int i;
ap_listen_rec *lr;
/* duplicate from alloc_listener() for the additional listen record*/
lr = ap_listeners;
for (i = 0; i < num_buckets; i++) {
#ifdef HAVE_SO_REUSEPORT
ap_listen_rec *templr;
ap_listen_rec *last = NULL;
while (lr) {
templr = ap_duplicate_listener(p, lr);

ap_apply_accept_filter(p, templr, s

RE: [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT support

2014-03-05 Thread Lu, Yingqi
Hi Yann,

Thanks very much for your email.

1. If I understand correctly (please correct me if not), do you suggest 
duplicating the listen socks inside the child process with SO_REUSEPORT 
enabled? Yes, I agree this would be a cleaner implementation and I actually 
tried that before. However, I encountered the connection reset error since 
the number of child processes was changing. I googled online and found it 
actually being discussed at http://lwn.net/Articles/542629/.

2. Then, I decided to do the socket duplication in the parent process. The goal 
of this change is to extend the CPU thread scalability with the big thread 
count system. Therefore, I just very simply defined 
number_of_listen_buckets=total_number_active_thread/8, and each listen bucket 
has a dedicated listener. I do not want to over duplicate the socket; 
otherwise, it would create too many child processes at the beginning. One 
listen bucket should have at least one child process to start with. However, 
this is only my understanding and it may not be correct and complete. If you 
have other ideas, please share with us. Feedbacks and comments are very welcome 
here :)

3. I am struggling with myself as well on whether we should put with and 
without SO_REUSEPORT into two different patches. The only reason I put them 
together is that they both use the concept of listen buckets. If you think it 
would make more sense to separate them into two patches, I can certainly do 
that. Also, I am a little bit confused about your comment "On the other hand, 
each child is dedicated, won't one have to multiply the configured ServerLimit 
by the number of Listen to achieve the same (maximum theoretical) scalability 
with regard to all the listeners?". Can you please explain a little bit more on 
this? Really appreciate it.

This is our first patch to the open source and Apache community. We are still 
on the learning curve about a lot of things. Your feedback and comments really 
help us!

Please let me know if you have any further questions.

Thanks,
Yingqi


From: Yann Ylavic [mailto:ylavic@gmail.com]
Sent: Wednesday, March 05, 2014 5:04 AM
To: httpd
Subject: Re: [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT 
support

Hi Yingqi,

I'm a bit confused about the patch, mainly because it seems to handle the same 
way both with and without SO_REUSEPORT available, while SO_REUSEPORT could 
(IMHO) be handled in children only (a less intrusive way).
With SO_REUSEPORT, I would have expected the accept mutex to be useless since, 
if I understand the option correctly, multiple processes/threads can accept() 
simultaneously provided they use their own socket (each one bound/listening on 
the same addr:port).
Couldn't then each child duplicate the listeners (ie. new 
socket+bind(SO_REUSEPORT)+listen), before switching UIDs, and then poll() all 
of them without synchronisation (accept() is probably not an option for timeout 
reasons), and then get fair scheduling from the OS (for all the listeners)?
Is the lock still needed because the duplicated listeners are inherited from 
the parent process?
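
The per-child scheme described above (each process binding its own socket to 
the same addr:port with SO_REUSEPORT set before bind()) can be illustrated 
with plain POSIX sockets. This is a stand-alone sketch, not httpd/APR code, 
and the round-robin behavior only applies on Linux 3.9+:

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#ifndef SO_REUSEPORT
#define SO_REUSEPORT 15
#endif

/* Open an independent TCP listener on 127.0.0.1:port with SO_REUSEPORT
 * set before bind(); with port 0 the kernel picks an ephemeral port.
 * Several processes (or calls) can do this for the same port, and the
 * kernel distributes incoming connections among them (Linux >= 3.9).
 * Returns the fd, or -1 on error. */
static int open_reuseport_listener(unsigned short port)
{
    int one = 1;
    struct sockaddr_in sa;
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    if (fd < 0)
        return -1;
    memset(&sa, 0, sizeof(sa));
    sa.sin_family = AF_INET;
    sa.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    sa.sin_port = htons(port);
    if (setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one)) < 0
        || bind(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0
        || listen(fd, 128) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}

/* Report the port a listener actually bound to. */
static unsigned short bound_port(int fd)
{
    struct sockaddr_in sa;
    socklen_t len = sizeof(sa);

    if (getsockname(fd, (struct sockaddr *)&sa, &len) < 0)
        return 0;
    return ntohs(sa.sin_port);
}
```

A second call with the port returned by bound_port() succeeds where a plain 
bind() would fail with EADDRINUSE, which is what removes the need for the 
accept mutex.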

Without SO_REUSEPORT, if I understand correctly still, each child will poll() a 
single listener to avoid the serialized accept.
On the other hand, each child is dedicated, won't one have to multiply the 
configured ServerLimit by the number of Listen to achieve the same (maximum 
theoretical) scalability with regard to all the listeners?
I don't pretend it is a good or bad thing, just figuring out what could then be 
a rule to size the configuration (eg. MaxClients/ServerLimit/#cores/#Listen).
It seems to me that the patches with and without SO_REUSEPORT should be 
separate ones, but I may be missing something.
Also, but this is not related to this patch particularly (addressed to who 
knows), it's unclear to me why an accept mutex is needed at all.
Multiple processes poll()ing the same inherited socket is safe but not multiple 
ones? Is that an OS issue? Process wide only? Still (in)valid in latest OSes?

Thanks for the patch anyway, it looks promising.
Regards,
Yann.

On Sat, Jan 25, 2014 at 12:25 AM, Lu, Yingqi yingqi...@intel.com wrote:
Dear All,

Our analysis of Apache httpd 2.4.7 prefork mpm, on 32 and 64 thread Intel Xeon 
2600 series systems, using an open source three tier social networking web 
server workload, revealed performance scaling issues.  In the current 
software, a single listen statement (listen 80) provides better scalability 
due to un-serialized accept. However, when the system is under very high load, 
this can lead to a large number of child processes stuck in the D state.


On the other hand, the serialized accept approach cannot scale with the high 
load either.  In our analysis, a 32-thread system, with 2 listen statements 
specified, could scale to just 70% utilization, and a 64-thread system, with 
signal listen statement specified (listen 80, 4 network interfaces), could 
scale to only 60% utilization

RE: [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT support

2014-03-05 Thread Lu, Yingqi
Hi Bill,

Thanks very much for your email and I am really happy that I got lots of very 
good feedbacks on the email list.

The patch was created only for Linux Prefork mpm so that it should not impact 
winnt_mpm. I may misunderstand you here, but do you mean in order to adopt the 
patch, we need to extend it for winnt_mpm?

Regarding the test results, what we provided was based on RHEL 6.2 (server 
version) with kernel 3.10.4. We measured the throughput as operations/sec as 
well as the response time, defined as the time from a request being sent by the 
client until it gets the response back. It is a three tier webserver workload. 
We measured the throughput on the frontend webserver tier (Apache httpd with 
Prefork + PHP as libphp5.so under httpd/modules).

Thanks,
Yingqi 

-Original Message-
From: William A. Rowe Jr. [mailto:wmr...@gmail.com] 
Sent: Wednesday, March 05, 2014 9:58 PM
To: dev@httpd.apache.org
Subject: Re: [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT 
support

Yingqi,

as one of the 'Windows folks' here, your idea is very intriguing, and I'm sorry 
that other issues have distracted me from giving it the attention it deserves.

If you want to truly re-architect the MPM, by all means, propose it as another 
MPM module.  If it isn't adopted here, please don't hesitate to offer it to 
interested users as separate source (although I hope we find a way to adopt it.)

The idea of different MPM's was that they were swappable.  MPM foo isn't MPM 
bar.  E.g., worker, prefork, event each have their own tree.
 Likewise, there is nothing stopping us from having 2, or 3 MPM's on Windows, 
and there is nothing stopping us from stating that there is a prerequisite on a 
particular MPM of Linux 3.1 kernels or Windows
2008+.

The Windows build system hasn't been so flexible, but this can be remediated 
with cmake, as folks have spent many hours to accomplish.
I understand you are probably relying on functions authored entirely for the 
winnt_mpm, and we can re-factor those on trunk out to the os/win32/ directory 
so that MPM's may share them.

The definition of the word prefork is a single-threaded process which handles 
a request.  Please don't misuse the phrase, and without reviewing your code, 
I'll presume that is what you meant.

I don't doubt your results of benchmarking, but please make note that only 
Windows Server OS's can actually be used to perform any benchmarks.  Any 
'desktop' enterprise, professional or home editions are deliberately hobbled, 
and IMHO the project should make no accommodation for vendor stupidity.

In terms of benchmarking, I don't know how you measured, but if you can peg a 
machine at 95% total utilization yet httpd shows itself consuming only 70% or 
60%, that means it is kernel-bound.  That is usually a good thing, that the app 
is operating optimally and is only constrained by the architecture.

I think I understand where you are going with reuseport.  That doesn't equate 
to the Unix OS's... they can distribute the already opened listener to an 
unlimited number of forks.  On windows, we also distribute the listener through 
a write/stdin channel to the child process.  What doesn't work well is for 
parallel windows children to share certain resources such as the error log, 
access log etc.  But we can contend with that issue.  What we can't contend 
with is what 3rd party modules have chosen to do, and almost any patch you 
offer is not going to be suitable for binary compatibility with 3rd party httpd 
2.4 modules compiled for windows, so your patch presented for the 2.4 branch is 
rejected.

That said, we should endeavor to solve this for 2.6 (or 3.0 or whatever we call 
the 'next httpd').  We are all out of fresh ideas, so proposals such as yours 
are a welcome sight!!!

Finally, please do have patience, large patches require time for us to digest, 
and we have limited amounts of that resource.  As I mention, adding a whole new 
MPM directory to trunk, alone, should meet very little resistance for any 
architectures.

Thank you for your posts, and please do not feel ignored.  There are a handful 
of people active and we all have many details to attend to.

Yours,

Bill

On Fri, Jan 24, 2014 at 5:25 PM, Lu, Yingqi yingqi...@intel.com wrote:
 Dear All,



 Our analysis of Apache httpd 2.4.7 prefork mpm, on 32 and 64 thread 
 Intel Xeon 2600 series systems, using an open source three tier social 
 networking web server workload, revealed performance scaling issues.  
 In current software single listen statement (listen 80) provides 
 better scalability due to un-serialized accept. However, when system 
 is under very high load, this can lead to big number of child 
 processes stuck in D state. On the other hand, the serialized accept approach 
 cannot scale with the high load either.
 In our analysis, a 32-thread system, with 2 listen statements 
 specified, could scale to just 70% utilization, and a 64-thread 
 system, with signal listen

RE: FW: [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT support

2014-03-04 Thread Lu, Yingqi
Hi All,

I just want to ping again on this patch to gather your feedback and comments. 
Please refer to the messages below for patch details.

If you need any additional information/supporting data, please let us know as 
well.

Thanks,
Yingqi


From: Lu, Yingqi
Sent: Monday, February 24, 2014 10:37 AM
To: dev@httpd.apache.org
Subject: RE: FW: [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT 
support

Thanks very much, Jeff!

Thanks,
Lucy

From: Jeff Trawick [mailto:traw...@gmail.com]
Sent: Monday, February 24, 2014 10:36 AM
To: Apache HTTP Server Development List
Subject: Re: FW: [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT 
support

On Mon, Feb 24, 2014 at 1:20 PM, Lu, Yingqi yingqi...@intel.com wrote:
Hi All,

I just want to ping again on this patch to gather your feedback and comments. 
Please refer to the messages below for patch details.

Thanks,
Yingqi

Hi Yingqi,

I'm sorry that nobody has responded yet.  I'll try to do so very soon.


From: Lu, Yingqi
Sent: Monday, February 10, 2014 11:13 AM

To: dev@httpd.apache.org
Subject: RE: [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT 
support

I am reattaching the patch in case you missed the original email.

Thanks,
Yingqi

From: Lu, Yingqi
Sent: Monday, February 10, 2014 11:09 AM
To: dev@httpd.apache.org
Subject: RE: [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT 
support

Hi All,

I just want to ping again on this patch to see if there are any feedback and 
comments. This is our first patch to the Apache community. Please let us know 
if there is anything we can do to help you test and comment the patch.

Thanks,
Yingqi

From: Lu, Yingqi
Sent: Friday, January 24, 2014 3:26 PM
To: dev@httpd.apache.org
Subject: [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT support

Dear All,

Our analysis of Apache httpd 2.4.7 prefork mpm, on 32 and 64 thread Intel Xeon 
2600 series systems, using an open source three tier social networking web 
server workload, revealed performance scaling issues.  In the current 
software, a single listen statement (listen 80) provides better scalability 
due to un-serialized accept. However, when the system is under very high load, 
this can lead to a large number of child processes stuck in the D state. On 
the other hand, the serialized accept approach cannot scale with the high load 
either.  In our analysis, a 32-thread system, with 2 listen statements 
specified, could scale to just 70% utilization, and a 64-thread system, with a 
single listen statement specified (listen 80, 4 network interfaces), could 
scale to only 60% utilization.

Based on those findings, we created a prototype patch for prefork mpm which 
extends performance and thread utilization. In Linux kernels 3.9 and newer, 
SO_REUSEPORT is enabled. This feature allows multiple sockets to listen to the 
same IP:port and automatically round-robins connections. We use this feature 
to create multiple duplicated listener records of the original one and 
partition the child processes into buckets. Each bucket listens to 1 IP:port. 
In the case of an old kernel which does not have SO_REUSEPORT enabled, we 
modified the multiple listen statement case by creating 1 listen record for 
each listen statement and partitioning the child processes into different 
buckets. Each bucket listens to 1 IP:port.
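
The partitioning described above amounts to spreading child slots evenly over 
the listener buckets. A minimal sketch (the function name is illustrative; the 
patch itself indexes mpm_listen[bucket[slot]]):

```c
/* Assign child slot 'slot' to one of 'num_buckets' listener buckets,
 * round-robin, so each bucket gets an even share of the children and
 * each child accepts on exactly one IP:port. */
static int bucket_for_slot(int slot, int num_buckets)
{
    return slot % num_buckets;
}
```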

Quick tests of the patch, running the same workload, demonstrated a 22% 
throughput increase with a 32-thread system and 2 listen statements (Linux 
kernel 3.10.4). With the older kernel (Linux kernel 3.8.8, without 
SO_REUSEPORT), a 10% performance gain was measured. With a single listen 
statement (listen 80) configuration, we observed over 2X performance 
improvement on modern dual socket Intel platforms (Linux kernel 3.10.4). We 
also observed a big reduction in response time, in addition to the throughput 
improvement gained in our tests [1].

Following the feedback from the bugzilla website where we originally submitted 
the patch, we removed the dependency of APR change to simplify the patch 
testing process. Thanks Jeff Trawick for his good suggestion! We are also 
actively working on extending the patch to worker and event MPMs, as a next 
step. Meanwhile, we would like to gather comments from all of you on the 
current prefork patch. Please take some time test it and let us know how it 
works in your environment.

This is our first patch to the Apache community. Please help us review it and 
let us know if there is anything we might revise to improve it. Your feedback 
is very much appreciated.

Configuration:
<IfModule prefork.c>
ListenBacklog 105384
ServerLimit 105000
MaxClients 1024
MaxRequestsPerChild 0
StartServers 64
MinSpareServers 8
MaxSpareServers 16
</IfModule>

[1] Software and workloads used in performance tests may have been optimized 
for performance only on Intel microprocessors

RE: FW: [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT support

2014-03-04 Thread Lu, Yingqi
Hi Jeff,

Thanks very much for your time reviewing the patch! We will modify the patch 
according to your comments and repost it here.

Thanks,
Yingqi

From: Jeff Trawick [mailto:traw...@gmail.com]
Sent: Tuesday, March 04, 2014 10:08 AM
To: Apache HTTP Server Development List
Subject: Re: FW: [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT 
support

On Tue, Mar 4, 2014 at 10:35 AM, Lu, Yingqi yingqi...@intel.com wrote:
Hi All,

I just want to ping again on this patch to gather your feedback and comments. 
Please refer to the messages below for patch details.

If you need any additional information/supporting data, please let us know as 
well.

Yeah, it has been on my todo list, but I don't have time to give an in depth 
review at the moment.  Here are a few questions/comments.  (And you'll have to 
deal with the fact that it is unnecessarily tedious for me to evaluate 
higher-level considerations if there are a lot of distractions, such as the 
code comments below ;)  But others are of course free to chime in.)

The patch should be against httpd trunk.  It probably won't take much time for 
you to create that patch and confirm basic operation.

What is the impact to other MPMs, even if they shouldn't use or don't have the 
necessary code to use SO_REUSEPORT at this time?

Have you tried the event MPM?

Is there a way for the admin to choose this behavior?  Most won't care, but 
everyone's behavior is changed AFAICT.

Are there built-in limitations in this patch that we should be aware of?  E.g., 
the free slot/spawn rate changes suggest to me that there can't be more than 
1025 children???

We should assume for now that there's no reason this couldn't be committed to 
trunk after review/rework, so make sure it is as close as you can get it to 
what you think is the final form.

For the configure-time check for the 3.9 kernel: I think we'd also use 
AC_TRY_COMPILE at configure time to confirm that the SO_REUSEPORT definition is 
available, and not enable it if the system includes don't define it.  (Does 
that cause a problem for any significant number of people?)

Don't mention the patch in the patch ;) (e.g., This function is added for the 
patch.)

Incomplete comments on style/syntax issues:

* mixing declarations and statements (e.g., duplr->next = 0; apr_socket_t 
*temps;) isn't supported by all compilers and is distracting when reviewing
* suitable identifier names (e.g., fix the global variable flag and whatever 
else isn't appropriate; ap_post_config_listeners should be renamed to indicate 
what it does)
* APLOGNO(9) and comments about fixing it: instead put APLOGNO() and 
don't add reminders in comments
* this doesn't seem portable: int free_slots[MAX_SPAWN_RATE/num_buckets];
and so on


Thanks,
Yingqi



RE: FW: [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT support

2014-03-04 Thread Lu, Yingqi
This is actually a very good point. We should perform the SO_REUSEPORT 
detection at runtime.

I will put this comment into the change list.

Thanks very much, Arkadiusz!

Yingqi

-Original Message-
From: Arkadiusz Miśkiewicz [mailto:ar...@maven.pl] 
Sent: Tuesday, March 04, 2014 10:51 AM
To: dev@httpd.apache.org
Subject: Re: FW: [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT 
support

On Tuesday 04 of March 2014, Jeff Trawick wrote:

 For the configure-time check for 3.9 kernel: I think we'd also use 
 AC_TRY_COMPILE at configure time to confirm that the SO_REUSEPORT 
 definition is available, and not enable it if the system includes 
 doesn't define it.  (Does that cause a problem for any significant 
 number of
 people?)

What if I build Apache on a newer kernel and run it on an older one? (Not that 
uncommon with rpm/deb/other binary packaging systems.)

It shouldn't be hard to detect SO_REUSEPORT support at runtime, right?

--
Arkadiusz Miśkiewicz, arekm / maven.pl


RE: FW: [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT support

2014-02-28 Thread Lu, Yingqi
Hi Jeff,

I am just checking whether you have had a chance to look at this.

Thanks,
Lucy


FW: [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT support

2014-02-24 Thread Lu, Yingqi
Hi All,

I just want to ping again on this patch to gather your feedback and comments. 
Please refer to the messages below for patch details.

Thanks,
Yingqi

From: Lu, Yingqi
Sent: Monday, February 10, 2014 11:13 AM
To: dev@httpd.apache.org
Subject: RE: [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT 
support

I am reattaching the patch in case you missed the original email.

Thanks,
Yingqi

From: Lu, Yingqi
Sent: Monday, February 10, 2014 11:09 AM
To: dev@httpd.apache.org
Subject: RE: [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT 
support

Hi All,

I just want to ping again on this patch to see if there is any feedback or 
comments. This is our first patch to the Apache community. Please let us know 
if there is anything we can do to help you test and comment on the patch.

Thanks,
Yingqi

From: Lu, Yingqi
Sent: Friday, January 24, 2014 3:26 PM
To: dev@httpd.apache.org
Subject: [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT support

Dear All,

Our analysis of the Apache httpd 2.4.7 prefork MPM, on 32- and 64-thread Intel 
Xeon 2600 series systems, using an open source three-tier social networking 
web server workload, revealed performance scaling issues. In the current 
software, a single listen statement (Listen 80) provides better scalability 
due to unserialized accept. However, when the system is under very high load, 
this can leave a large number of child processes stuck in the D state. On the 
other hand, the serialized accept approach cannot scale under high load 
either. In our analysis, a 32-thread system with 2 listen statements specified 
could scale to just 70% utilization, and a 64-thread system with a single 
listen statement specified (Listen 80, 4 network interfaces) could scale to 
only 60% utilization.

Based on those findings, we created a prototype patch for the prefork MPM 
which extends performance and thread utilization. In Linux kernels 3.9 and 
newer, SO_REUSEPORT is available. This feature allows multiple sockets to 
listen on the same IP:port, with the kernel automatically round-robining 
connections among them. We use this feature to create multiple duplicated 
listener records of the original one and partition the child processes into 
buckets; each bucket listens on 1 IP:port. On older kernels without 
SO_REUSEPORT, we modified the multiple-listen-statement case by creating 1 
listen record for each listen statement and partitioning the child processes 
into different buckets; each bucket listens on 1 IP:port.

Quick tests of the patch, running the same workload, demonstrated a 22% 
throughput increase on a 32-thread system with 2 listen statements (Linux 
kernel 3.10.4). With an older kernel (Linux kernel 3.8.8, without 
SO_REUSEPORT), a 10% performance gain was measured. With a single listen 
statement (Listen 80) configuration, we observed over 2X performance 
improvement on modern dual-socket Intel platforms (Linux kernel 3.10.4). In 
addition to the throughput improvement, we also observed a large reduction in 
response time in our tests [1].

Following the feedback on the Bugzilla entry where we originally submitted the 
patch, we removed the dependency on an APR change to simplify patch testing. 
Thanks to Jeff Trawick for his good suggestion! As a next step, we are also 
actively working on extending the patch to the worker and event MPMs. 
Meanwhile, we would like to gather comments from all of you on the current 
prefork patch. Please take some time to test it and let us know how it works 
in your environment.

This is our first patch to the Apache community. Please help us review it and 
let us know if there is anything we might revise to improve it. Your feedback 
is very much appreciated.

Configuration:
<IfModule prefork.c>
    ListenBacklog 105384
    ServerLimit 105000
    MaxClients 1024
    MaxRequestsPerChild 0
    StartServers 64
    MinSpareServers 8
    MaxSpareServers 16
</IfModule>

[1] Software and workloads used in performance tests may have been optimized 
for performance only on Intel microprocessors. Performance tests, such as 
SYSmark and MobileMark, are measured using specific computer systems, 
components, software, operations and functions. Any change to any of those 
factors may cause the results to vary. You should consult other information 
and performance tests to assist you in fully evaluating your contemplated 
purchases, including the performance of that product when combined with other 
products.

Thanks,
Yingqi


unified.diff.httpd-2.4.7.patch
Description: unified.diff.httpd-2.4.7.patch


RE: FW: [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT support

2014-02-24 Thread Lu, Yingqi
Thanks very much, Jeff!

Thanks,
Lucy

From: Jeff Trawick [mailto:traw...@gmail.com]
Sent: Monday, February 24, 2014 10:36 AM
To: Apache HTTP Server Development List
Subject: Re: FW: [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT 
support

On Mon, Feb 24, 2014 at 1:20 PM, Lu, Yingqi yingqi...@intel.com wrote:
Hi All,

I just want to ping again on this patch to gather your feedback and comments. 
Please refer to the messages below for patch details.

Thanks,
Yingqi

Hi Yingqi,

I'm sorry that nobody has responded yet.  I'll try to do so very soon.



RE: [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT support

2014-02-10 Thread Lu, Yingqi
Hi All,

I just want to ping again on this patch to see if there is any feedback or 
comments. This is our first patch to the Apache community. Please let us know 
if there is anything we can do to help you test and comment on the patch.

Thanks,
Yingqi



RE: [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT support

2014-02-10 Thread Lu, Yingqi
I am reattaching the patch in case you missed the original email.

Thanks,
Yingqi


unified.diff.httpd-2.4.7.patch
Description: unified.diff.httpd-2.4.7.patch


RE: [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT support

2014-01-31 Thread Lu, Yingqi
Hi All,

I just want to check whether there is any feedback or comments on this patch.

Thanks,
Yingqi

From: Lu, Yingqi
Sent: Friday, January 24, 2014 3:26 PM
To: dev@httpd.apache.org
Subject: [PATCH ASF bugzilla# 55897]prefork_mpm patch with SO_REUSEPORT support

Dear All,

Our analysis of Apache httpd 2.4.7 prefork mpm, on 32 and 64 thread Intel Xeon 
2600 series systems, using an open source three tier social networking web 
server workload, revealed performance scaling issues.  In current software 
single listen statement (listen 80) provides better scalability due to 
un-serialized accept. However, when system is under very high load, this can 
lead to big number of child processes stuck in D state. On the other hand, the 
serialized accept approach cannot scale with the high load either.  In our 
analysis, a 32-thread system, with 2 listen statements specified, could scale 
to just 70% utilization, and a 64-thread system, with signal listen statement 
specified (listen 80, 4 network interfaces), could scale to only 60% 
utilization.

Based on those findings, we created a prototype patch for prefork mpm which 
extends performance and thread utilization. In Linux kernel newer than 3.9, 
SO_REUSEPORT is enabled. This feature allows multiple sockets listen to the 
same IP:port and automatically round robins connections. We use this feature to 
create multiple duplicated listener records of the original one and partition 
the child processes into buckets. Each bucket listens to 1 IP:port. In case of 
old kernel which does not have the SO_REUSEPORT enabled, we modified the 
multiple listen statement case by creating 1 listen record for each listen 
statement and partitioning the child processes into different buckets. Each 
bucket listens to 1 IP:port.

Quick tests of the patch, running the same workload, demonstrated a 22% 
throughput increase with 32-threads system and 2 listen statements (Linux 
kernel 3.10.4). With the older kernel (Linux Kernel 3.8.8, without 
SO_REUSEPORT), 10% performance gain was measured. With single listen statement 
(listen 80) configuration, we observed over 2X performance improvements on 
modern dual socket Intel platforms (Linux Kernel 3.10.4). We also observed big 
reduction in response time, in addition to the throughput improvement gained in 
our tests 1.

Following the feedback on the Bugzilla report where we originally submitted 
the patch, we removed the dependency on an APR change to simplify patch 
testing. Thanks to Jeff Trawick for his good suggestion! As a next step, we 
are also actively working on extending the patch to the worker and event 
MPMs. Meanwhile, we would like to gather comments from all of you on the 
current prefork patch. Please take some time to test it and let us know how 
it works in your environment.

This is our first patch to the Apache community. Please help us review it and 
let us know if there is anything we might revise to improve it. Your feedback 
is very much appreciated.

Configuration:
<IfModule prefork.c>
ListenBacklog 105384
ServerLimit 105000
MaxClients 1024
MaxRequestsPerChild 0
StartServers 64
MinSpareServers 8
MaxSpareServers 16
</IfModule>

[1] Software and workloads used in performance tests may have been optimized for 
performance only on Intel microprocessors. Performance tests, such as SYSmark 
and MobileMark, are measured using specific computer systems, components, 
software, operations and functions. Any change to any of those factors may 
cause the results to vary. You should consult other information and performance 
tests to assist you in fully evaluating your contemplated purchases, including 
the performance of that product when combined with other products.

Thanks,
Yingqi


[PATCH ASF bugzilla# 55897] prefork_mpm patch with SO_REUSEPORT support

2014-01-24 Thread Lu, Yingqi
Dear All,

Our analysis of the Apache httpd 2.4.7 prefork MPM, on 32- and 64-thread 
Intel Xeon 2600 series systems, using an open-source three-tier social 
networking web server workload, revealed performance scaling issues. In the 
current software, a single Listen statement (Listen 80) provides better 
scalability due to unserialized accept; however, when the system is under 
very high load, this can leave a large number of child processes stuck in the 
D state. On the other hand, the serialized accept approach cannot scale under 
high load either. In our analysis, a 32-thread system with 2 Listen 
statements specified could scale to just 70% utilization, and a 64-thread 
system with a single Listen statement specified (Listen 80, 4 network 
interfaces) could scale to only 60% utilization.

Based on those findings, we created a prototype patch for the prefork MPM 
which improves performance and thread utilization. Linux kernels 3.9 and 
newer support SO_REUSEPORT. This feature allows multiple sockets to listen on 
the same IP:port, with the kernel round-robining incoming connections among 
them. We use this feature to create multiple duplicated listener records of 
the original one and to partition the child processes into buckets, each 
bucket listening on one IP:port. On older kernels without SO_REUSEPORT, we 
modified the multiple-Listen-statement case by creating one listen record for 
each Listen statement and partitioning the child processes into different 
buckets, each again listening on one IP:port.

Quick tests of the patch, running the same workload, demonstrated a 22% 
throughput increase on a 32-thread system with 2 Listen statements (Linux 
kernel 3.10.4). With an older kernel (Linux kernel 3.8.8, without 
SO_REUSEPORT), a 10% performance gain was measured. With the single Listen 
statement configuration (Listen 80), we observed over 2X performance 
improvement on modern dual-socket Intel platforms (Linux kernel 3.10.4). In 
addition to the throughput improvements, we also observed a large reduction 
in response time in our tests [1].

Following the feedback on the Bugzilla report where we originally submitted 
the patch, we removed the dependency on an APR change to simplify patch 
testing. Thanks to Jeff Trawick for his good suggestion! As a next step, we 
are also actively working on extending the patch to the worker and event 
MPMs. Meanwhile, we would like to gather comments from all of you on the 
current prefork patch. Please take some time to test it and let us know how 
it works in your environment.

This is our first patch to the Apache community. Please help us review it and 
let us know if there is anything we might revise to improve it. Your feedback 
is very much appreciated.

Configuration:
<IfModule prefork.c>
ListenBacklog 105384
ServerLimit 105000
MaxClients 1024
MaxRequestsPerChild 0
StartServers 64
MinSpareServers 8
MaxSpareServers 16
</IfModule>

[1] Software and workloads used in performance tests may have been optimized for 
performance only on Intel microprocessors. Performance tests, such as SYSmark 
and MobileMark, are measured using specific computer systems, components, 
software, operations and functions. Any change to any of those factors may 
cause the results to vary. You should consult other information and performance 
tests to assist you in fully evaluating your contemplated purchases, including 
the performance of that product when combined with other products.

Thanks,
Yingqi


Attachment: unified.diff.httpd-2.4.7.patch