Stuart, thanks for your detailed response.

>>  I find it unlikely that Apache is your bottleneck,
>> especially with a service involving MySQL.
>> How have you come to this conclusion?

Apache is the entry point to our service, and I ran a benchmark
with ab to see how it handles concurrent requests in a timely
fashion.  At a concurrency level of 50, the average "time per
request" climbed from under a second to about 5 seconds.
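
For reference, the benchmark was along these lines (the URL and
request counts here are placeholders, not the exact command I ran):

  ab -n 1000 -c 50 http://service.example.com/endpoint.php
  ab -n 1000 -c 50 -k http://service.example.com/endpoint.php   # same run, with keep-alive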

On the other hand, MySQL's slow query log stayed empty, with
long_query_time = 1.
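
For context, the relevant settings in my.cnf are roughly the
following (the log file path is just an example):

  [mysqld]
  slow_query_log      = 1
  long_query_time     = 1
  slow_query_log_file = /var/log/mysql/slow.log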

Our MySQL database holds fewer than 200 records, spread across
normalized tables.  Yes, the queries use joins, but the overall
performance is OK.

>> As far as keep-alive goes, how frequently will individual
>> clients be accessing the service?

There are only "a few" clients that call the service.  These clients
are PHP-driven web pages. Each page has its own unique ClickID
and a set of other unique parameters per user visit.  These pages send
these parameters to the service using php-curl, and expect a generated
response to be returned.  That's why I'm saying each request and
response is unique.
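
Roughly, each page does something like this (the endpoint URL and
parameter names are made up for illustration):

  <?php
  // Illustrative sketch only: URL and parameter names are placeholders.
  $ch = curl_init('http://service.example.com/generate.php');
  curl_setopt($ch, CURLOPT_POST, true);
  curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query(array(
      'click_id' => $clickId,   // unique per visit
      'param_a'  => $paramA,
  )));
  curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
  $response = curl_exec($ch);
  curl_close($ch);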

Whenever a user visits a web page, there is a call to the
web service.  At the moment we don't know the number of concurrent
visits; we're looking for a way to measure that in Apache.
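
One option we're considering is Apache's mod_status (assuming the
module is enabled; the exact access directives depend on the Apache
version):

  ExtendedStatus On
  <Location /server-status>
      SetHandler server-status
      Require local
  </Location>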

Is there a way to see whether requests are reusing a previously
kept-alive HTTP connection?  The same client will keep sending
requests to the service, and I'm curious whether Apache will serve
them over the already-open connection or create a new one.
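
The closest thing I've found so far is mod_log_config's %k format
string, which is documented as the number of keep-alive requests
handled on the connection; something like this (untested on our
setup, and the format name is arbitrary):

  LogFormat "%h %l %u %t \"%r\" %>s %b keepalive=%k" with_keepalive
  CustomLog logs/access_log with_keepalive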

>> If you are using joins to pull in extra data (i.e. IDs to a name
>> or similar) look at using Memcache for those, but make sure
>> that when they're updated in the DB they're also updated in Memcache.

With Memcache or Redis, I'm going to add a caching layer between
MySQL and PHP to store the denormalized data.
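
The idea is a simple read-through cache, roughly like this (the key
prefix, TTL, and helper function are just placeholders):

  <?php
  // Sketch of the planned read-through cache, not production code.
  $mc = new Memcached();
  $mc->addServer('127.0.0.1', 11211);

  $key  = 'clickdata:' . $clickId;
  $data = $mc->get($key);
  if ($data === false) {
      // Cache miss: run the joins once, store the denormalized result.
      $data = fetch_denormalized_row($clickId);  // hypothetical helper
      $mc->set($key, $data, 300);                // cache for 5 minutes
  }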

I'm starting to learn more about nginx + php-fpm; thanks for
sharing your positive experience with it.

-behzad
