Hi Merlin,
I use some tools I made myself; I have written a few PHP scripts to
gather this information.
The figures I gave you regarding concurrency (requests per second)
were obtained simply by analyzing the Apache logs. Look at the left of
each line: you will see the date and time of the request; just group
together all the requests that fall within the very same second.
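That grouping can be sketched with a few lines of PHP. This assumes the timestamp is enclosed in [...] as in the common/combined log formats; the function name is mine, not an existing API:

```php
<?php
// Hypothetical helper: group access-log lines by the second in their
// [timestamp] field and return requests-per-second counts.
function countRequestsPerSecond(array $lines): array {
    $perSecond = [];
    foreach ($lines as $line) {
        if (preg_match('/\[([^\]]+)\]/', $line, $m)) {
            // $m[1] looks like "10/Oct/2000:13:55:36 -0700";
            // the first 20 characters identify the second.
            $second = substr($m[1], 0, 20);
            $perSecond[$second] = ($perSecond[$second] ?? 0) + 1;
        }
    }
    arsort($perSecond); // busiest seconds first
    return $perSecond;
}

// Typical use:
// $counts = countRequestsPerSecond(file('/usr/local/apache/logs/access_log'));
```

The top entries of the returned array are your peak requests-per-second values.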
The active-user counts were obtained by analyzing the sess_* files and
looking for a specific session variable directly in the serialized
data. Of course, this session variable has to be set in the user's
session by the corresponding application.
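A rough sketch of that scan, assuming the default PHP session serializer (which stores each variable as 'name|<serialized value>'); the session directory and the marker variable name are placeholders for whatever your app uses:

```php
<?php
// Count "active" users by scanning session files for a marker variable.
// $sessionDir and $marker are assumptions; adjust to your setup.
function countActiveSessions(string $sessionDir, string $marker): int {
    $active = 0;
    foreach (glob($sessionDir . '/sess_*') as $file) {
        $data = file_get_contents($file);
        // The default serializer writes 'name|<value>', so a plain
        // substring search for 'name|' is enough for counting.
        if ($data !== false && strpos($data, $marker . '|') !== false) {
            $active++;
        }
    }
    return $active;
}

// echo countActiveSessions('/var/lib/php/sessions', 'logged_user');
```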
There are other measures you can get. One of the most useful is the
execution time of the script. For this, you will have to modify the
Apache log format this way:
LogFormat "%h %l %u %t \"%r\" %s %b %T %P" mycustomlog
and then tell apache to use it:
CustomLog /usr/local/apache/logs/access_log mycustomlog
This format adds the last two parameters: %T indicates the execution
time of the request, and %P is the process id of the server process
that served the request. Note 1: %P is not mandatory, it is just nice
;-). Note 2: pay attention to the case; the directives are
case-sensitive.
The important one is %T. The time is in seconds, and it includes
everything, from PHP input processing through database interaction to
PHP output processing. As you can see, this is the total execution
time of a PHP script. The only thing that is obviously not included is
the network transfer time from the web server to the user's browser.
So, a PHP script digests the whole Apache log and generates something
like this (an example of the execution times from my app):
| Seconds | Requests | Percentage (%) | Accumulated (%) |
|       0 |  2003346 |        76.4794 |         76.4794 |
|       1 |   554878 |        21.1829 |         97.6623 |
|       2 |    43392 |         1.6565 |         99.3188 |
|       3 |    11247 |         0.4294 |         99.7482 |
|       4 |     3437 |         0.1312 |         99.8794 |
|       5 |     1371 |         0.0523 |         99.9317 |
|       6 |      573 |         0.0219 |         99.9536 |
|       7 |      300 |         0.0115 |         99.9650 |
|       8 |      171 |         0.0065 |         99.9716 |
|       9 |      113 |         0.0043 |         99.9759 |
|      10 |       66 |         0.0025 |         99.9784 |
|      11 |       26 |         0.0010 |         99.9794 |
|      12 |       25 |         0.0010 |         99.9803 |
|      13 |       17 |         0.0006 |         99.9810 |
|      14 |       10 |         0.0004 |         99.9814 |
|      15 |       15 |         0.0006 |         99.9819 |
|      16 |       10 |         0.0004 |         99.9823 |
|      17 |        7 |         0.0003 |         99.9826 |
|      18 |       10 |         0.0004 |         99.9830 |
|      19 |        5 |         0.0002 |         99.9832 |
|      20 |        8 |         0.0003 |         99.9835 |
|     21+ |      433 |         0.0165 |        100.0000 |
| Total Requests: 2619460 |
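The digesting script can be sketched like this. It assumes the custom LogFormat above, where %T is the second-to-last field of each line; the function name is mine:

```php
<?php
// Build the execution-time distribution from a log written with the
// custom format above (%T second-to-last field, %P last). Times of
// $cap seconds or more are lumped into one "21+"-style bucket.
function timeDistribution(array $lines, int $cap = 21): array {
    $buckets = [];
    $total = 0;
    foreach ($lines as $line) {
        $fields = preg_split('/\s+/', trim($line));
        $t = (int) $fields[count($fields) - 2]; // %T, in seconds
        $key = min($t, $cap);
        $buckets[$key] = ($buckets[$key] ?? 0) + 1;
        $total++;
    }
    ksort($buckets);
    return [$buckets, $total];
}

// Printing the table:
// list($buckets, $total) = timeDistribution(file('access_log'));
// $acc = 0.0;
// foreach ($buckets as $sec => $n) {
//     $pct = 100 * $n / $total;
//     $acc += $pct;
//     printf("| %3s | %8d | %7.4f | %8.4f |\n",
//            $sec >= 21 ? '21+' : $sec, $n, $pct, $acc);
// }
```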
Another thing I would suggest is that you profile your application,
maybe using something like Zend Debugger if you can, or writing your
application with profiling in mind.
First of all, it is easier if your application was designed with code
reuse in mind and a reasonable architecture. For example, your SQL
queries should be placed in a business-logic layer so you can reuse
them every time you need them. Extra care must be taken when designing
the API. Let's see an example of this:
function getCustomerTransactions($customerId) {
    // Here goes the SQL logic, e.g.:
    // SELECT * FROM transactions WHERE customer_id = $customerId
}
Inside this function you can measure the SQL execution time or the
fetch time from the database. Remember that fetching thousands and
thousands of records over the network takes a lot of time.
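As a sketch, an instrumented version of the function above might look like this. PDO is an assumption (substitute your own database layer), and logThis() here is just a minimal CSV appender:

```php
<?php
// Minimal CSV logger; in a real app, use whatever logging you prefer.
function logThis(...$fields) {
    file_put_contents('profile.csv', implode(',', $fields) . "\n", FILE_APPEND);
}

// The function above, with the query and fetch phases timed separately.
// Using PDO here is an assumption, not part of the original example.
function getCustomerTransactions(PDO $db, $customerId) {
    $start = microtime(true);
    $stmt = $db->prepare('SELECT * FROM transactions WHERE customer_id = ?');
    $stmt->execute([$customerId]);
    $queryTime = microtime(true) - $start;

    $start = microtime(true);
    $rows = $stmt->fetchAll(PDO::FETCH_ASSOC);
    $fetchTime = microtime(true) - $start; // fetching many rows can dominate

    logThis(date('Y-m-d H:i:s'), 'getCustomerTransactions',
            $queryTime, $fetchTime);
    return $rows;
}
```

Timing the execute() and fetchAll() phases separately shows you whether the database or the network transfer of the result set is the bottleneck.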
Just take an initial timestamp, execute the operation, take the final
timestamp, do the math (a subtraction, BTW) and log it to a file in
the format of your preference (CSV is a good option). Finally, you
might want to write a script to process the log file and show the
results. Ex:
$it = microtime(true); // the profiling starts
// Execute something here (usually, a query execution)
$ft = microtime(true); // the profiling ends
$totalTime = $ft - $it;
logThis(date('Y-m-d H:i:s'), 'function-name', $totalTime);
For high-precision measurements take a look at microtime(). Read the
user comments too:
http://www.php.net/microtime
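The log-processing script can be sketched like this, assuming a CSV with the columns date, function name, seconds (the layout logThis() would write); the function name is mine:

```php
<?php
// Aggregate a profiling CSV (date,function,seconds) into per-function
// call counts and total times, slowest totals first. The column
// layout is an assumption; adjust the parsing to your own format.
function summarizeProfile(array $csvLines): array {
    $stats = [];
    foreach ($csvLines as $line) {
        // Simple split; switch to str_getcsv() if fields may contain commas.
        [, $func, $secs] = explode(',', trim($line));
        $stats[$func]['calls'] = ($stats[$func]['calls'] ?? 0) + 1;
        $stats[$func]['total'] = ($stats[$func]['total'] ?? 0.0) + (float) $secs;
    }
    uasort($stats, function ($a, $b) {
        return $b['total'] <=> $a['total'];
    });
    return $stats;
}

// foreach (summarizeProfile(file('profile.csv')) as $func => $s) {
//     printf("%-30s %6d calls  %8.3fs total  %8.5fs avg\n",
//            $func, $s['calls'], $s['total'], $s['total'] / $s['calls']);
// }
```

The first entries of the output are the functions eating the most total time, i.e. your first optimization targets.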
With this you will be able to see the most-used functions (queries)
and how much time they take to execute. Based on this information, you
will know what to optimize first.