Hi Merlin,

I use some tools I made myself: a few PHP scripts I wrote to gather this information.
The concurrency figures I gave you (requests per second) are obtained simply by analyzing the Apache logs. Look at the left of each log line: you will see the date and time of the request; just group together all the requests that fall within the very same second. The active users were obtained by analyzing the sess_* files and looking for a special session variable directly in the serialized data. Of course, this session variable has to be set by the corresponding app in the user session.

There are other measures you can get. One of the most useful is the execution time of each script. For this, you will have to modify the Apache log format this way:

LogFormat "%h %l %u %t \"%r\" %>s %b %T %P" mycustomlog

and then tell Apache to use it:

CustomLog /usr/local/apache/logs/access_log mycustomlog

This format adds two parameters at the end: %T is the execution time of the request and %P is the process id of the server child that served it.

Note1: %P is not mandatory, it is just nice ;-).
Note2: pay attention to the case, these format directives are case-sensitive.

The important one is %T. The time is in whole seconds and it includes everything, from PHP input processing to database interaction and PHP output processing. As you can see, this is the total execution time of a PHP script (a 0 simply means the request took less than one second). The only thing that is obviously not included is the network transfer time from the web server to the user's browser.

So, a PHP script digests the whole Apache log and generates something like this (an example of the execution times from my app; a rough sketch of such a script appears below, after the API example):

---------------------------------------------------------
| Seconds | Requests | Percentage (%) | Accumulated (%) |
---------------------------------------------------------
|       0 |  2003346 |        76.4794 |         76.4794 |
|       1 |   554878 |        21.1829 |         97.6623 |
|       2 |    43392 |         1.6565 |         99.3188 |
|       3 |    11247 |         0.4294 |         99.7482 |
|       4 |     3437 |         0.1312 |         99.8794 |
|       5 |     1371 |         0.0523 |         99.9317 |
|       6 |      573 |         0.0219 |         99.9536 |
|       7 |      300 |         0.0115 |         99.9650 |
|       8 |      171 |         0.0065 |         99.9716 |
|       9 |      113 |         0.0043 |         99.9759 |
|      10 |       66 |         0.0025 |         99.9784 |
|      11 |       26 |         0.0010 |         99.9794 |
|      12 |       25 |         0.0010 |         99.9803 |
|      13 |       17 |         0.0006 |         99.9810 |
|      14 |       10 |         0.0004 |         99.9814 |
|      15 |       15 |         0.0006 |         99.9819 |
|      16 |       10 |         0.0004 |         99.9823 |
|      17 |        7 |         0.0003 |         99.9826 |
|      18 |       10 |         0.0004 |         99.9830 |
|      19 |        5 |         0.0002 |         99.9832 |
|      20 |        8 |         0.0003 |         99.9835 |
|     21+ |      433 |         0.0165 |        100.0000 |
---------------------------------------------------------
| Total Requests: 2619460                               |
=========================================================

Another thing I would suggest is that you profile your application, maybe using something like Zend Debugger if you can, or by writing your application with profiling in mind. This is easier if your application was designed for code reuse with a reasonable architecture. For example, your SQL queries should be placed in a business logic layer so you can reuse them every time you need them. Extra care must be taken when designing the API. Let's see an example:

function &getCustomerTransactions($customerId) {
    // Here goes the SQL logic:
    // SELECT * FROM transactions WHERE customer_id = $customerId
}

Inside this function you can take measures of the SQL execution time or of the fetch times from the database. Remember that fetching thousands and thousands of records through the network takes a lot of time.
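As promised, here is a minimal sketch of such a log digester. This is not my production script, just one way to do it; it assumes the custom log format above (so %T is the next-to-last field of every line) and an example log path, so adjust both to your setup:

<?php
// Build a histogram of %T values from the access log.
$histogram = array();
$total = 0;
$fp = fopen('/usr/local/apache/logs/access_log', 'r');
while (!feof($fp)) {
    $line = trim(fgets($fp, 4096));
    if ($line == '') continue;
    $fields  = explode(' ', $line);
    $seconds = (int) $fields[count($fields) - 2];  // %T, counted from the end
    $bucket  = ($seconds > 20) ? 21 : $seconds;    // 21 stands for "21+"
    if (!isset($histogram[$bucket])) $histogram[$bucket] = 0;
    $histogram[$bucket]++;
    $total++;
}
fclose($fp);

// Print one row per bucket: count, percentage and accumulated percentage.
ksort($histogram);
printf("| %7s | %8s | %14s | %15s |\n",
       'Seconds', 'Requests', 'Percentage (%)', 'Accumulated (%)');
$accum = 0.0;
foreach ($histogram as $secs => $count) {
    $pct    = 100 * $count / $total;
    $accum += $pct;
    $label  = ($secs == 21) ? '21+' : $secs;
    printf("| %7s | %8d | %14.4f | %15.4f |\n", $label, $count, $pct, $accum);
}
printf("Total Requests: %d\n", $total);
?>

Counting the %T field from the end of the line is important because the request line ("%r") contains spaces, so splitting on spaces and counting from the left would break.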
Just take an initial time, execute an operation, then take the final time, compute the difference, and log it to a file in the format of your preference (CSV is a good option). Finally, you might want to write a script to process that log file and show the results. Example:

$it = getmicrotime();   // the profiling starts

// Execute something here (usually a query execution)

$ft = getmicrotime();   // the profiling ends
$totalTime = $ft - $it;
logThis(date("Y-m-d H:i:s"), "<function-name>", $totalTime);

For high precision measures, take a look at microtime(), and look at the user comments too (getmicrotime() above is not a built-in; see the PPS at the bottom for one way to define it):

http://www.php.net/microtime

With this you will be able to see the most used functions (queries) and how much time they take to execute. Based on this information you will know what to optimize first. Just remember: "Premature optimization is the root of all evil".

-William

PS. Since I developed these scripts for my employer, I'm not able to send the programs to the mailing list. I hope that with this information you will be able to write your own analyzer scripts.

On Mon, 2004-04-12 at 09:45, Merlin wrote:
> Hello William,
>
> how do you measure your statistics? I would be interested in comparing
> those numbers. Do you use a special tool?
>
> Thanx
>
> Merlin
>
>
> William Lovaton wrote:
>
> > I have never used any kind of stress tool on my web app. Right now it
> > is in production under heavy load and some statistics are:
> >
> > Authenticated users: 520 (these are the active sessions)
> > Dynamic requests per second: 25 average
> > Max. dynamic requests per second: 60 to 80 (these are peak values)
> >
> > When I say dynamic requests I mean just PHP scripts. Static content
> > (js, css, images, etc.) is not counted here, but it is usually 3 to 4
> > times the dynamic requests... so, do the math. ;-)
> >
> > For this app I'm using Apache 1.3, PHP 4.3 and Oracle 8.1.7.
> >
> > The server is an old Compaq with Red Hat Linux 9, 2 GB RAM and 4
> > processors at 550MHz each.
> >
> > I hope it can serve you as a reference.
> >
> > Regards,
> >
> > -William
> >
> > On Mon, 2004-04-12 at 08:37, Merlin wrote:
> >
> >> Hi there,
> >>
> >> I am trying to stress my LAMP app with the MS stress tool for web
> >> applications. It simulates about 100 threads.
> >>
> >> It looks like the LAMP app is only able to handle about 6 requests
> >> per second :-(
> >> Is this normal for PHP with database queries?
> >>
> >> Thanx for any info,
> >>
> >> Merlin
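PPS. getmicrotime() and logThis() in the example above are not PHP built-ins, so you have to define them yourself. A minimal sketch based on microtime(); the CSV path and column order here are just examples, adjust them to taste:

function getmicrotime() {
    // microtime() returns "msec sec" as a string in PHP 4;
    // add the two parts to get a float timestamp in seconds.
    list($usec, $sec) = explode(" ", microtime());
    return ((float) $usec + (float) $sec);
}

function logThis($timestamp, $label, $elapsed) {
    // Append one CSV row per measurement.
    $fp = fopen('/tmp/profile.csv', 'a');
    fwrite($fp, "$timestamp,$label,$elapsed\n");
    fclose($fp);
}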