On 30 March 2009 at 13:46:09, Rolf Banting <rolf.b...@gmail.com> wrote:

> On Sun, Mar 29, 2009 at 9:52 PM, Perrin Harkins <phark...@gmail.com>  
> wrote:
>
>> On Sun, Mar 29, 2009 at 4:44 PM, Cosimo Streppone <cos...@streppone.it>
>> wrote:
>> > The main problem is that in the past we experienced some kind of
>> > performance problems that only manifested themselves really clearly
>> > in production and only at peak traffic hours.
>> > Out of peak hours, everything was fine.
>>
>> That sounds like a problem with a shared resource like the database,
>> not something you'll find by profiling the code.  You'd be better off
>> either using DBI::Profile or using logging on your database to find
>> the problem.

I take your points.

The problem we had, back in November last year, was that all the
backends were running at a load of 15.0-20.0 (normal was ~3-4)
after an update to the application.

In a case like that, it's pretty clear where the problem is
(CPU, load, etc.). What's not at all clear is which point in the
code is causing it.

In our case it really was the code, and in particular a single
function, used in many places, that spent a lot of time doing
useless work. We sorted that out "by intuition", knowing the hot
spots of the code.

What I want to do now is prevent this kind of problem, possibly in
a more systematic and/or scientific way, and I thought of doing that
by running automated performance/stress tests before deployment.

While I try to get there, I thought it might be useful to dedicate
one of the live backends to this "live profiling", even though the
application is not having any problems at the moment, even at peak
times.
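
Concretely, I was looking at Devel::NYTProf for that box, since it
ships with a mod_perl hook. If I read its docs correctly, it should
be a matter of something like this in the httpd.conf of the dedicated
backend only (the output path is just an example):

  # NYTPROF is set in the environment that starts Apache, e.g.
  #   NYTPROF=file=/tmp/nytprof.out:addpid=1 apachectl start
  PerlPassEnv NYTPROF
  PerlModule Devel::NYTProf::Apache

Each child then writes its own per-process output file, and
nytprofhtml turns those into reports. The instrumentation overhead
is significant, so that backend would get a reduced weight in the
load balancer.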

Maybe I just have to try and see what I get :)

DBI::Profile is another good lead to follow.
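
From a first look at its docs, enabling it seems as easy as setting
the Profile attribute on the handle (or exporting DBI_PROFILE into
the environment). Something like this, with obviously fake
connection parameters:

  use strict;
  use warnings;
  use DBI;

  my $dbh = DBI->connect( 'dbi:mysql:ourdb', 'user', 'pass',
      { RaiseError => 1 } );

  # Aggregate timings per SQL statement (same as DBI_PROFILE=2)
  $dbh->{Profile} = '!Statement';

  # ... run the normal workload ...

  # Data is dumped to STDERR when the handle goes away, or it can
  # be formatted explicitly:
  print $dbh->{Profile}->format;

That should at least tell us whether the time goes into the database
or into our own code.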

-- 
Cosimo
