B10m wrote:
As more customers sign up and more "features" are added, I see the
rendering time grow; in other words, the site is getting "slower".
I use mod_perl with a PostgreSQL database (all on one FreeBSD
server). Sometimes (at busy times) rendering a "page" can take over
2 seconds, which is quite long (and yes, in really bad times it can
be over 10 seconds). On average, my benchmarks tell me it takes
about half a second to a second to render the output.

Is the 0.5 sec the time on a quiet box? If so, then your slowdowns with increased traffic suggest a resource problem. Given that the times increase so much, I suspect you're running into CPU and IO contention.

Instead of just throwing more RAM and CPU power at it, I wondered
whether there are better tricks available to speed up the site.
Would it help to move PostgreSQL to a different server? Maybe move
some time-consuming processes to a different machine? Set up load
balancing? Reconfigure Apache (again ...)?

The first thing I would suggest is setting up a reverse proxy: a lightweight front-end serves static files and buffers slow clients, so you can run far fewer heavy mod_perl backends and cap their total resource consumption.
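A minimal sketch of such a front-end in httpd.conf, assuming the mod_perl backend listens on 127.0.0.1:8080 and the app lives under /app/ (port and paths are placeholders, adjust to your setup):

```apache
# Lightweight front-end Apache: mod_proxy loaded, mod_perl NOT loaded.
# Static files are served directly from this instance; dynamic
# requests are forwarded to the heavy mod_perl backend.
ProxyRequests Off            # reverse proxy only, never a forward proxy
ProxyPass        /app/ http://127.0.0.1:8080/app/
ProxyPassReverse /app/ http://127.0.0.1:8080/app/

# On the mod_perl backend, keep the process count low and steady,
# e.g. something like:
#   MaxClients        20
#   MinSpareServers   10
#   MaxSpareServers   15
```

The front-end processes are tiny, so you can run many of them; the expensive mod_perl children stay few in number and busy, instead of being tied up spoon-feeding slow clients.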

Moving the database to another machine would be a good second step, but I'd check if your database is the bottleneck first. Eyeball "top" for a while, and see if mod_perl or pgsql is the one maxing out.

Do you use persistent database connections? Can your database deal with that many concurrent connections? Limiting the number of mod_perl processes would limit the number of database connections, which would reduce the pressure there.
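If you're not using persistent connections yet, Apache::DBI is the usual mod_perl approach. A sketch, assuming a startup.pl pulled in via PerlRequire in httpd.conf (dsn, user and password are placeholders):

```perl
# startup.pl -- loaded with "PerlRequire /path/to/startup.pl".
# Apache::DBI must be loaded BEFORE DBI so it can override connect().
use Apache::DBI ();
use DBI ();

# Optionally open the connection when each child starts, instead of
# on the first request it serves:
Apache::DBI->connect_on_init(
    'dbi:Pg:dbname=mydb', 'myuser', 'mypassword',
    { AutoCommit => 1, RaiseError => 1 },
);
```

Keep in mind each mod_perl child then holds one open connection, so max_connections in postgresql.conf has to be at least your MaxClients — another reason to cap the number of backends.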

Also, what would be the best way (in C::A) to find the bottleneck?
My benchmarking right now is limited to a Benchmark::Timer
implementation, which can tell me the time it took to get the work
done.

If you think there are bottlenecks in your code, you'll need to increase the resolution of your timings to smaller portions of the code. I'd suggest adding timings around the code that gets your database connection, and in a prerun and a postrun hook, then drilling down on the longest parts. But remember to do this on an otherwise unused box; customer traffic will taint your measurements. If there's something obvious you can do to improve performance without compromising the readability of your code, do it. Otherwise don't bother.
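For the hook part, something along these lines (an untested sketch; "MyApp" and the per-process timer variable are placeholders, and Benchmark::Timer is the module you're already using):

```perl
package MyApp;
use base 'CGI::Application';
use Benchmark::Timer;

# One timer per Apache child; each child accumulates its own stats.
my $timer = Benchmark::Timer->new;

sub cgiapp_prerun {
    my $self = shift;
    # Tag the timing with the run mode so slow modes stand out.
    $timer->start( 'runmode_' . $self->get_current_runmode );
}

sub cgiapp_postrun {
    my $self = shift;
    $timer->stop( 'runmode_' . $self->get_current_runmode );
    warn $timer->report;    # goes to the error log; a file works too
}
```

Once you know which run mode is slow, move the start/stop pair inward (around the DB connect, the query, the template fill) until you've isolated the expensive piece.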

Any pointers of what I can do to speed things up again?

Check "top" etc to see which process is the most resource hungry. Who's maxing out the CPU? Are you swapping?

I'd just add more hardware. If your site is important for your business, you will want more capacity and redundancy anyway. You shouldn't rely on a single machine for a site that makes you money: if it goes down, it'll cost you.

Have a look at Apache 2.2. Its proxy-balancer feature is very good, and a cheap and simple way to build a cluster of mod_perl servers. PostgreSQL has clustering capabilities as well. If you can afford the hardware, make use of it.
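With mod_proxy_balancer that can look like this (backend addresses are placeholders for your mod_perl boxes):

```apache
# Apache 2.2 front-end with mod_proxy, mod_proxy_http and
# mod_proxy_balancer loaded. Define a pool of mod_perl backends:
<Proxy balancer://mperl>
    BalancerMember http://10.0.0.11:8080
    BalancerMember http://10.0.0.12:8080
</Proxy>

# Spread /app requests across the pool.
ProxyPass /app balancer://mperl/app
# Rewrite redirects coming back from each backend:
ProxyPassReverse /app http://10.0.0.11:8080/app
ProxyPassReverse /app http://10.0.0.12:8080/app
```

If one backend dies, the balancer routes around it, which gets you redundancy as well as capacity from the same config.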

The first step should be to remove all the single points of failure. Using more hardware probably means you'll have enough capacity to grow. After that, it may be time to look at specific code optimizations.

Rhesa

--
#!/usr/bin/perl
tie %rope, 'Tree' && hang $self;

---------------------------------------------------------------------
Web Archive:  http://www.mail-archive.com/[email protected]/
             http://marc.theaimsgroup.com/?l=cgiapp&r=1&w=2
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
