I would probably be looking at 2-4 machines with 4 gigs of RAM each to
handle a load like this, balanced with mod_backhand or LVS.  Most likely
this would be a peak load experienced only a couple of times a year,
during which system failure is not an option :)  The majority of requests
would be web-service style and unlikely to require much transformation -
in most cases serving raw XML, but with transformation available when
needed.
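
For my own sanity, here is the back-of-envelope version of the arithmetic
Tod lays out below.  Every figure in it is an assumption (per-child size,
delivery time, page size, RAM per box), not a measurement of my app:

  #!/usr/bin/perl
  # rough capacity estimate - all inputs are guesses, not measurements
  use strict;
  use warnings;
  use POSIX qw(ceil);

  my $peak_rps       = 100;   # requests per second at peak
  my $avg_delivery_s = 4;     # seconds a 56k client ties up a child
  my $child_rss_mb   = 10;    # resident size of one mod_perl child
  my $ram_per_box_gb = 4;     # RAM in each server
  my $page_kb        = 50;    # average response size

  my $children = $peak_rps * $avg_delivery_s;        # children busy at once
  my $ram_gb   = $children * $child_rss_mb / 1024;   # RAM to hold them all
  my $boxes    = ceil($ram_gb / $ram_per_box_gb);    # 4 gig servers needed
  my $mbits    = $peak_rps * $page_kb * 8 / 1000;    # outbound bandwidth

  printf "%d children, %.1f GB RAM, %d box(es), %.0f Mbit/s out\n",
      $children, $ram_gb, $boxes, $mbits;

At 100 rps that works out to 400 children, roughly 4 gigs and one box,
which matches the numbers quoted below; plug in 1000 rps and it balloons
to 4000 children and around 40 gigs before any reverse-proxy tuning,
which is why I'd rather spread the peak across a few 4 gig machines
behind the balancer.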

The benchmark I've based my assumptions on is
http://www.chamas.com/bench/.  While this is definitely a case of needing
to evaluate your own application's specific resource needs, my main
concern here is efficient use of resources given the amount of delivery
flexibility I need.  It sounds like the performance probably rivals
Mason's, all other criteria being equal (static content proxying,
database/network bottlenecks, etc.), and this solution offers the
features I need.

Definitely 1000 rps is a LOT to handle without a performance slowdown,
and it's something I wouldn't see on a daily basis.  Sounds like at that
level Apache/mod_perl and system tuning is the deciding factor - but
having dealt with situations where my app has overrun a machine with 4
gigs of RAM, I am in the market for an app platform that doesn't have
admitted memory leaks :o
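
Regardless of which platform wins, the usual containment I'm planning for
fat or leaky children is simply to recycle them - a sketch only, with
placeholder limits that aren't tuned for anything real:

  # httpd.conf (Apache 1.3 / mod_perl 1.x) - placeholder values
  MaxClients           120
  MaxRequestsPerChild 1000    # recycle children so slow leaks can't pile up
  MinSpareServers        5
  MaxSpareServers       20

  <Perl>
      # kill any child whose process size grows past ~12 MB (value in KB)
      use Apache::SizeLimit;
      $Apache::SizeLimit::MAX_PROCESS_SIZE = 12_000;
  </Perl>
  PerlFixupHandler Apache::SizeLimit

That doesn't fix a leak, but it keeps one from taking a 4 gig box down
with it during a peak.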

Thanks for the help guys, I'll let you know how it goes.

> I was going to say... these sound like pretty reasonable numbers to me
> as well.
>
> Here's the thing with requests per second. Suppose you say "I want 100
> RPS". That means your average joe web user on his 56k modem (still
> around half of all users, at least) is going to take 1-2 seconds at
> best (and more like 3-4 typically) to receive a response. So your 100
> RPS server needs to be serving 400 requests at a time on average,
> which is 400 Apaches in RAM, which if they are typical mod_perl
> Apaches means upwards of 2 and more likely around 4 gigs of RAM
> required to avoid total meltdown. With reverse proxying and the best
> possible mod_perl tuning you might get that down to 1 gig or so, which
> means that memory-wise you can expect to hit 100 RPS on a single
> machine. Now you have to consider I/O bandwidth: you're dealing with
> 50k x 100 = 5000k/sec, which is not bad and probably mostly served out
> of memory if it's fairly static AxKit content anyway, so disk is
> probably not even close to a problem. CPU may or may not be a problem;
> it's going to depend entirely on your application.
>
> Now, consider 1 THOUSAND requests per second. Now you're talking about
> a machine that would have to have AT LEAST 10 gigs of RAM in it,
> probably at least 4-way SMP and more likely 8-way SMP, and a decent
> set of disk arrays. There ARE machines in that class; VERY few people
> have them. In general, when you get to that level, fault tolerance
> alone demands serious clustering and multi-homing. We haven't even
> considered database requirements either: a 1000 hit-per-second MySQL
> server is also non-trivial iron, and you will want that clustered as
> well.
>
> The question I have to ask is "do you REALLY think you'll get 1000
> hits a second???" I mean, that's several million hits per day,
> conservatively. Unless you're building one of the top 50 sites on the
> net, I am really skeptical you'll ever see 1000 hits in one second.
> I've probably built 50 large sites, some fairly heavily used at that,
> and have yet to see one break that mark.
>
> On Tuesday 25 February 2003 01:00 pm, Matt Sergeant wrote:
>> On Tue, 25 Feb 2003, Fred Moyer wrote:
>> > 3) XML compatibility - yes, I could use Mason as a filter like you
>> > suggested, but I'm still limited by the performance constraints of
>> > the component structure.  I'm hoping to use server-side includes or
>> > something similar with AxKit to emulate this kind of template
>> > structure - I'm sure it will be more work, but I'm at the point
>> > where I need to process thousands of requests per second as opposed
>> > to hundreds, and to integrate with other vendors using XML
>> > standards.  AxKit seems to be the 'big kids' Mason equivalent.
>>
>> While we the developers are extremely flattered by this, I have severe
>> doubts about reaching the thousands-of-requests/sec mark. I have a
>> pretty funky AxKit app here that has to scale (possibly to millions of
>> users), but I'm achieving that via sensible partitioning, not via a
>> fast AxKit.
>>
>> FYI, with no tuning whatsoever (either to Apache or PostgreSQL), on a
>> page that does no SQL but does run through some XSP code, I get about
>> 16 rps.
>>
>> "Static" content running through AxKit on similar hardware gets about
>> 120 rps.
>>
>> I hope you have a lot of machines ;-)
>
> --
> Tod Harter
> Giant Electronic Brain


Fred Moyer
Digital Campaigns, Inc.



