LiveJournal is an excellent example; it's where I stole the idea of using memcache: http://www.linuxjournal.com/article/7451

My vision for scaling up is to use URL partitioning, so that similar data all goes to the same host(s) and there is no shared cache. For example, if you have a news site, all the tech articles could go to one server (or set of servers) that has all the tech articles in its cache. This means invalidation has to iterate over a list of hosts, but them's the breaks.
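A rough sketch of that partitioning scheme in Python (the pool names, ports, and fallback pool here are all hypothetical; a real setup would wire in an actual memcache client):

```python
import hashlib

# Hypothetical cache pools: each content category gets its own set of
# hosts, so similar data lands together and pools share nothing.
CACHE_POOLS = {
    "tech": ["cache-tech-1:11211", "cache-tech-2:11211"],
    "sports": ["cache-sports-1:11211"],
}

def hosts_for_url(url_path):
    """Pick the cache pool from the first path segment of the URL."""
    category = url_path.strip("/").split("/")[0]
    # Assumption: unknown categories fall back to the "tech" pool.
    return CACHE_POOLS.get(category, CACHE_POOLS["tech"])

def host_for_key(url_path, key):
    """Deterministically choose one host in the pool for a given key."""
    pool = hosts_for_url(url_path)
    digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return pool[digest % len(pool)]

def invalidate(url_path, key):
    """Invalidation is the ugly part: walk every host in the pool."""
    for host in hosts_for_url(url_path):
        pass  # e.g. send a delete for `key` to this memcache host
```

Lookups only ever touch one host, which is why the scheme scales; only invalidation pays the cost of visiting the whole pool.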

On 3/17/06, Bob Ippolito <[EMAIL PROTECTED]> wrote:


On Mar 17, 2006, at 3:17 PM, Robin Haswell wrote:

>
>
>> "my time has a cost and optimisation often buys less performance
>> than, say, a Dell SC1425"
>> Unfortunately my time is not worth an IBM 64-way mainframe (or I
>> would be one happy hacker). Bigger machines help, but as my comment
>> said before, this will give you only linear improvement; at some
>> point you will need _exponential_ optimizations. This also depends
>> on the complexity of the data relationships that your application
>> needs. You need a machine that is 64 times faster but
>
> Nah mate you miss my point! Not bigger machines, *more* machines. A
> Dell SC1425 is a pretty low-end piece of kit; the idea is you use
> multiple machines.
>
> Let's say you have an application that is currently running at 100%
> above acceptable capacity. You can solve this problem in basically
> four ways:
>
> 1. Buy hardware that is twice as powerful
> 2. Perform optimisation, caching - etc.
> 3. A combination of 1) and 2)
> 4. Buy another similar server and run them both
>
> In my experience, 4) is always the cheapest option, and requires less
> hassle than 2) and 3) (and less hassle is the TG way!). The trick is
> to make option 4 possible by asking questions like "What will happen
> if I use two app or database servers, or both?" early on in the build
> process. I do this for everything and it's served me well so far :-)
> Part of my personal PHP standard library is some wrappers around
> session management and database handling that mean:
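The kind of session wrapper Robin describes, translated into Python terms, might look like this: sessions live in a store shared by every app server, so a request can land on any box behind the load balancer (the class names and the dict-based demo backend are hypothetical stand-ins for a real memcache client):

```python
import pickle

class SharedSessionStore:
    """Sessions kept in a backend shared by all app servers, so any
    server can handle any request without sticky load balancing."""

    def __init__(self, backend):
        # Assumption: `backend` exposes get(key) and set(key, value),
        # like a memcache client would.
        self.backend = backend

    def load(self, session_id):
        data = self.backend.get(session_id)
        return pickle.loads(data) if data is not None else {}

    def save(self, session_id, session):
        self.backend.set(session_id, pickle.dumps(session))

# In-memory dict standing in for memcached in this demo.
class DictBackend(dict):
    def set(self, key, value):
        self[key] = value

store = SharedSessionStore(DictBackend())
store.save("abc123", {"user": "robin"})
```

Because nothing session-related lives on the local filesystem, adding a second (or tenth) app server is just a matter of pointing it at the same backend.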

Scaling horizontally, what you list as 4, is the only real option.
There's plenty of public record that shows that all the successful
guys (Google and LiveJournal come to mind) are using lots of
relatively cheap servers, rather than small numbers of giant
servers.  If you design for that, you'll never have a problem so long
as you can afford to operate, and that's not so tough of a problem
because the costs are at worst linear.  With any other option, the
price to upgrade grows exponentially and there's a ceiling on what
kind of power you can even buy to run an app that is mostly serial.

Good optimizations can do wonders in the short term, e.g. cut
immediate hardware costs in half... but you get that anyway if you
wait about a year.  It's typically better to expand your service such
that it maximizes profits, rather than optimize your service to
minimize your overhead.  There's only so low you can go with cutting
your overhead... but there's no well-defined ceiling for maximum


--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups "TurboGears" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/turbogears
-~----------~----~----~----~------~----~------~--~---
