I guess the confusion came from my original post, where I said:
* Application servers scale easy as pie, database servers scale like
hernias
I should have been more specific: by "application server" I meant TG, PHP, wherever the business logic of the code lives. Maybe "easy as pie" was the wrong analogy to use.

The m/cluster technology is what I usually call a database proxy. C-JDBC was the first time I thought about using a DB proxy. It is a really great solution for failover but not for scalability. If I am doing more database transactions per second, I can't just add another database server; I have to buy a bigger database server, and if I am using a database proxy and have 3 database servers, that means I have to buy 3 bigger database servers. You are technically correct that database writes don't end up in exactly one place, but those writes are copied, not split up. So now every write ends up on every single database node, which only scales if your writes stay the same and your reads increase. In all the applications I have worked on, reads and writes increase as load increases; sometimes you can have a spike in the reads and not the writes, it just depends on the application.

To write to more than one place you have to have the technology to load balance the writes. Then reads will need to know which server to go to for which data. I think this would be a blast to code!
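Just to sketch what I mean by load balancing the writes, here's a toy example - the server list and the hashing scheme are completely made up, and it's not how m/cluster or any real product does it - where each row is written to exactly one server based on a shard key, and a read for that key knows which server to ask:

import hashlib

# Made-up connection strings - in real life these would be three
# separate database hosts.
SHARDS = [
    "db1.example.com/mydb",
    "db2.example.com/mydb",
    "db3.example.com/mydb",
]

def shard_for(key):
    """Hash the key so the same key always maps to the same server,
    both for the write and for every later read of that row."""
    digest = hashlib.md5(str(key).encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

# The write for user 42 goes to exactly one shard...
print("write user 42 ->", shard_for(42))
# ...and a read for user 42 has to ask that same shard.
print("read  user 42 ->", shard_for(42))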

Fortunately, reads are the majority of most applications, especially web applications. That's why some simple caching is easier and more effective than replication of data.
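That's also why a dead simple cache in front of the reads goes a long way. Something like this (fetch_user, the TTL and everything else here is purely illustrative) keeps repeated reads off the database entirely:

import time

_cache = {}      # key -> (value, time it was cached)
CACHE_TTL = 30   # seconds; how stale a read is allowed to be

def fetch_user(user_id):
    """Stand-in for the real database query."""
    return {"id": user_id, "name": "user%d" % user_id}

def get_user(user_id):
    """Serve repeated reads from memory; only cache misses hit the DB."""
    entry = _cache.get(user_id)
    if entry is not None and time.time() - entry[1] < CACHE_TTL:
        return entry[0]                       # cache hit, no DB work
    value = fetch_user(user_id)               # cache miss, one DB read
    _cache[user_id] = (value, time.time())
    return value

get_user(42)    # first call hits the "database"
get_user(42)    # second call is answered from the cache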


On 3/17/06, Robin Haswell <[EMAIL PROTECTED]> wrote:


> "my time has a cost and optimisation often buys less performance than,
> say, a Dell SC1425"
> Unfortunatly my time is not worth a IBM 64way mainframe (or I would be
> one happy hacker). Bigger machines help but as my comment said before
> this will give you only linear optimization at some point you will need
> _exponential_ optmizaitions. This also depends on the complexity of the
> data relationships that your application needs. You need a machine that
> is 64 times faster buy

Nah mate you miss my point! Not bigger machines, *more* machines. A Dell
SC1425 is a pretty low-end piece of kit; the idea is that you use multiple
machines.

Let's say you have an application that is currently running at 100%
above acceptable capacity. You can solve this problem in basically four
ways:

1. Buy hardware that is twice as powerful
2. Perform optimisation, caching - etc.
3. A combination of 1) and 2)
4. Buy another similar server and run them both

In my experience, 4) is always the cheapest option, and requires less
hassle than 2) and 3) (and less hassle is the TG way!). The trick is to
make option 4 possible by asking questions like "What will happen if I
use two app or database servers - or both" early on in the build
process. I do this for everything and it's served me well so far :-)
Part of my personal PHP standard library is a set of wrappers around
session management and database handling which mean that:

1) All my session data is stored in the database, which means from then
on I can implement *all* my persistent storage in an RDBMS.

2) My database "reads" and my database "writes" are separated and
controllable, so if we need to add replication it's possible to direct
all writes to the master server and balance reads between the slaves.
(Yes I said there are alternatives to the master/slave setup, but in web
apps which are mostly read-heavy it's a pretty good solution anyway).
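To give a rough idea of what 2) looks like, here's a stripped-down sketch. It's not my actual PHP wrapper - the hostnames are invented and I've written it in Python for the sake of this list - but the shape is the same: writes always go to the master, reads get balanced across the slaves.

import random

class SplitDB(object):
    """Route writes to the master and spread reads over the slaves."""

    def __init__(self, master, slaves):
        self.master = master
        self.slaves = slaves or [master]

    def write(self, sql, params=()):
        # INSERT/UPDATE/DELETE always hits the master.
        return self._run(self.master, sql, params)

    def read(self, sql, params=()):
        # SELECTs get balanced across the slaves.
        return self._run(random.choice(self.slaves), sql, params)

    def _run(self, server, sql, params):
        # A real wrapper would execute against a live connection;
        # this just shows which server each statement would go to.
        return (server, sql, params)

db = SplitDB("master.example.com", ["slave1.example.com", "slave2.example.com"])
print(db.write("INSERT INTO sessions (id, data) VALUES (%s, %s)", ("abc", "{}")))
print(db.read("SELECT data FROM sessions WHERE id = %s", ("abc",)))

With a wrapper like that in place, adding replication later is mostly a matter of changing the server list it's constructed with.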

-Rob

PS. If you're interested in the "writing to one place" problem, you
should look at m/cluster
(http://www.continuent.com/index.php?option=com_content&task=view&id=211&Itemid=168).
We have our own solution, but in general it's a pretty awesome setup for
database scaling through multiple servers.