I would use replication with multiple masters. You could set up 2, 3, or however many servers that all replicate to one server (which you may already be doing). If you are using auto_increment, I would set auto_increment_increment larger than you currently need, say 10, to allow for growth. You then have a list of servers to connect to in your PHP code, and every time it needs to connect, it picks a random server.
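
A rough sketch of that in PHP, for example (hostnames, credentials and the database name are made up, and each master would get its own auto_increment_offset):

<?php
// On each master, something like (this is master 1 of 3):
//   SET GLOBAL auto_increment_increment = 10;  -- room to add masters later
//   SET GLOBAL auto_increment_offset    = 1;   -- 2 on the second master, 3 on the third, ...
$masters = array('db1.example.com', 'db2.example.com', 'db3.example.com');

// Pick a random master for this request.
$host = $masters[array_rand($masters)];

$link = mysqli_connect($host, 'app_user', 'app_pass', 'app_db');
if (!$link) {
    die('Could not connect to ' . $host . ': ' . mysqli_connect_error());
}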

Splitting by user id ranges isn't going to split the load evenly. If you are going to partition that way, I would have even ids go to one server and odd ids to the other, or route on user_id modulo the number of servers if you have more than two.
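
Something along these lines, for example (hostnames are placeholders):

<?php
// Route a user to a shard by user_id modulo the number of shards,
// so consecutive ids alternate between servers instead of filling
// one range before moving on to the next.
$shards = array('db1.example.com', 'db2.example.com');
$host   = $shards[$user_id % count($shards)];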


On Jul 30, 2007, at 12:37 AM, Hiroyuki Yamada wrote:

I have a question about data partitioning.

I'm currently running a database server that gets too many writes, so I am
trying to partition the data at the application level.

What I'm trying to do is partition the data by user,
for example like the following:

user_id 1-100000 => to server1
user_id 100001-200000 => to server2
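
In application code, that routing would be something like the following (the hostnames are made up):

<?php
// Placeholder routing by user_id range.
function server_for_user($user_id) {
    if ($user_id <= 100000) {
        return 'server1.example.com';
    }
    return 'server2.example.com';
}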


I have the two tables listed below,

user - user profile and so on. (few writes)
user_write - user write history (many writes)

and I am trying to partition the user_write table because it gets so many writes.

           -------------
               user
           -------------

    -------------   -------------
     user_write1     user_write2
    -------------   -------------


What would you do to execute a consistent insertion into both user and user_write1
in this situation?
(Both insertions have to succeed; having only one of them succeed is not acceptable.)

Does two-phase commit in MySQL 5 work in this kind of situation
(a transaction spanning multiple servers)?
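
What I have in mind is something like the following sketch using MySQL 5's XA statements from PHP (hostnames, credentials and column names are placeholders, and error handling is simplified):

<?php
$c1 = mysqli_connect('server1.example.com', 'app', 'pass', 'app_db');
$c2 = mysqli_connect('server2.example.com', 'app', 'pass', 'app_db');

// Start one XA branch on each server.
mysqli_query($c1, "XA START 'tx1'");
mysqli_query($c2, "XA START 'tx1'");

// Example writes; the columns are made up for illustration.
mysqli_query($c1, "INSERT INTO user (user_id, name) VALUES (42, 'foo')");
mysqli_query($c2, "INSERT INTO user_write1 (user_id, action) VALUES (42, 'signup')");

mysqli_query($c1, "XA END 'tx1'");
mysqli_query($c2, "XA END 'tx1'");

// Phase 1: ask both servers whether they can commit.
$ok1 = mysqli_query($c1, "XA PREPARE 'tx1'");
$ok2 = mysqli_query($c2, "XA PREPARE 'tx1'");

// Phase 2: commit only if both prepared; otherwise roll back the
// branches that did prepare.
if ($ok1 && $ok2) {
    mysqli_query($c1, "XA COMMIT 'tx1'");
    mysqli_query($c2, "XA COMMIT 'tx1'");
} else {
    if ($ok1) mysqli_query($c1, "XA ROLLBACK 'tx1'");
    if ($ok2) mysqli_query($c2, "XA ROLLBACK 'tx1'");
}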

What would you do when using MySQL 4?


Thanks,

Hiroyuki Yamada



