Hello all,

        We are doing a lot of data-warehousing-type work. The catch is that we
need to get data back out in seconds rather than minutes. To accomplish this,
we generate summary tables in real time. The problem we run into is that
generating the summary tables becomes very costly, and our SELECTs and
UPDATEs are stepping on each other's toes.
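For concreteness, our summary maintenance looks roughly like the following (the table and column names here are made up for illustration):

```sql
-- Hypothetical example: roll raw rows up into a per-hour summary table.
-- Rebuilding the affected summary rows like this on every pass is what
-- gets so costly when it runs continuously against live writes.
REPLACE INTO sales_summary (product_id, hour, total_qty, total_amount)
SELECT product_id,
       DATE_FORMAT(sold_at, '%Y-%m-%d %H:00:00') AS hour,
       SUM(qty),
       SUM(amount)
FROM   sales_raw
WHERE  sold_at >= NOW() - INTERVAL 1 HOUR
GROUP  BY product_id, hour;
```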

        So here is what I'm wondering: in such an update-heavy environment,
would replication be helpful? Would reading from the slave be any faster
than reading from the master, or would the overhead of replicating hundreds
of updates per second make replication a bad idea? And, as a follow-up: if
we set up a replication scheme like this, could we use LOCK TABLES on the
master server for 10 seconds or so at a time to increase the number of
updates per second we can process?
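By locking tables for 10 seconds at a time, I mean batching the queued writes roughly like this (again, names are illustrative):

```sql
-- Hypothetical example: take a write lock, apply ~10 seconds' worth of
-- queued updates in one burst, then release so readers can catch up.
LOCK TABLES sales_summary WRITE;

UPDATE sales_summary
SET    total_qty = total_qty + 5, total_amount = total_amount + 120.00
WHERE  product_id = 1 AND hour = '2002-01-01 10:00:00';

UPDATE sales_summary
SET    total_qty = total_qty + 2, total_amount = total_amount + 40.00
WHERE  product_id = 2 AND hour = '2002-01-01 10:00:00';

-- ... the rest of the queued updates ...

UNLOCK TABLES;
```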

Thanks,
Jon Gardiner

---------------------------------------------------------------------
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/           (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php
