On Fri, 24 Jul 2009 10:34:56 -0400, Jay Pipes <[email protected]> wrote:

> Barry Leslie wrote:
>>
>> On 7/24/09 1:56 AM, "Paul McCullagh" <[email protected]> wrote:
>>
>>> On Jul 23, 2009, at 3:15 PM, Stewart Smith wrote:
>>>
>>>> On Tue, Jul 21, 2009 at 09:28:54PM -0700, MARK CALLAGHAN wrote:
>>>>> How is the serial log to be kept in sync with a storage engine given
>>>>> the Applier interface? MySQL uses two-phase commit, but the Applier
>>>>> interface has one method, ::apply(). In addition to methods for
>>>>> performing 2PC, keeping a storage engine and the serial log in sync
>>>>> requires additional methods for crash recovery, to support commit or
>>>>> rollback of transactions in state PREPARED in the storage engine
>>>>> depending on the outcome recorded in the serial log.
>>>>
>>>> The bit that keeps banging in my head in regards to this is storing it
>>>> in the same engine as part of the transaction, and so avoiding 2PC.
>>>
>>> We discussed this on Drizzle Day, and that was my recommendation.
>>>
>>> This would mean that, after a transaction has committed, the replication
>>> system asks the engine for a "list of operations" that were performed
>>> by the transaction.
>>>
>>> For engines that have this information in their transaction log, it is
>>> a relatively simple task.
>>
>> So at the engine level there would be something like a 'get' and 'put'
>> for transactions, so that on the master server, after a commit, the 'get'
>> method is called and the data is sent to the slaves, where they call the
>> engine's 'put' method to apply the transaction?
>>
>> Would it be possible to cut out the middle man here and have the server
>> tell the engine where its slaves are, so the engine can just send the
>> committed transactions directly to the slave engines? This would make it
>> a lot easier to stream the transactions.
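To make the 'get'/'put' idea above concrete, here is a rough sketch of what such an engine-level interface might look like. All names here are hypothetical illustrations, not actual Drizzle or PBXT API; the point is just that the master asks the engine for a committed transaction's operation list after commit, and the slave engine re-applies it, with no 2PC needed because the record only exists once the master has committed:

```cpp
#include <cstdint>
#include <map>
#include <string>
#include <vector>

// Hypothetical "list of operations" for one committed transaction, as the
// engine might reconstruct it from its own transaction log.
struct TransactionRecord {
  uint64_t trx_id;                      // commit order on the master
  std::vector<std::string> operations;  // serialized row changes
};

// Hypothetical engine-side replication hooks: the master calls get() after
// a commit, and the slave feeds the resulting record into put().
class EngineReplicationAPI {
 public:
  // Master side: the engine records each commit in its transaction log.
  void record_commit(const TransactionRecord &rec) { log_[rec.trx_id] = rec; }

  // Master side: return the operation list for a committed transaction.
  bool get(uint64_t trx_id, TransactionRecord *out) const {
    std::map<uint64_t, TransactionRecord>::const_iterator it =
        log_.find(trx_id);
    if (it == log_.end())
      return false;
    *out = it->second;
    return true;
  }

  // Slave side: apply a transaction record received from the master.
  // Applying strictly in commit order keeps the slave consistent.
  void put(const TransactionRecord &rec) { applied_.push_back(rec.trx_id); }

  const std::vector<uint64_t> &applied() const { return applied_; }

 private:
  std::map<uint64_t, TransactionRecord> log_;  // stands in for the engine log
  std::vector<uint64_t> applied_;
};
```

Barry's "cut out the middle man" variant would then amount to the master engine calling the slave engine's put() over the network itself, instead of handing the record back to the server's replication layer.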
Or rather, pass some sort of preconfigured replicator handle to the storage engine, to spare it the complexities of cluster topology and semantics. On second thought, this could be not just a replicator, but any entity interested in storage engine events. Jay, could it be the publisher/subscriber thing you're talking about below? I'm afraid I'm missing the context here: how can publisher/subscriber be a single module? Perhaps an interface? Is it about communication between different machines, or between different modules on a single machine? Could you give an example of what could be a "publisher" and a "subscriber"?

> Yes, this is certainly possible in a PBXT-specific replication
> publisher/subscriber module. I'm in the process of creating the default
> asynchronous publisher/subscriber module and we can use that as a model
> for doing a PBXT-specific one that takes advantage of the efficiencies
> you point out above.
>
> Cheers!
>
> jay
>
> _______________________________________________
> Mailing list: https://launchpad.net/~drizzle-discuss
> Post to     : [email protected]
> Unsubscribe : https://launchpad.net/~drizzle-discuss
> More help   : https://help.launchpad.net/ListHelp

--
Alexey Yurchenko,
Codership Oy, www.codership.com
Skype: alexey.yurchenko, Phone: +358-400-516-011
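For the sake of discussion, here is one way the publisher/subscriber split could be read (a sketch under my own assumptions, not the actual Drizzle interface Jay is building): the "publisher" is whatever produces committed-transaction events (the engine, or the server on its behalf), and a "subscriber" is any entity interested in those events -- an asynchronous replicator, an audit log, or an engine-specific streamer. On a single machine these would just be modules wired together through an interface:

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Hypothetical event emitted after a transaction commits.
struct TransactionEvent {
  uint64_t trx_id;
  std::string payload;  // serialized operation list
};

// A subscriber is any entity interested in storage engine events.
class Subscriber {
 public:
  virtual ~Subscriber() {}
  virtual void on_commit(const TransactionEvent &ev) = 0;
};

// The publisher fans committed-transaction events out to all subscribers;
// it knows nothing about cluster topology, only about its subscribers.
class Publisher {
 public:
  void subscribe(Subscriber *s) { subs_.push_back(s); }
  void publish(const TransactionEvent &ev) {
    for (size_t i = 0; i < subs_.size(); ++i)
      subs_[i]->on_commit(ev);
  }

 private:
  std::vector<Subscriber *> subs_;
};

// Example subscriber: a replicator stub that just counts applied events.
class CountingReplicator : public Subscriber {
 public:
  CountingReplicator() : applied(0) {}
  virtual void on_commit(const TransactionEvent &) { ++applied; }
  int applied;
};
```

Cross-machine communication would then live inside a particular subscriber (e.g. one that ships events to slaves), which would match the idea of a preconfigured replicator handle hiding topology from the engine.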

