Specify wal dirs as a ':'-separated list (like the PATH env var). Each directory is then written to in parallel, while only the first directory is used for reading. The point of such a change is to allow the data to exist on different disks without having to use something like DRBD. Then, in the event of a disk failure, the data can be restored, though it would be a manual process.
Discussion on how to implement replication in the protocol. I like the idea of a peer-to-peer replication scheme. At a high level, what I am thinking is that you define a list of peers to push/pull replication events to/from. (I am still thinking through whether we want to allow more than one peer.) This way multiple servers can have the same job. Each server will allow the job to be reserved, but will quickly replicate that it has been reserved.

The inconsistency that can arise is that a single job could be reserved by two different clients at the same time. This condition can already exist: if the TTR runs out, the job is released while the client that originally reserved it continues to run the job anyway. So we could resolve this issue the same way we already do. We can also prevent infinite loops in replication because we can uniquely identify a job by the server id (currently in progress) and its own job id.

So, to take a crack at the protocol, it could be something along the lines of:

replicate put <jobrec>\r\n<data>\r\n
replicate reserve <jobid> <serverid>\r\n
replicate release <jobid> <serverid> <pri> <delay>\r\n

and so on for each of the job-specific commands. Just some ideas to get the discussion going.

On Tuesday, December 4, 2012 4:25:50 PM UTC-7, Keith Rarick wrote:
> On Tue, Dec 4, 2012 at 8:39 AM, Nathaniel Cook
> <[email protected]> wrote:
> > So once again I am a volunteer to work on this. Same question: has there
> > been any progress? I have already forked the github repo and will be
> > submitting pull requests soon.
>
> No progress that I know of.
>
> > I think there is a simple change that can be made to move this in the
> > right direction. I propose that we add support for multiple binlog dirs
> > so the binlogs can exist somewhere on a shared file system for disaster
> > recovery. Then we can work on replicating the binlog itself.
>
> That doesn't sound simple. Can you give a more concrete description?
> What would the command-line flags look like? What files would be
> created in various scenarios, when would they be written, when would
> they be read, and by whom?

--
You received this message because you are subscribed to the Google Groups "beanstalk-talk" group. To view this discussion on the web visit https://groups.google.com/d/msg/beanstalk-talk/-/1bY5UvxTMfgJ.
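As a follow-up to the protocol sketch above, here is how a peer might encode and parse the proposed "replicate <op> <jobid> <serverid> ..." commands. The command names follow the proposal; the Go types and functions are hypothetical, and "replicate put", which carries a job record plus a data body on a second line, would need a two-part frame like the existing put command and is not covered here:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// ReplEvent is a hypothetical replication event matching the proposed
// "replicate <op> ..." commands; it is not part of the real protocol.
type ReplEvent struct {
	Op       string // "reserve", "release", ...
	JobID    uint64
	ServerID string   // lets peers identify (serverid, jobid) pairs and drop loops
	Args     []string // op-specific trailing args, e.g. <pri> <delay> for release
}

// Encode renders the event as one protocol line, e.g.
// "replicate release 42 srv-a 1024 0" plus the CRLF terminator.
func (e ReplEvent) Encode() string {
	parts := append([]string{"replicate", e.Op,
		strconv.FormatUint(e.JobID, 10), e.ServerID}, e.Args...)
	return strings.Join(parts, " ") + "\r\n"
}

// ParseRepl decodes one "replicate ..." line back into an event.
func ParseRepl(line string) (ReplEvent, error) {
	fields := strings.Fields(strings.TrimSuffix(line, "\r\n"))
	if len(fields) < 4 || fields[0] != "replicate" {
		return ReplEvent{}, fmt.Errorf("not a replicate command: %q", line)
	}
	id, err := strconv.ParseUint(fields[2], 10, 64)
	if err != nil {
		return ReplEvent{}, fmt.Errorf("bad job id: %v", err)
	}
	return ReplEvent{Op: fields[1], JobID: id,
		ServerID: fields[3], Args: fields[4:]}, nil
}

func main() {
	ev := ReplEvent{Op: "release", JobID: 42, ServerID: "srv-a",
		Args: []string{"1024", "0"}}
	line := ev.Encode()
	fmt.Printf("%q\n", line)
	back, _ := ParseRepl(line)
	fmt.Println(back.Op, back.JobID, back.ServerID)
}
```

Because every event carries the originating server id, a peer that receives its own (serverid, jobid) pair back can discard the event, which is the loop-prevention idea described above.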
