[ https://issues.apache.org/jira/browse/ZOOKEEPER-1508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13413270#comment-13413270 ]
Flavio Junqueira commented on ZOOKEEPER-1508:
---------------------------------------------
bq. When you say it diverges, I'm assuming your concern is more at the code
level than the functional level
Exactly, code level. It would touch the Zab implementation, and I'm concerned
about introducing more bugs and extra complexity. This proposal focuses on
replication across disks rather than replication across processes, as in the
current code, where each process has its own disks for durability.
I need to think about it more carefully, but I suspect we might need a
different replication protocol.
bq. Is this what you meant by subprojects?
A subproject is an independent project, with its own repository, community,
etc. In this case it could be a subproject under the zookeeper umbrella. Check
the bookkeeper project for an example.
> Reliable standalone mode through redundant databases
> ----------------------------------------------------
>
> Key: ZOOKEEPER-1508
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-1508
> Project: ZooKeeper
> Issue Type: New Feature
> Environment: Single server with multiple disks, or a two-node cluster
> with multiple shared disks
> Reporter: Bill Bridge
>
> Currently ZooKeeper requires 3 servers to provide both reliability and
> availability. This is fine for large internet-scale clusters, but there are
> lots of two-node clusters that could benefit from ZooKeeper. There are also
> single-server use cases where it is highly desirable for ZooKeeper to
> survive a disk failure, even though availability is not as important.
> This feature would allow configuring multiple destinations for logs and
> snapshots. A transaction commits when a majority of the log writes complete
> successfully. If a log gets a write error, it is taken offline until an
> administrator brings it back online or replaces it with a new destination.
> ZooKeeper keeps running as long as a quorum of the disks can still be
> written (a sketch of this quorum write follows the quoted description).
> High availability can be provided with a two-node cluster. When the
> ZooKeeper node dies, the disks are switched to the surviving node and a new
> ZooKeeper instance is started there. Switchover is faster if an observer is
> already running on that node (see the configuration sketch below).
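To make the proposed commit rule concrete, here is a minimal sketch of a majority write across several transaction log directories. The QuorumTxnLog class, its append method, and the offline bookkeeping are hypothetical illustrations of this proposal, not code from the ZooKeeper tree; only the log.<zxid-in-hex> file naming mirrors what ZooKeeper actually does.

{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.ArrayList;
import java.util.List;

public class QuorumTxnLog {
    // One entry per configured log destination; offline[i] marks a disk
    // that has been taken out of service after a write error.
    private final List<Path> logDirs = new ArrayList<>();
    private final boolean[] offline;

    public QuorumTxnLog(List<Path> dirs) {
        logDirs.addAll(dirs);
        offline = new boolean[logDirs.size()];
    }

    // Appends one serialized transaction to every online log. The append
    // commits only if a majority of all configured destinations acknowledge
    // the write; a destination that fails is taken offline until an
    // administrator intervenes, as the issue description proposes.
    public boolean append(long zxid, byte[] txn) {
        int acks = 0;
        for (int i = 0; i < logDirs.size(); i++) {
            if (offline[i]) {
                continue;
            }
            try {
                Path log = logDirs.get(i).resolve("log." + Long.toHexString(zxid));
                Files.write(log, txn, StandardOpenOption.CREATE,
                        StandardOpenOption.APPEND, StandardOpenOption.SYNC);
                acks++;
            } catch (IOException e) {
                // Write error: take this disk offline until an administrator
                // brings it back or replaces it.
                offline[i] = true;
            }
        }
        // Majority is counted over all configured destinations, not just
        // the online ones, so losing a quorum of disks halts commits.
        return acks > logDirs.size() / 2;
    }
}
{code}

With three configured directories, for instance, an append commits only when at least two of the writes succeed, so it keeps committing with one disk offline but not with two.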
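Similarly, a sketch of what the server configuration could look like. The dataLogDir.N keys are purely hypothetical and do not exist in ZooKeeper today; only the observer settings (peerType and the :observer suffix) are existing configuration, available since 3.3.0.

{code}
# Hypothetical: multiple transaction log destinations for the disk quorum.
dataDir=/disk1/zookeeper/data
dataLogDir.1=/disk1/zookeeper/txnlog
dataLogDir.2=/disk2/zookeeper/txnlog
dataLogDir.3=/disk3/zookeeper/txnlog

# Existing ZooKeeper configuration: run an observer on the standby node so
# the server started there on failover already has a warm copy of the state.
# In the standby's config file:
peerType=observer
# And in both servers' config files:
server.1=nodeA:2888:3888
server.2=nodeB:2888:3888:observer
{code}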