Ted, Ketan, Thanks for the input!
I've put rough sketches of both approaches below the quoted thread.

jason

On 7/1/11 10:01 AM, "Ted Dunning" <[email protected]> wrote:

> By nature, you can't go back in time with ZK. Some clients may be at an earlier point in time, but each client will see a strictly ordered (and not very old) version of the world.
>
> If you want to preserve old copies, you need to do that explicitly. One thing that would make this easier is the multi command (self-serving plug here). This was committed to trunk yesterday.
>
> What this would let you do is atomically create a saved version of the config, delete the oldest version, and update the live version.
>
> If you don't mind a slightly more complex client protocol, you can just add new configs in sequentially named files and then update a "currentConfig" znode that has the name of the current configuration. If you test the zkVersion on the currentConfig znode before starting the process, you can even be safe in the presence of conflicting simultaneous updates. Reaping really old versions can happen safely after the new config is in place.
>
> Bottom line is that ZK can support your need, but you have to explicitly perform the actions that give you the semantics you want.
>
> On Fri, Jul 1, 2011 at 8:21 AM, Burgess, Jason <[email protected]> wrote:
>
>> Greetings!
>>
>> After doing several searches on this list, as well as going through the documentation and code, I've come to the conclusion that it is not possible to access the data associated with a specific version number for a particular znode. Am I correct?
>>
>> For those wondering, here is my use case:
>>
>> We are looking to store configuration in the znode (usually less than 10KB) and use Watches to notify the entities being configured that a new configuration is available. However, we may run into an instance where a faulty configuration is stored on the znode, and we would like to go back through the previous versions until we reach a valid configuration, or we have no more configs.
>>
>> Thanks!
>>
>> jason
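Here is a rough sketch of the multi-based rotation Ted describes, using the ZooKeeper 3.4+ Java client. The paths (/config/live, /config/history) and the archive naming are just assumptions for illustration, not anything defined in ZK:

import org.apache.zookeeper.*;
import org.apache.zookeeper.data.Stat;
import java.util.Arrays;
import java.util.List;

public class ConfigRotator {
    // Hypothetical layout: live config plus a history directory of saved copies.
    private static final String LIVE = "/config/live";
    private static final String HISTORY = "/config/history";

    /**
     * Atomically archive the current live config, drop the oldest archived
     * copy, and install the new config. Either all three ops apply or none do.
     * oldestArchive must exist, otherwise the whole multi fails.
     */
    public static void rotate(ZooKeeper zk, byte[] newConfig,
                              String oldestArchive, String newArchive) throws Exception {
        Stat liveStat = new Stat();
        byte[] current = zk.getData(LIVE, false, liveStat);

        List<Op> ops = Arrays.asList(
            Op.create(HISTORY + "/" + newArchive, current,
                      ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT),
            Op.delete(HISTORY + "/" + oldestArchive, -1),
            // Conditional on the version we just read, so a concurrent
            // update of the live config makes the whole batch fail.
            Op.setData(LIVE, newConfig, liveStat.getVersion()));

        zk.multi(ops);  // throws KeeperException if any op fails; nothing is applied
    }
}

If any of the three operations fails, multi() throws and none of them take effect, so the archive, the delete, and the live update stay consistent with each other.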

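And a sketch of the pointer-based protocol (sequentially named config znodes plus a currentConfig znode), again with made-up paths and assuming the parent znodes already exist. The zkVersion check on the pointer is what protects against conflicting simultaneous publishers; rolling back to an older config would be just another conditional setData on the pointer:

import org.apache.zookeeper.*;
import org.apache.zookeeper.data.Stat;

public class ConfigPublisher {
    // Hypothetical layout; /configs and /currentConfig are assumed to exist.
    private static final String CONFIGS = "/configs";        // parent of versioned configs
    private static final String POINTER = "/currentConfig";  // holds the name of the live config

    /**
     * Publish a new config under a sequentially named znode, then repoint
     * /currentConfig at it, using the pointer's zkVersion to detect a
     * concurrent publisher. Old config znodes can be reaped afterwards.
     */
    public static String publish(ZooKeeper zk, byte[] newConfig) throws Exception {
        // Read the pointer first; its version guards the whole update.
        Stat pointerStat = new Stat();
        zk.getData(POINTER, false, pointerStat);

        // Sequential create yields e.g. /configs/config-0000000042.
        String path = zk.create(CONFIGS + "/config-", newConfig,
                                ZooDefs.Ids.OPEN_ACL_UNSAFE,
                                CreateMode.PERSISTENT_SEQUENTIAL);

        // Conditional setData: throws KeeperException.BadVersionException if
        // someone else moved the pointer since we read it.
        String name = path.substring(path.lastIndexOf('/') + 1);
        zk.setData(POINTER, name.getBytes("UTF-8"), pointerStat.getVersion());
        return path;
    }
}

Reaping old /configs/config-* children can happen lazily after the pointer moves, as Ted notes. A watcher that hits a faulty config can walk the children backwards and repoint currentConfig the same conditional way until it finds a valid one.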