Re: [Gluster-devel] pluggability of some aspects in afr/nsr/ec
On 10/29/2015 06:11 PM, Jeff Darcy wrote:
>> I want to understand if there is a possibility of exposing these as different modules that we can mix and match, using options.
>
> It's not only possible, but it's easier than you might think. If an option is set (cluster.nsr IIRC) then we replace cluster/afr with cluster/nsr-client and then add some translators to the server-side stack. A year ago that was just one nsr-server translator. The journaling part has already been split out, and I plan to do the same with the leader-election parts (making them usable for server-side AFR or EC) as well. It shouldn't be hard to control the addition and removal of these and related translators (e.g. index) with multiple options instead of just one. The biggest stumbling block I've actually hit when trying to do this with AFR on the server side is the *tests*, many of which can't handle delays on the client side while the server side elects leaders and cross-connects peers. That's all solvable; it just would have taken more time than I had available for the experiment.

Precisely. I think switching is not that difficult once we make sure healing is complete. Switching is a rare operation IMO, so we can probably ask users to stop the volume, choose the new option value, and start it again. That is simpler than migrating between volumes, where you would probably have to copy the data.

> The two sets of metadata are *entirely* disjoint, which puts us in a good position compared e.g. to DHT/tiering, which had overlaps. As long as the bricks are "clean", switching back and forth should be simple. In fact I expect to do this a lot when we get to characterizing performance etc.

Good to hear this. Choose 1b and 2b and it becomes NSR; 1a and 2a becomes AFR/EC. In the future, if we come up with better metadata journals/stores, it should be easy to plug them in, is what I'm thinking.
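The stop/choose-new-value/start flow described above might look like the sketch below. The volume name "myvol" is made up, and "cluster.nsr" is only the option name as recalled ("IIRC") in the thread, so treat both as assumptions:

```
# Hedged sketch: switch an existing volume from AFR to NSR, assuming
# healing is complete and "cluster.nsr" is the actual option name.
gluster volume stop myvol
gluster volume set myvol cluster.nsr on   # choose the new value
gluster volume start myvol
```

Switching back would be the same sequence with the option turned off, relying on the two sets of metadata being entirely disjoint.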
>> The idea I have is that, based on the workload, users should be able to decide which pair of synchronization/metadata works best for them (or we can also recommend one based on our tests). Wanted to seek your inputs.
>
> Absolutely. As I'm sure you're tired of hearing, I believe NSR will outperform AFR by a significant margin for most workloads and configurations. I wouldn't be the project's initiator/leader if I didn't believe that, but I'm OK if others disagree. We'll find out eventually. ;) More importantly, "most" is still not "all". Even by my own reckoning, there are cases in which AFR will perform better or be preferable for other reasons. EC's durability and space-efficiency advantages make an even stronger case for preserving both kinds of data paths and metadata arrangements. That's precisely why I want to make the journaling and leader-election parts more generic.

All the best for your endeavors! Let's make users happy.

Pranith

_______________________________________________
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel
Re: [Gluster-devel] RFC: Gluster.Next: Where and how DHT2 work/code would be hosted
On 10/29/2015 10:26 PM, Jeff Darcy wrote:
> On October 29, 2015 at 8:42:50 PM, Shyam (srang...@redhat.com) wrote:
>> I assume this is about infra changes (as the first 2 points are, for some reason, squashed in my reader). I think what you state is that infra (or other non-experimental) code impact due to changes by experimental/in-progress code should be dealt with clearly and carefully, so as not to impact regular functionality. In which case I *agree* and do not mean otherwise. I think this sort of goes back to what Niels commented on my squashing a .patch, and the proposal in this thread to use #define EXPERIMENTAL (or such methods).
>
> Sort of, although I would prefer that the distinction be run-time instead of compile-time whenever possible.
>
>>> 3. All experimental functionality, whether in its own translator or otherwise, should be controlled by an option which is off by default.
>>
>> Ah! I think this is something akin to the "#define EXPERIMENTAL" suggestion and the non-experimental code impact, I guess, right?
>
> I think so. Also, since I wasn't clear before, I think there should be *separate* options per feature, not one blanket "experimental" option.

Agreed.

> For example, if you want to play with DHT2 you'd need to:
>
> (1) Install the gluster-experimental RPM
> (2) Tweak the glusterd script or volfile to allow experimental features
> (3) Set cluster.dht2 (or whatever) on your volume
>
> Note the absence of steps to download, hand-edit, or build anything yourself. I think that's key: no risk if you don't go out of your way to enable experimental code, but you don't have to be a full-time Gluster developer to walk on the wild side.

Agreed. One more question: should we package all experimental code in the future, or only things that reach a certain level of experimental maturity? As an example, say DHT2 has only 3 FOPs implemented when we do *some release*: should we package it, or leave it behind? Asking as I am unsure of the course; maybe it is a consideration for the future.
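The three steps Jeff lists could look roughly like the sketch below. The package name "gluster-experimental" and the "cluster.dht2" option are the thread's tentative names, the volume name "testvol" is made up, and the exact mechanism for step 2 was still undecided, so all of these are assumptions:

```
# Hedged sketch of enabling an experimental feature, per the steps above.
yum install gluster-experimental            # (1) install the experimental RPM
# (2) allow experimental features; the thread leaves the exact glusterd
#     tweak open, so this step is only a placeholder here
gluster volume set testvol cluster.dht2 on  # (3) enable the feature per-volume
```

Nothing here requires downloading or building source, which is the point Jeff makes: experimental code stays inert unless explicitly installed and enabled.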