Re: [Gluster-devel] NetBSD regression recovered (with patches)
On Mon, Mar 30, 2015 at 10:10:20AM +0530, Venky Shankar wrote:
> > Review done, awaiting clarification on a minor question from Venky
> > (changelog maintainer).
> Have a minor comment. Rest is all good.

I addressed the minor comment. Please bring on the +1 review! :)

There is no need for a release-3.6 backport, right?

-- 
Emmanuel Dreyfus
m...@netbsd.org

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel
Re: [Gluster-devel] Hangouts for 3.7 features
On Mon, Mar 30, 2015 at 11:23 AM, Venky Shankar yknev.shan...@gmail.com wrote:
> On Fri, Mar 27, 2015 at 4:31 PM, Vijay Bellur vbel...@redhat.com wrote:
> > Hi All,
> >
> > As we inch closer to 3.7.0, I think it might be a good idea to talk about
> > new/improved features in 3.7 and demo the features to our users over
> > Google Hangout sessions. With that in mind, I have created an etherpad
> > with the list of prominent features in 3.7 at [1]. If you are a feature
> > owner and are interested in doing a Hangout session for the community,
> > can you please add your name and preferred time to the etherpad? Please
> > feel free to update the etherpad if I have missed adding your feature to
> > the list :).
> >
> > Given the number of features that we have, we could look at doing two
> > hangouts per week - possibly on Tuesdays and Thursdays from the coming
> > week. What do you folks think?
>
> I've added BitRot Detection for Thursday (2nd April). Will send out a
> Hangout invite soon.

Ah! I realize now that 04/02 isn't available and 04/07 is taken by Tiering. I'll take 04/09.
Re: [Gluster-devel] NetBSD regression recovered (with patches)
On Mon, Mar 30, 2015 at 12:36 PM, Emmanuel Dreyfus m...@netbsd.org wrote:
> On Mon, Mar 30, 2015 at 10:10:20AM +0530, Venky Shankar wrote:
> > > Review done, awaiting clarification on a minor question from Venky
> > > (changelog maintainer).
> > Have a minor comment. Rest is all good.
>
> I addressed the minor comment. Please bring on the +1 review! :)
>
> There is no need for a release-3.6 backport, right?

Correct.
Re: [Gluster-devel] About split-brain-resolution.t
On 03/30/2015 06:01 PM, Emmanuel Dreyfus wrote:
> On Mon, Mar 30, 2015 at 05:44:23PM +0530, Pranith Kumar Karampuri wrote:
> > Problem here is that 'inode_forget' is coming even before it gets to
> > inspect the file. We initially thought we should 'ref' the inode when
> > the user specifies the choice and 'unref' it at the time of 'finalize'
> > or 'abort' of the operation. But that may lead to unnecessary leaks when
> > the user forgets to either finalize or abort the operation. One way to
> > get around it is to ref the inode for some pre-determined time when the
> > 'choice' is given.
>
> That suggests the design is not finalized and the implementation is likely
> to have unwanted behaviors. IMO the test should be retired until the design
> and implementation are complete.

I will work with Anuradha tomorrow on this one and either send a patch to remove the .t file or send the fix which makes things right.

Pranith
Re: [Gluster-devel] About split-brain-resolution.t
On 03/30/2015 06:34 PM, Emmanuel Dreyfus wrote:
> Pranith Kumar Karampuri pkara...@redhat.com wrote:
> > > Since spb_choice is not saved as an attribute for the file on the
> > > bricks, it cannot be recovered when the context is reallocated. Either
> > > that save feature has been forgotten, or going to afr_destroy() here
> > > is a bug. Here is the backtrace leading there:
> >
> > This is a known issue :-(. I will need to talk to Anuradha once about
> > this issue. She is not in today. Will let you know about the decision.
>
> It seems the problem arises because a thread quits and decides to clean up
> stuff. Do we have an idea what this thread is? For the test to pass we need
> to keep the thread alive. Of course that works around a real problem. Why
> don't we immediately clear the pending xattrs when
> replica.split-brain-choice is set? That would clear the split-brain state.

This is how the feature is supposed to work: https://github.com/gluster/glusterfs/blob/master/doc/features/heal-info-and-split-brain-resolution.md

Basically the choice is given to inspect the 'data' of the file. Then one can finalize the choice, which will clear the pending xattrs after resolving the split-brain. Problem here is that 'inode_forget' is coming even before it gets to inspect the file. We initially thought we should 'ref' the inode when the user specifies the choice and 'unref' it at the time of 'finalize' or 'abort' of the operation. But that may lead to unnecessary leaks when the user forgets to either finalize or abort the operation. One way to get around it is to ref the inode for some pre-determined time when the 'choice' is given.

Pranith
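[Editor's sketch] The "ref for some pre-determined time" idea above can be illustrated with a small timer-based reference holder. This is a hypothetical Python sketch, not the AFR C code: the ref taken when the user sets a choice is dropped either by an explicit finalize/abort or, if the user forgets, by a timer, so it cannot leak forever:

```python
import threading

class TimedRef:
    """Hold a reference for a bounded time: released by an explicit
    finalize/abort, or by a timer if the user forgets to do either."""

    def __init__(self, timeout_secs):
        self.refcount = 0
        self.timeout_secs = timeout_secs
        self.lock = threading.Lock()
        self.timer = None

    def choose(self):
        # User picked a split-brain choice: take a ref and arm the timer.
        with self.lock:
            self.refcount += 1
            self.timer = threading.Timer(self.timeout_secs, self._expire)
            self.timer.start()

    def finalize_or_abort(self):
        # Explicit resolution: cancel the pending timer and drop the ref.
        with self.lock:
            if self.timer is not None:
                self.timer.cancel()
                self.timer = None
                self.refcount -= 1

    def _expire(self):
        # Timer fired first: drop the ref so the inode can be forgotten.
        with self.lock:
            if self.timer is not None:
                self.timer = None
                self.refcount -= 1
```

Either path releases the ref exactly once (the `self.timer is not None` check under the lock resolves the race between cancellation and expiry), which addresses the leak Pranith describes while still letting `inode_forget` proceed eventually.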
Re: [Gluster-devel] Spurious Failures in regression runs
On 30 Mar 2015, at 18:54, Vijay Bellur vbel...@redhat.com wrote:
> Hi All,
>
> We are attempting to capture all known spurious regression failures from
> the Jenkins instance on build.gluster.org at [1]. The issues listed in the
> etherpad impede our patch merging workflow and need to be sorted out
> before we branch release-3.7. If you happen to be the owner of one or more
> issues in the etherpad, can you please look into the failures and have
> them addressed soon?

To help surface more regression failures, we ran 20 new VMs in Rackspace, each running a full regression test against the head of the master branch:

* Two hung regression tests on tests/bugs/posix/bug-1113960.t
  * Still hung, in case anyone wants to check them out:
    * 162.242.167.96
    * 162.242.167.132
  * Both allow remote root login, using our Jenkins slave password as the
    root password
* 2 x failures on ./tests/basic/afr/sparse-file-self-heal.t
  * Failed tests: 1-6, 11, 20-30, 33-34, 36, 41, 50-61, 64
  * Added to etherpad
* 1 x failure on ./tests/bugs/disperse/bug-1187474.t
  * Failed tests: 11-12
  * Added to etherpad
* 1 x failure on ./tests/basic/uss.t
  * Failed test: 153
  * Already on etherpad

Looks like our general failure rate is improving. :) The hangs are a bit worrying though. :(

Regards and best wishes,

Justin Clift

-- 
GlusterFS - http://www.gluster.org
An open source, distributed file system scaling to several petabytes,
and handling thousands of clients.

My personal twitter: twitter.com/realjustinclift
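[Editor's note] The "Failed tests" lines above use the compact range notation that the test harness prints. A small helper (illustrative only, not part of the Gluster tooling) shows how such a spec expands into individual test numbers:

```python
def expand_failed_tests(spec):
    """Expand a 'Failed tests' range string such as '1-6, 11, 20-30'
    into the full list of individual test numbers."""
    numbers = []
    for part in spec.split(","):
        part = part.strip()
        if "-" in part:
            lo, hi = part.split("-")
            numbers.extend(range(int(lo), int(hi) + 1))
        else:
            numbers.append(int(part))
    return numbers

# Example: the bug-1187474.t report above
print(expand_failed_tests("11-12"))  # -> [11, 12]
```

So "1-6, 11" in the sparse-file-self-heal.t report means tests 1 through 6 plus test 11 all failed in that run.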
Re: [Gluster-devel] New features and their location in packages
On 03/26/2015 02:45 AM, Niels de Vos wrote:
> With all the new features merged, we need to know on which side of the
> system the new xlators, libraries and scripts are used. There are always
> questions about reducing the installation size on client systems, so
> anything that is not strictly needed client-side should not be in the
> client packages. I would like to hear from all feature owners which files,
> libraries, scripts, docs or other bits are required for clients to operate.

Hi Niels,

For AFR arbiter (BZ 1199985), the 'features/arbiter' xlator will be on the server side. The client bits that are needed to make it work will be part of the AFR code.

Thanks,
Ravi

> For example, here is what tiering does:
> - client-side: cluster/tiering xlator
> - server-side: libgfdb (with a sqlite dependency)
>
> By default, any library is included in the glusterfs-libs RPM. If the
> library is only useful on a system with glusterd installed, it should move
> to the glusterfs-server RPM. Clients include fuse and libgfapi; the common
> package for client-side bits (and shared client/server bits) is
> 'glusterfs'.
>
> There is no need for you to post patches for moving files around in the
> RPMs; that is something I can do in one go. Just let me know which files
> are needed where.
>
> Thanks!
> Niels

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel
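[Editor's sketch] Niels' packaging rule can be expressed as a small lookup. This is an illustrative Python sketch, not real packaging logic; the component-to-side table just restates the examples given in the thread:

```python
# Packaging rule from the mail: libraries default to glusterfs-libs
# unless they are only useful alongside glusterd (then glusterfs-server);
# non-library client-side/shared bits go in the common 'glusterfs'
# package. The component->side entries below come from the thread.
COMPONENT_SIDE = {
    "cluster/tiering": "client",   # tiering xlator, client-side
    "libgfdb": "server",           # sqlite dependency, server-side
    "features/arbiter": "server",  # arbiter xlator, server-side
}

def rpm_for(component, is_library=False):
    side = COMPONENT_SIDE[component]
    if side == "server":
        return "glusterfs-server"
    return "glusterfs-libs" if is_library else "glusterfs"

print(rpm_for("cluster/tiering"))           # -> glusterfs
print(rpm_for("libgfdb", is_library=True))  # -> glusterfs-server
```

The point of the rule is the one Niels makes: nothing lands in a client-installed package unless clients actually need it.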
Re: [Gluster-devel] Congrats, your patch got MERGED. But you're not finished yet!
I've been personally following the same model while working on upstream bugs. IMO this needs no additional effort, just discipline. Because bug states are not transitioned, our upstream BZ statistics don't show the exact state we are in now. I am pretty sure the numbers will look really promising if we follow this practice.

~Atin

On 03/30/2015 01:50 PM, Niels de Vos wrote:
> Hi all,
>
> It's great to see that there are so many changes getting merged before the
> 3.7 release. Unfortunately it is very difficult to keep track of the
> features that have all their patches merged, and the features that still
> have some patches that need to be reviewed.
>
> Developers should be aware of how and when the status of a bug needs to
> change. When patches get sent to Gerrit, the status of the associated bug
> should be POST. Once all patches for that bug have been merged, the
> developer (with the maintainers as weak backup) should move the status of
> the bug to MODIFIED.
>
> The complete tree of 3.7 blocker bugs is here, and only very few bugs are
> in status MODIFIED:
> https://bugzilla.redhat.com/showdependencytree.cgi?id=glusterfs-3.7.0
>
> If you have features/bugs that still need work for the 3.7 release, please
> add glusterfs-3.7.0 to the "Blocks" field of your bug. This will
> automatically add the bug to the above link.
>
> At the moment there are many bugs that have their patches merged, but do
> not have the correct status. Developers are requested to review their
> recent patches and verify the status of the associated bugs. This Gerrit
> search should help with finding your patches:
> http://review.gluster.org/#/q/status:merged+owner:self,n,z
>
> We also have http://bugs.cloud.gluster.org/ which shows all the open bugs
> and the status of their associated patches (review status). The column
> shows three letters for the changes that have been posted: M = Merged,
> A = Abandoned, N = New. The most helpful options there are the 'In
> Progress, but all Merged' and 'Ready For Review' selections. [The site
> updates nightly; data is not real-time.]
>
> Please have a look at the status of the bugs where you filed changes, and
> update the status appropriately.
>
> Thanks,
> Niels and all other people checking the status of bugs.

-- 
~Atin
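[Editor's sketch] The bug-status rule Niels describes (POST while patches are in Gerrit, MODIFIED once everything is merged) can be written down as a tiny function. A hypothetical Python sketch, not part of any Bugzilla tooling:

```python
def bug_status(patch_states):
    """Derive the Bugzilla status from the Gerrit state of a bug's
    patches, per the workflow in the mail: no patches yet -> NEW,
    any patch still open -> POST, everything merged -> MODIFIED.
    Abandoned patches are ignored."""
    active = [s for s in patch_states if s != "abandoned"]
    if not active:
        return "NEW"
    if all(s == "merged" for s in active):
        return "MODIFIED"
    return "POST"

print(bug_status(["merged", "open"]))    # -> POST
print(bug_status(["merged", "merged"]))  # -> MODIFIED
```

The 'In Progress, but all Merged' view on bugs.cloud.gluster.org is essentially listing bugs where this function would return MODIFIED but the bug is still in an earlier state.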
Re: [Gluster-devel] Hangouts for 3.7 features
> I did reserve 04/02 for you and struck out 04/02 in the etherpad :).
> You can still go ahead with 04/02.

Works for me. Thanks!

> Thanks,
> Vijay
Re: [Gluster-devel] Hangouts for 3.7 features
On 03/30/2015 01:57 PM, Venky Shankar wrote:
> On Mon, Mar 30, 2015 at 11:23 AM, Venky Shankar yknev.shan...@gmail.com wrote:
> > On Fri, Mar 27, 2015 at 4:31 PM, Vijay Bellur vbel...@redhat.com wrote:
> > > Hi All,
> > >
> > > As we inch closer to 3.7.0, I think it might be a good idea to talk
> > > about new/improved features in 3.7 and demo the features to our users
> > > over Google Hangout sessions. With that in mind, I have created an
> > > etherpad with the list of prominent features in 3.7 at [1]. If you are
> > > a feature owner and are interested in doing a Hangout session for the
> > > community, can you please add your name and preferred time to the
> > > etherpad? Please feel free to update the etherpad if I have missed
> > > adding your feature to the list :).
> > >
> > > Given the number of features that we have, we could look at doing two
> > > hangouts per week - possibly on Tuesdays and Thursdays from the coming
> > > week. What do you folks think?
> >
> > I've added BitRot Detection for Thursday (2nd April). Will send out a
> > Hangout invite soon.
>
> Ah! I realize now that 04/02 isn't available and 04/07 is taken by
> Tiering. I'll take 04/09.

I did reserve 04/02 for you and struck out 04/02 in the etherpad :). You can still go ahead with 04/02.

Thanks,
Vijay