Re: [Gluster-devel] [Gluster-users] How about replacing old versions in Bugzilla by "deprecated"
On 11/09/2014 05:23 PM, Niels de Vos wrote:
> Hi, we've been working on triaging bugs against unsupported versions. I
> think we would do our community users a favour if we have a "deprecated"
> or "old unsupported" version in Bugzilla. There should be no need for
> users to select old versions that we do not update anymore.
>
> There are still many bugs that need checking and some form of an update
> in them. Every bug that is <= 3.3 could be moved to the unsupported
> version to at least get a generic message out. http://goo.gl/IA7zaq
> contains a report of all open bugs/versions. Many of the old bugs are
> feature requests, and just need to be labeled as such and moved to the
> "mainline" version. Others could possibly get closed as a duplicate in
> case there is a fix in the current releases.
>
> Do you think that this is a good, or bad idea? Please let us know
> before, or during the next Bug Triage meeting on Tuesday 12:00 UTC:
> - https://public.pad.fsfe.org/p/gluster-bug-triage

Sounds like a good idea to me.

Thanks,
Vijay

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel
Re: [Gluster-devel] [Gluster-users] info heal-failed shown as gfid
Heal-failed can be for any reason that's not defined as split brain. The
only place I've been able to find clues is in the log files. Look at the
timestamp on the heal-failed output and match it to log entries in the
glustershd logs.

On November 7, 2014 6:49:36 PM PST, Peter Auyeung wrote:
>I have a node down while gfs still open for writing.
>
>Got tons of heal-failed on a replicated volume showing as gfid.
>
>Tried gfid-resolver and got the following:
>
># ./gfid-resolver.sh /brick02/gfs/ 88417c43-7d0f-4ec5-8fcd-f696617b5bc1
>88417c43-7d0f-4ec5-8fcd-f696617b5bc1==File:
>11/07/14 18:47:19 [ /root/scripts ]
>
>Any one has clue how to resolve and fix these heal-failed entries??
>
>
># gluster volume heal sas02 info heal-failed
>Gathering list of heal failed entries on volume sas02 has been
>successful
>
>Brick glusterprod001.bo.shopzilla.sea:/brick02/gfs
>Number of entries: 0
>
>Brick glusterprod002.bo.shopzilla.sea:/brick02/gfs
>Number of entries: 1024
>at                     path on brick
>---
>2014-11-08 01:22:23
>2014-11-08 01:22:23
>2014-11-08 01:22:23
>2014-11-08 01:22:23
>2014-11-08 01:22:23
>2014-11-08 01:22:23
>2014-11-08 01:22:23
>2014-11-08 01:22:23
>2014-11-08 01:22:23
>2014-11-08 01:22:23
>2014-11-08 01:22:23
>2014-11-08 01:22:23
>2014-11-08 01:22:23
>2014-11-08 01:22:23
>2014-11-08 01:22:23
>2014-11-08 01:22:23
>2014-11-08 01:22:23
>2014-11-08 01:22:23
>2014-11-08 01:22:23
>2014-11-08 01:22:23
>2014-11-08 01:22:23
>2014-11-08 01:22:23
>2014-11-08 01:22:23
>2014-11-08 01:22:23

--
Sent from my Android device with K-9 Mail. Please excuse my brevity.
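[Editor's note] The advice above — matching the heal-failed timestamps against glustershd log entries — can be sketched as a small shell helper. The function name is ours, and the default log location is an assumption (the usual glusterfs log directory); adjust both for the actual setup.

```shell
#!/bin/sh
# Hypothetical helper: print the self-heal daemon log lines that carry a
# given heal-failed timestamp. glustershd log entries start with a
# bracketed "[YYYY-MM-DD HH:MM:SS.usec]" stamp, so a fixed-string grep on
# the second is enough to pull the matching messages.
show_heal_context() {
    ts="$1"                                        # e.g. "2014-11-08 01:22:23"
    log="${2:-/var/log/glusterfs/glustershd.log}"  # assumed default location
    grep -F "$ts" "$log"
}
```

Running `show_heal_context '2014-11-08 01:22:23'` would then list the messages logged at that second, which usually name the affected gfid and the reason the heal failed.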
Re: [Gluster-devel] new spurious regressions
On 11/10/2014 10:58 AM, Emmanuel Dreyfus wrote:
> Pranith Kumar Karampuri wrote:
>>> Since I could spot nothing from it, I reconnected it. I will try by
>>> submitting a change with set -x for that script.
>> It was consistently happening with my change just on the regression
>> machine. So I added set -x and submitted the change. Let's see what
>> the results will be.
> I did that too and submitted a possible fix:
> http://review.gluster.com/9081

Cool, that's great. Thanks for taking a look into this one :-)

Pranith
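[Editor's note] For readers unfamiliar with the technique both submitters used: `set -x` makes the shell print each command to stderr (prefixed with `+`) before executing it, so the regression console log shows exactly which line of a test script diverged. A minimal illustration, unrelated to any specific Gluster test:

```shell
#!/bin/sh
# Trace only the suspect section: every command between set -x and
# set +x is echoed to stderr before it runs, while normal stdout output
# is unaffected.
set -x
a=1
b=$((a + 1))
set +x
echo "b=$b"
```

In a regression run the traced commands end up interleaved in the console log, which is usually enough to see where a spurious failure starts.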
Re: [Gluster-devel] [Gluster-users] info heal-failed shown as gfid
On 11/08/2014 08:19 AM, Peter Auyeung wrote:
> I have a node down while gfs still open for writing.
>
> Got tons of heal-failed on a replicated volume showing as gfid.
>
> Tried gfid-resolver and got the following:
>
> # ./gfid-resolver.sh /brick02/gfs/ 88417c43-7d0f-4ec5-8fcd-f696617b5bc1
> 88417c43-7d0f-4ec5-8fcd-f696617b5bc1==File:
> 11/07/14 18:47:19 [ /root/scripts ]
>
> Any one has clue how to resolve and fix these heal-failed entries??

The procedure detailed in [1] might be useful.

-Vijay

[1] https://github.com/gluster/glusterfs/blob/master/doc/debugging/split-brain.md
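[Editor's note] For context on what resolvers like gfid-resolver.sh do: each brick keeps a `.glusterfs` index in which every file is hard-linked under the first two byte-pairs of its GFID. A minimal sketch of that path computation (the function name is ours; verify the layout against your own brick):

```shell
#!/bin/sh
# Compute the .glusterfs backend path for a GFID on a given brick.
# Layout: <brick>/.glusterfs/<first 2 hex chars>/<next 2 hex chars>/<gfid>
gfid_to_backend_path() {
    brick="$1"
    gfid="$2"
    d1=$(printf '%s' "$gfid" | cut -c1-2)   # first directory level
    d2=$(printf '%s' "$gfid" | cut -c3-4)   # second directory level
    printf '%s/.glusterfs/%s/%s/%s\n' "$brick" "$d1" "$d2" "$gfid"
}

# The real filename can then be recovered by inode, e.g.:
#   find "$brick" -path "$brick/.glusterfs" -prune -o \
#        -samefile "$(gfid_to_backend_path "$brick" "$gfid")" -print
```

For the GFID in the mail above, `gfid_to_backend_path /brick02/gfs 88417c43-7d0f-4ec5-8fcd-f696617b5bc1` points at `/brick02/gfs/.glusterfs/88/41/88417c43-7d0f-4ec5-8fcd-f696617b5bc1`; if that backend link exists but `find -samefile` turns up no regular path, the gfid entry is likely stale.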
Re: [Gluster-devel] new spurious regressions
Pranith Kumar Karampuri wrote:
>> Since I could spot nothing from it, I reconnected it. I will try by
>> submitting a change with set -x for that script.
> It was consistently happening with my change just on the regression
> machine. So I added set -x and submitted the change. Let's see what the
> results will be.

I did that too and submitted a possible fix:
http://review.gluster.com/9081

--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org
Re: [Gluster-devel] new spurious regressions
On 11/10/2014 01:04 AM, Emmanuel Dreyfus wrote:
> Justin Clift wrote:
>> I've just used that page to disconnect slave25, so you're fine to
>> investigate there (same login credentials as before). Please
>> reconnect it when you're done. :)
> Since I could spot nothing from it, I reconnected it. I will try by
> submitting a change with set -x for that script.

It was consistently happening with my change just on the regression
machine. So I added set -x and submitted the change. Let's see what the
results will be.

Pranith
Re: [Gluster-devel] new spurious regressions
Justin Clift wrote:
> I've just used that page to disconnect slave25, so you're fine to
> investigate there (same login credentials as before). Please reconnect
> it when you're done. :)

Since I could spot nothing from it, I reconnected it. I will try by
submitting a change with set -x for that script.

--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org
Re: [Gluster-devel] Spurious regression of tests/basic/mgmt_v3-locks.t
On 11/08/2014 05:21 AM, Justin Clift wrote:
> On Wed, 05 Nov 2014 14:58:06 +0530 Atin Mukherjee wrote:
>
>> Can there be any cases where a glusterd instance may go down
>> unexpectedly without a crash?
>>
>> [1] http://build.gluster.org/job/rackspace-regression-2GB-triggered/2319/consoleFull
>
> Has anyone gotten back to you about this?

Not yet :(

~Atin

> + Justin
Re: [Gluster-devel] new spurious regressions
Emmanuel Dreyfus wrote:
> I have made a build for testing on slave25:~root/manu20141109
> But the problem with spurious failures is that they are spurious: it
> seems difficult to trigger.

I ran the test 198 times without an error :-/

--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org
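[Editor's note] Running a flaky test a couple of hundred times, as described above, is easy to script. A small sketch — the helper name is ours, and the `prove` invocation in the comment is only an example of how a Gluster regression test might be driven:

```shell
#!/bin/sh
# Run a command up to $1 times and stop at the first failure; handy for
# hunting spurious regressions that only bite once in many runs.
repeat_until_fail() {
    max="$1"; shift
    i=0
    while [ "$i" -lt "$max" ]; do
        i=$((i + 1))
        if ! "$@" >/dev/null 2>&1; then
            echo "failed on run $i"
            return 1
        fi
    done
    echo "passed all $max runs"
}

# Example: repeat_until_fail 200 prove tests/basic/mgmt_v3-locks.t
```

A run like Emmanuel's 198 clean iterations would simply print "passed all 198 runs", which is exactly the frustrating outcome with spurious failures.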
Re: [Gluster-devel] new spurious regressions
Justin Clift wrote:
> I've just used that page to disconnect slave25, so you're fine to
> investigate there (same login credentials as before). Please reconnect
> it when you're done. :)

I have made a build for testing on slave25:~root/manu20141109. But the
problem with spurious failures is that they are spurious: they seem
difficult to trigger.

--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org
[Gluster-devel] How about replacing old versions in Bugzilla by "deprecated"
Hi,

we've been working on triaging bugs against unsupported versions. I
think we would do our community users a favour if we have a "deprecated"
or "old unsupported" version in Bugzilla. There should be no need for
users to select old versions that we do not update anymore.

There are still many bugs that need checking and some form of an update
in them. Every bug that is <= 3.3 could be moved to the unsupported
version to at least get a generic message out. http://goo.gl/IA7zaq
contains a report of all open bugs/versions. Many of the old bugs are
feature requests, and just need to be labeled as such and moved to the
"mainline" version. Others could possibly get closed as a duplicate in
case there is a fix in the current releases.

Do you think that this is a good, or bad idea? Please let us know
before, or during the next Bug Triage meeting on Tuesday 12:00 UTC:
- https://public.pad.fsfe.org/p/gluster-bug-triage

Responses by email are welcome; additional visitors that voice their
opinion during the IRC meeting would be appreciated too.

Thanks,
Niels
Re: [Gluster-devel] new spurious regressions
On Sun, 9 Nov 2014 09:57:36 +0100 m...@netbsd.org (Emmanuel Dreyfus) wrote:
> Pranith Kumar Karampuri wrote:
>
>> The following tests keep failing spuriously nowadays. I CCed the
>> glusterd folks and the original author (Kritika) and the last change
>> author (Emmanuel).
>
> I did not suspect it was my change. The strange bit is that the same
> wrappers in other tests do not produce anything spurious. And of
> course it is never spurious at mine.
>
> Do we have a rackspace node where it is possible to investigate?

Yep. It's pretty easy to disconnect a node in Jenkins so it won't run
tasks, e.g.:

http://build.gluster.org/computer/slave25.cloud.gluster.org/markOffline

I've just used that page to disconnect slave25, so you're fine to
investigate there (same login credentials as before). Please reconnect
it when you're done. :)

+ Justin

--
GlusterFS - http://www.gluster.org
An open source, distributed file system scaling to several petabytes,
and handling thousands of clients.

My personal twitter: twitter.com/realjustinclift
Re: [Gluster-devel] new spurious regressions
Pranith Kumar Karampuri wrote:
> The following tests keep failing spuriously nowadays. I CCed the
> glusterd folks and the original author (Kritika) and the last change
> author (Emmanuel).

I did not suspect it was my change. The strange bit is that the same
wrappers in other tests do not produce anything spurious. And of course
it is never spurious at mine.

Do we have a rackspace node where it is possible to investigate?

--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org