Re: [Gluster-devel] Tests failing on Centos 7
On Tue, Nov 28, 2017 at 11:35 AM, Ravishankar N wrote:

> On 11/27/2017 07:12 PM, Nigel Babu wrote:
>
>> Hello folks,
>>
>> I have an update on chunking. There's good news and bad. The first bit is that we have a chunked regression job now. It splits the tests out into 10 chunks that are run in parallel. This chunking is quite simple at the moment and doesn't try to be very smart. The intelligence steps will come in once we're ready to go live.
>>
>> In the meantime, we've run into a few roadblocks. The following tests do not work on CentOS 7:
>>
>> ./tests/bugs/cli/bug-1169302.t
>> ./tests/bugs/posix/bug-990028.t
>> ./tests/bugs/glusterd/bug-1260185-donot-allow-detach-commit-unnecessarily.t
>> ./tests/bugs/core/multiplex-limit-issue-151.t
>> ./tests/basic/afr/split-brain-favorite-child-policy.t
>
> Raised 1518062 for the CentOS 7 machine.
> -Ravi
>
>> ./tests/bugs/core/bug-1432542-mpx-restart-crash.t

https://review.gluster.org/#/c/18873/ should fix the above.

>> Can the maintainers for these components please take a look at these tests and fix them to run on CentOS 7? When we land chunked regressions, we'll switch our entire build farm over to CentOS 7. If you want a test machine to reproduce the failure and debug, please file a bug requesting one with your SSH public key attached.
>>
>> --
>> nigelb

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel
Re: [Gluster-devel] Tests failing on Centos 7
On 11/27/2017 07:12 PM, Nigel Babu wrote:

> Hello folks,
>
> I have an update on chunking. There's good news and bad. The first bit is that we have a chunked regression job now. It splits the tests out into 10 chunks that are run in parallel. This chunking is quite simple at the moment and doesn't try to be very smart. The intelligence steps will come in once we're ready to go live.
>
> In the meantime, we've run into a few roadblocks. The following tests do not work on CentOS 7:
>
> ./tests/bugs/cli/bug-1169302.t
> ./tests/bugs/posix/bug-990028.t
> ./tests/bugs/glusterd/bug-1260185-donot-allow-detach-commit-unnecessarily.t
> ./tests/bugs/core/multiplex-limit-issue-151.t
> ./tests/basic/afr/split-brain-favorite-child-policy.t

Raised 1518062 for the CentOS 7 machine.
-Ravi

> ./tests/bugs/core/bug-1432542-mpx-restart-crash.t
>
> Can the maintainers for these components please take a look at these tests and fix them to run on CentOS 7? When we land chunked regressions, we'll switch our entire build farm over to CentOS 7. If you want a test machine to reproduce the failure and debug, please file a bug requesting one with your SSH public key attached.
>
> --
> nigelb
Re: [Gluster-devel] Need help figuring out the reason for test failure
Also refer to:
http://lists.gluster.org/pipermail/gluster-devel/2017-November/054009.html
http://lists.gluster.org/pipermail/gluster-devel/2017-November/054014.html

It would be great if you get to debug why the test case is failing, but the patch https://review.gluster.org/18870 is trying to mark it as a known issue for the moment (and whether that is a good idea is under discussion).

-Amar

On Tue, Nov 28, 2017 at 9:17 AM, Pranith Kumar Karampuri <pkara...@redhat.com> wrote:

> On Tue, Nov 28, 2017 at 9:07 AM, Nigel Babu wrote:
>
>> Pranith,
>>
>> Our logging has changed slightly. Please read my email titled "Changes in handling logs from (centos) regressions and smoke" to gluster-devel and gluster-infra.
>
> Thanks for the pointer. Able to get the logs now!
>
>> On Tue, Nov 28, 2017 at 8:06 AM, Pranith Kumar Karampuri <pkara...@redhat.com> wrote:
>>
>>> One of my patches (https://review.gluster.org/18857) is consistently leading to a failure for the test:
>>>
>>> tests/bugs/core/bug-1432542-mpx-restart-crash.t
>>>
>>> https://build.gluster.org/job/centos6-regression/7676/consoleFull
>>>
>>> Jeff/Atin,
>>> Do you know anything about these kinds of failures for this test?
>>>
>>> Nigel,
>>> Unfortunately I am not able to look at the logs because the logs location is not given properly (at least for me :-) )
>>>
>>> 11:41:14 filename="${ARCHIVED_LOGS}/glusterfs-logs-${UNIQUE_ID}.tgz"
>>> 11:41:14 sudo -E tar -czf "${ARCHIVE_BASE}/$(unknown)" /var/log/glusterfs /var/log/messages;
>>> 11:41:14 echo "Logs archived in http://${SERVER}/$(unknown)"
>>>
>>> Could you help me find what the location could be?
>>> --
>>> Pranith
>>
>> --
>> nigelb
>
> --
> Pranith

--
Amar Tumballi (amarts)
Re: [Gluster-devel] [Gluster-Maintainers] Release 4.0: Schedule and scope clarity (responses needed)
On Tue, Nov 28, 2017 at 12:57 AM, Vijay Bellur wrote:

> Top posting here.
>
> I would like to propose that all new features for 4.x have tests in glusto, and that one successful run of those tests be a necessary condition for the release to happen. This effort would help us catch regressions that happen in multi-node setups and also prevent accrual of further technical debt with respect to testing.
>
> Thoughts?

I understand the intent here, and it would be very good for us as a project if we achieve it. I have two concerns about making it a necessary condition for the release (or thereabouts):

* Are we (as developers) aware of the glusto framework and how to write tests in it at this time? If not, this activity itself may take some time, and we are sure to miss the deadline.

* Is the glusto framework ready to receive contributions? I may be naive in asking, but how easy is it to add a new feature to it? Especially something like GD2 (which is a core part of Gluster 4.0), or a feature like reflink, which touches most components but is technically just 1 of the 50 fops we have?

Also, we did ask for a 'line coverage' report with your new component, along with the feature, for the feature to be called 'supported', and I don't think anyone signed off on that.

Here is what I am thinking:

* Add as many features as possible to 4.0, marking every one of them as 'experimental' to start with.
* Move them out of the 'experimental' bucket once the test cases are added.
* Aim to get most of these features out of 'experimental' by adding more test cases before the release of 4.1/4.2 (whichever is going to be our LTM release).

This way, we get to keep the 4.0 release promise (both on features and timeline), and by our LTM, we can say which of it is 'recommended' and which is not.

My 2 cents.

> Thanks,
> Vijay
>
> On Mon, Nov 20, 2017 at 1:04 PM, Shyam Ranganathan wrote:
>
>> Hi,
>>
>> As this is a longish mail, there are a few asks below that I request folks to focus on and answer.
>>
>> 4.0 is an STM release (Short Term Maintenance); further, 4.1 is also slated as an STM release (although the web pages read differently and will be corrected shortly). Finally, 4.2 would be the first LTM (Long Term ...) in the 4.x release line for Gluster.
>>
>> * Schedule *
>> The above also considers that 4.0 will release 2 months from 3.13, which puts 4.0 branching (also read as feature freeze deadline) around mid-December (4 weeks from now).
>>
>> The 4.0/4.1/4.2 release calendar hence looks as follows:
>>
>> - Release 4.0: (STM)
>>   - Feature freeze/branching: mid-December
>>   - Release date: Jan 31st, 2018
>> - Release 4.1: (STM)
>>   - Feature freeze/branching: mid-March
>>   - Release date: Apr 30th, 2018
>> - Release 4.2: (LTM, release 3.10 EOL'd)
>>   - Feature freeze/branching: mid-June
>>   - Release date: Jul 31st, 2018
>>
>> * Scope *
>>
>> The main focus in 4.0 is landing GlusterD2, and all efforts towards this take priority.
>>
>> Further big features in 4.0 are around GFProxy, protocol layer changes, monitoring and usability changes, FUSE catchup, and +1 scaling.
>>
>> Also, some code cleanup/debt areas are in focus.
>>
>> Now, glusterfs/github [1] reads ~50 issues as being targeted at 4.0, and among these about 2-4 are marked closed (or done).
>>
>> Ask 1: Request each of you to go through the issue list and coordinate with a maintainer, to either mark an issue's milestone correctly (i.e. retain it in 4.0 or move it out) and also leave a comment on the issue about its readiness.
>>
>> Ask 2: If there are issues that you are working on that are not marked against the 4.0 milestone, please do the needful for the same.
>>
>> Ask 3: Please mail the devel list on features that are making it to 4.0, so that the project board can be rightly populated with the issues.
>>
>> Ask 4: If the 4.0 branching date were extended by another 4 weeks, would that enable you to finish additional features that are already marked for 4.0? This helps us move the needle on branching to help land the right set of features.
>>
>> Thanks,
>> Shyam
>> ___
>> maintainers mailing list
>> maintain...@gluster.org
>> http://lists.gluster.org/mailman/listinfo/maintainers

--
Amar Tumballi (amarts)
Re: [Gluster-devel] Need help figuring out the reason for test failure
On Tue, Nov 28, 2017 at 9:07 AM, Nigel Babu wrote:

> Pranith,
>
> Our logging has changed slightly. Please read my email titled "Changes in handling logs from (centos) regressions and smoke" to gluster-devel and gluster-infra.

Thanks for the pointer. Able to get the logs now!

> On Tue, Nov 28, 2017 at 8:06 AM, Pranith Kumar Karampuri <pkara...@redhat.com> wrote:
>
>> One of my patches (https://review.gluster.org/18857) is consistently leading to a failure for the test:
>>
>> tests/bugs/core/bug-1432542-mpx-restart-crash.t
>>
>> https://build.gluster.org/job/centos6-regression/7676/consoleFull
>>
>> Jeff/Atin,
>> Do you know anything about these kinds of failures for this test?
>>
>> Nigel,
>> Unfortunately I am not able to look at the logs because the logs location is not given properly (at least for me :-) )
>>
>> 11:41:14 filename="${ARCHIVED_LOGS}/glusterfs-logs-${UNIQUE_ID}.tgz"
>> 11:41:14 sudo -E tar -czf "${ARCHIVE_BASE}/$(unknown)" /var/log/glusterfs /var/log/messages;
>> 11:41:14 echo "Logs archived in http://${SERVER}/$(unknown)"
>>
>> Could you help me find what the location could be?
>> --
>> Pranith
>
> --
> nigelb

--
Pranith
Re: [Gluster-devel] Need help figuring out the reason for test failure
Pranith,

Our logging has changed slightly. Please read my email titled "Changes in handling logs from (centos) regressions and smoke" to gluster-devel and gluster-infra.

On Tue, Nov 28, 2017 at 8:06 AM, Pranith Kumar Karampuri <pkara...@redhat.com> wrote:

> One of my patches (https://review.gluster.org/18857) is consistently leading to a failure for the test:
>
> tests/bugs/core/bug-1432542-mpx-restart-crash.t
>
> https://build.gluster.org/job/centos6-regression/7676/consoleFull
>
> Jeff/Atin,
> Do you know anything about these kinds of failures for this test?
>
> Nigel,
> Unfortunately I am not able to look at the logs because the logs location is not given properly (at least for me :-) )
>
> 11:41:14 filename="${ARCHIVED_LOGS}/glusterfs-logs-${UNIQUE_ID}.tgz"
> 11:41:14 sudo -E tar -czf "${ARCHIVE_BASE}/$(unknown)" /var/log/glusterfs /var/log/messages;
> 11:41:14 echo "Logs archived in http://${SERVER}/$(unknown)"
>
> Could you help me find what the location could be?
> --
> Pranith

--
nigelb
[Gluster-devel] Need help figuring out the reason for test failure
One of my patches (https://review.gluster.org/18857) is consistently leading to a failure for the test:

tests/bugs/core/bug-1432542-mpx-restart-crash.t

https://build.gluster.org/job/centos6-regression/7676/consoleFull

Jeff/Atin,
Do you know anything about these kinds of failures for this test?

Nigel,
Unfortunately I am not able to look at the logs because the logs location is not given properly (at least for me :-) )

11:41:14 filename="${ARCHIVED_LOGS}/glusterfs-logs-${UNIQUE_ID}.tgz"
11:41:14 sudo -E tar -czf "${ARCHIVE_BASE}/$(unknown)" /var/log/glusterfs /var/log/messages;
11:41:14 echo "Logs archived in http://${SERVER}/$(unknown)"

Could you help me find what the location could be?
--
Pranith
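For reference, the archive URL in the console snippet above is assembled from job variables; `$(unknown)` in the pasted output stays as-is above, but it plausibly corresponds to the `${filename}` expansion. A minimal sketch of the URL construction under that assumption, where ARCHIVED_LOGS, UNIQUE_ID, and SERVER are placeholder values (not the job's real settings):

```shell
# Sketch only: how the "Logs archived in ..." URL appears to be built.
# Every concrete value below is an assumed placeholder, not real job config.
ARCHIVED_LOGS="archived-builds"   # assumed directory prefix on the log server
UNIQUE_ID="7676"                  # assumed: the Jenkins build number
SERVER="build.gluster.org"        # assumed log server hostname

filename="${ARCHIVED_LOGS}/glusterfs-logs-${UNIQUE_ID}.tgz"
url="http://${SERVER}/${filename}"
echo "Logs archived in ${url}"
```

With these placeholder values the constructed URL would read `http://build.gluster.org/archived-builds/glusterfs-logs-7676.tgz`.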
Re: [Gluster-devel] [Gluster-Maintainers] Release 4.0: Schedule and scope clarity (responses needed)
Top posting here.

I would like to propose that all new features for 4.x have tests in glusto, and that one successful run of those tests be a necessary condition for the release to happen. This effort would help us catch regressions that happen in multi-node setups and also prevent accrual of further technical debt with respect to testing.

Thoughts?

Thanks,
Vijay

On Mon, Nov 20, 2017 at 1:04 PM, Shyam Ranganathan wrote:

> Hi,
>
> As this is a longish mail, there are a few asks below that I request folks to focus on and answer.
>
> 4.0 is an STM release (Short Term Maintenance); further, 4.1 is also slated as an STM release (although the web pages read differently and will be corrected shortly). Finally, 4.2 would be the first LTM (Long Term ...) in the 4.x release line for Gluster.
>
> * Schedule *
> The above also considers that 4.0 will release 2 months from 3.13, which puts 4.0 branching (also read as feature freeze deadline) around mid-December (4 weeks from now).
>
> The 4.0/4.1/4.2 release calendar hence looks as follows:
>
> - Release 4.0: (STM)
>   - Feature freeze/branching: mid-December
>   - Release date: Jan 31st, 2018
> - Release 4.1: (STM)
>   - Feature freeze/branching: mid-March
>   - Release date: Apr 30th, 2018
> - Release 4.2: (LTM, release 3.10 EOL'd)
>   - Feature freeze/branching: mid-June
>   - Release date: Jul 31st, 2018
>
> * Scope *
>
> The main focus in 4.0 is landing GlusterD2, and all efforts towards this take priority.
>
> Further big features in 4.0 are around GFProxy, protocol layer changes, monitoring and usability changes, FUSE catchup, and +1 scaling.
>
> Also, some code cleanup/debt areas are in focus.
>
> Now, glusterfs/github [1] reads ~50 issues as being targeted at 4.0, and among these about 2-4 are marked closed (or done).
>
> Ask 1: Request each of you to go through the issue list and coordinate with a maintainer, to either mark an issue's milestone correctly (i.e. retain it in 4.0 or move it out) and also leave a comment on the issue about its readiness.
>
> Ask 2: If there are issues that you are working on that are not marked against the 4.0 milestone, please do the needful for the same.
>
> Ask 3: Please mail the devel list on features that are making it to 4.0, so that the project board can be rightly populated with the issues.
>
> Ask 4: If the 4.0 branching date were extended by another 4 weeks, would that enable you to finish additional features that are already marked for 4.0? This helps us move the needle on branching to help land the right set of features.
>
> Thanks,
> Shyam
Re: [Gluster-devel] Tests failing on Centos 7
>> In the meantime, we've run into a few roadblocks. The following tests do not work on CentOS 7:
>>
>> ./tests/bugs/cli/bug-1169302.t
>> ./tests/bugs/posix/bug-990028.t
>> ./tests/bugs/glusterd/bug-1260185-donot-allow-detach-commit-unnecessarily.t
>> ./tests/bugs/core/multiplex-limit-issue-151.t

> Patch posted at https://review.gluster.org/#/c/18869/

Thanks Nithya!

>> ./tests/basic/afr/split-brain-favorite-child-policy.t
>> ./tests/bugs/core/bug-1432542-mpx-restart-crash.t

I am happy to volunteer here (to file a bug and send a patch to mark them as known issues).

BUG: https://bugzilla.redhat.com/show_bug.cgi?id=1517961
PATCH: https://review.gluster.org/18870

Regards,
Amar

>> Can the maintainers for these components please take a look at these tests and fix them to run on CentOS 7? When we land chunked regressions, we'll switch our entire build farm over to CentOS 7. If you want a test machine to reproduce the failure and debug, please file a bug requesting one with your SSH public key attached.
>>
>> --
Re: [Gluster-devel] Tests failing on Centos 7
On 27-Nov-2017 8:44 AM, "Nigel Babu" wrote:

> Hello folks,
>
> I have an update on chunking. There's good news and bad. The first bit is that we have a chunked regression job now. It splits the tests out into 10 chunks that are run in parallel. This chunking is quite simple at the moment and doesn't try to be very smart. The intelligence steps will come in once we're ready to go live.
>
> In the meantime, we've run into a few roadblocks. The following tests do not work on CentOS 7:
>
> ./tests/bugs/cli/bug-1169302.t
> ./tests/bugs/posix/bug-990028.t
> ./tests/bugs/glusterd/bug-1260185-donot-allow-detach-commit-unnecessarily.t
> ./tests/bugs/core/multiplex-limit-issue-151.t

Patch posted at https://review.gluster.org/#/c/18869/

regards,
Nithya

> ./tests/basic/afr/split-brain-favorite-child-policy.t
> ./tests/bugs/core/bug-1432542-mpx-restart-crash.t
>
> Can the maintainers for these components please take a look at these tests and fix them to run on CentOS 7? When we land chunked regressions, we'll switch our entire build farm over to CentOS 7. If you want a test machine to reproduce the failure and debug, please file a bug requesting one with your SSH public key attached.
>
> --
> nigelb
Re: [Gluster-devel] Tests failing on Centos 7
Thanks for bringing this out, Nigel. I remember Atin asking for help here last week too.

In the meantime, can we file a bug for each of these tests and continue by marking them as known issues? I am happy to volunteer here (to file the bugs and send a patch to mark them as known issues). While we work on them in the background, this should unblock other critical patches for the 4.0 release.

Regards,
Amar

On 27-Nov-2017 8:44 AM, "Nigel Babu" wrote:

> Hello folks,
>
> I have an update on chunking. There's good news and bad. The first bit is that we have a chunked regression job now. It splits the tests out into 10 chunks that are run in parallel. This chunking is quite simple at the moment and doesn't try to be very smart. The intelligence steps will come in once we're ready to go live.
>
> In the meantime, we've run into a few roadblocks. The following tests do not work on CentOS 7:
>
> ./tests/bugs/cli/bug-1169302.t
> ./tests/bugs/posix/bug-990028.t
> ./tests/bugs/glusterd/bug-1260185-donot-allow-detach-commit-unnecessarily.t
> ./tests/bugs/core/multiplex-limit-issue-151.t
> ./tests/basic/afr/split-brain-favorite-child-policy.t
> ./tests/bugs/core/bug-1432542-mpx-restart-crash.t
>
> Can the maintainers for these components please take a look at these tests and fix them to run on CentOS 7? When we land chunked regressions, we'll switch our entire build farm over to CentOS 7. If you want a test machine to reproduce the failure and debug, please file a bug requesting one with your SSH public key attached.
>
> --
> nigelb
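Marking a test as a known issue usually means the regression runner consults a skip list before executing it. Purely as an illustration (the `KNOWN_BAD` list and `is_known_bad` function are hypothetical names; the actual mechanism in Gluster's run-tests.sh may differ), such a check could look like:

```shell
# Hypothetical sketch of a known-issue skip list for a test runner.
# KNOWN_BAD and is_known_bad are illustrative, not Gluster's actual code.
KNOWN_BAD="./tests/bugs/cli/bug-1169302.t ./tests/bugs/posix/bug-990028.t"

is_known_bad() {
    # Succeed (return 0) if $1 appears in the space-separated KNOWN_BAD list.
    case " ${KNOWN_BAD} " in
        *" $1 "*) return 0 ;;
        *)        return 1 ;;
    esac
}

if is_known_bad "./tests/bugs/cli/bug-1169302.t"; then
    echo "SKIP: known issue"
fi
```

The point of the approach is exactly what the mail describes: failing tests stop blocking unrelated patches while the real fixes happen in the background.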
Re: [Gluster-devel] [Gluster-Maintainers] Release 3.13: Release notes (Please read and contribute)
Sorry, missed this earlier.

On 20-Nov-2017 3:12 PM, "Shyam Ranganathan" wrote:

> Hi,
>
> 3.13 RC0 is around the corner (possibly tomorrow). Towards this and the final 3.13.0 release, I was compiling the features that are part of 3.13 and also attempted to write out the release notes for the same [1].
>
> Some features have data and others do not (either in the commit message or in the github issue), and it is increasingly difficult to write the release notes by myself.
>
> So here is calling out the folks who have committed the following features, to provide release notes as a patch to [1] to aid closing this activity out.
>
> Please refer to older release notes for what data goes into the respective sections [2]. Also, please provide CLI examples and/or command outputs where required.
>
> 1) Addition of summary option to the heal info CLI (@karthik-us)
> 2) Support for max-port range in glusterd.vol (@atin)
> 3) Prevention of other processes accessing the mounted brick snapshots (@sunnykumar)
> 4) Ability to reserve backend storage space (@amarts)

@mohit, @nithya, can you guys help here?

> 5) List all the connected clients for a brick and also exported bricks/snapshots from each brick process (@harigowtham)
> 6) Improved write performance with the Disperse xlator, by introducing parallel writes to a file (@pranith/@xavi)
> 7) Disperse xlator now supports discard operations (@sunil)
> 8) Included details about memory pools in statedumps (@nixpanic)
> 9) Gluster APIs added to register callback functions for upcalls (@soumya)
> 10) Gluster API added with a glfs_mem_header for exported memory (@nixpanic)
> 11) Provided a new xlator to delay fops, to aid slow-brick response simulation and debugging (@pranith)
>
> Thanks,
> Shyam
>
> [1] gerrit link to release notes: https://review.gluster.org/#/c/18815/
> [2] Release 3.12.0 notes for reference: https://github.com/gluster/glusterfs/blob/release-3.12/doc/release-notes/3.12.0.md
Re: [Gluster-devel] [Gluster-Maintainers] Release 3.13: Release notes (Please read and contribute)
On Tue, Nov 21, 2017 at 1:41 AM, Shyam Ranganathan wrote:

> Hi,
>
> 3.13 RC0 is around the corner (possibly tomorrow). Towards this and the final 3.13.0 release, I was compiling the features that are part of 3.13 and also attempted to write out the release notes for the same [1].
>
> Some features have data and others do not (either in the commit message or in the github issue), and it is increasingly difficult to write the release notes by myself.
>
> So here is calling out the folks who have committed the following features, to provide release notes as a patch to [1] to aid closing this activity out.
>
> Please refer to older release notes for what data goes into the respective sections [2]. Also, please provide CLI examples and/or command outputs where required.
>
> 1) Addition of summary option to the heal info CLI (@karthik-us)
> 2) Support for max-port range in glusterd.vol (@atin)

https://review.gluster.org/18867 posted.

> 3) Prevention of other processes accessing the mounted brick snapshots (@sunnykumar)
> 4) Ability to reserve backend storage space (@amarts)
> 5) List all the connected clients for a brick and also exported bricks/snapshots from each brick process (@harigowtham)
> 6) Improved write performance with the Disperse xlator, by introducing parallel writes to a file (@pranith/@xavi)
> 7) Disperse xlator now supports discard operations (@sunil)
> 8) Included details about memory pools in statedumps (@nixpanic)
> 9) Gluster APIs added to register callback functions for upcalls (@soumya)
> 10) Gluster API added with a glfs_mem_header for exported memory (@nixpanic)
> 11) Provided a new xlator to delay fops, to aid slow-brick response simulation and debugging (@pranith)
>
> Thanks,
> Shyam
>
> [1] gerrit link to release notes: https://review.gluster.org/#/c/18815/
> [2] Release 3.12.0 notes for reference: https://github.com/gluster/glusterfs/blob/release-3.12/doc/release-notes/3.12.0.md
[Gluster-devel] Tests failing on Centos 7
Hello folks,

I have an update on chunking. There's good news and bad. The first bit is that we have a chunked regression job now. It splits the tests out into 10 chunks that are run in parallel. This chunking is quite simple at the moment and doesn't try to be very smart. The intelligence steps will come in once we're ready to go live.

In the meantime, we've run into a few roadblocks. The following tests do not work on CentOS 7:

./tests/bugs/cli/bug-1169302.t
./tests/bugs/posix/bug-990028.t
./tests/bugs/glusterd/bug-1260185-donot-allow-detach-commit-unnecessarily.t
./tests/bugs/core/multiplex-limit-issue-151.t
./tests/basic/afr/split-brain-favorite-child-policy.t
./tests/bugs/core/bug-1432542-mpx-restart-crash.t

Can the maintainers for these components please take a look at these tests and fix them to run on CentOS 7? When we land chunked regressions, we'll switch our entire build farm over to CentOS 7. If you want a test machine to reproduce the failure and debug, please file a bug requesting one with your SSH public key attached.

--
nigelb
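To make the "quite simple" chunking concrete: a scheme that doesn't try to be smart amounts to dealing the .t files round-robin into N buckets, with no weighting by historical runtime. A hypothetical sketch (the `chunk_tests` function and test names are illustrative, not the actual job script):

```shell
# Illustrative only: round-robin chunking of test names into N buckets.
# chunk_tests reads one test path per line on stdin and prints
# "<chunk-number><TAB><test-path>" for each; each chunk then runs in
# parallel on its own builder.
chunk_tests() {
    num_chunks=$1
    i=0
    while IFS= read -r t; do
        printf '%d\t%s\n' "$((i % num_chunks))" "$t"
        i=$((i + 1))
    done
}

# Example: three tests dealt into two chunks.
printf '%s\n' a.t b.t c.t | chunk_tests 2
```

The "intelligence steps" mentioned above would presumably replace this round-robin with something runtime-aware, e.g. bin-packing tests by past duration so chunks finish at roughly the same time.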
[Gluster-devel] Coverity covscan for 2017-11-27-f7861e2a (master branch)
GlusterFS Coverity covscan results are available from http://download.gluster.org/pub/gluster/glusterfs/static-analysis/master/glusterfs-coverity/2017-11-27-f7861e2a