[Gluster-Maintainers] Jenkins build is back to normal : regression-test-with-multiplex #1185

2019-03-04 Thread jenkins
See 


___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-devel] GlusterFS - 6.0RC - Test days (27th, 28th Feb)

2019-03-04 Thread Shyam Ranganathan
On 3/4/19 10:08 AM, Atin Mukherjee wrote:
> 
> 
> On Mon, 4 Mar 2019 at 20:33, Amar Tumballi Suryanarayan wrote:
> 
> Thanks to those who participated.
> 
> Update at present:
> 
> We found 3 blocker bugs in upgrade scenarios, and hence have marked
> release
> as pending upon them. We will keep these lists updated about progress.
> 
> 
> I’d like to clarify that upgrade testing is blocked. So just fixing
> these test blocker(s) isn’t enough to call release-6 green. We need to
> continue and finish the rest of the upgrade tests once the respective
> bugs are fixed.

Based on the upgrade fixes expected by tomorrow, we will build an RC1
candidate on Wednesday (6-Mar), tagging early Wednesday Eastern time. This
RC can then be used for further testing.

> [...]

[Gluster-Maintainers] Build failed in Jenkins: regression-test-with-multiplex #1184

2019-03-04 Thread jenkins
See 


Changes:

[Vijay Bellur] mgmt/glusterd: Fix a memory leak when peer detach fails

[Amar Tumballi] quotad: fix passing GF_DATA_TYPE_STR_OLD dict data to v4 protocol

--
[...truncated 1.05 MB...]
./tests/bugs/gfapi/bug-1447266/1460514.t  -  7 second
./tests/bugs/fuse/bug-985074.t  -  7 second
./tests/bugs/ec/bug-1227869.t  -  7 second
./tests/bugs/ec/bug-1179050.t  -  7 second
./tests/bugs/distribute/bug-1122443.t  -  7 second
./tests/bugs/core/bug-949242.t  -  7 second
./tests/bugs/changelog/bug-1208470.t  -  7 second
./tests/bugs/bug-1258069.t  -  7 second
./tests/bugs/bitrot/1209818-vol-info-show-scrub-process-properly.t  -  7 second
./tests/bugs/bitrot/1209752-volume-status-should-show-bitrot-scrub-info.t  -  7 second
./tests/basic/xlator-pass-through-sanity.t  -  7 second
./tests/basic/volume-status.t  -  7 second
./tests/basic/pgfid-feat.t  -  7 second
./tests/basic/inode-quota-enforcing.t  -  7 second
./tests/basic/glusterd/arbiter-volume-probe.t  -  7 second
./tests/basic/distribute/file-create.t  -  7 second
./tests/basic/afr/gfid-heal.t  -  7 second
./tests/basic/afr/arbiter-statfs.t  -  7 second
./tests/basic/afr/arbiter-remove-brick.t  -  7 second
./tests/gfid2path/block-mount-access.t  -  6 second
./tests/bugs/upcall/bug-1458127.t  -  6 second
./tests/bugs/snapshot/bug-1064768.t  -  6 second
./tests/bugs/replicate/bug-767585-gfid.t  -  6 second
./tests/bugs/replicate/bug-1498570-client-iot-graph-check.t  -  6 second
./tests/bugs/quota/bug-1243798.t  -  6 second
./tests/bugs/posix/bug-990028.t  -  6 second
./tests/bugs/nfs/bug-1143880-fix-gNFSd-auth-crash.t  -  6 second
./tests/bugs/md-cache/setxattr-prepoststat.t  -  6 second
./tests/bugs/io-cache/bug-read-hang.t  -  6 second
./tests/bugs/io-cache/bug-858242.t  -  6 second
./tests/bugs/glusterfs-server/bug-904300.t  -  6 second
./tests/bugs/glusterfs/bug-861015-log.t  -  6 second
./tests/bugs/gfapi/bug-1630804/gfapi-bz1630804.t  -  6 second
./tests/bugs/fuse/bug-963678.t  -  6 second
./tests/bugs/distribute/bug-884597.t  -  6 second
./tests/bugs/distribute/bug-882278.t  -  6 second
./tests/bugs/distribute/bug-1088231.t  -  6 second
./tests/bugs/core/bug-908146.t  -  6 second
./tests/bugs/core/bug-834465.t  -  6 second
./tests/bugs/bug-1371806_2.t  -  6 second
./tests/bugs/bitrot/bug-1229134-bitd-not-support-vol-set.t  -  6 second
./tests/bugs/bitrot/1209751-bitrot-scrub-tunable-reset.t  -  6 second
./tests/bugs/bitrot/1207029-bitrot-daemon-should-start-on-valid-node.t  -  6 second
./tests/bitrot/br-stub.t  -  6 second
./tests/basic/playground/template-xlator-sanity.t  -  6 second
./tests/basic/gfapi/bug-1241104.t  -  6 second
./tests/basic/fencing/fence-basic.t  -  6 second
./tests/basic/ec/ec-read-policy.t  -  6 second
./tests/basic/ctime/ctime-noatime.t  -  6 second
./tests/basic/ctime/ctime-glfs-init.t  -  6 second
./tests/basic/afr/tarissue.t  -  6 second
./tests/basic/afr/gfid-mismatch.t  -  6 second
./tests/bugs/upcall/bug-1369430.t  -  5 second
./tests/bugs/snapshot/bug-1178079.t  -  5 second
./tests/bugs/shard/bug-1468483.t  -  5 second
./tests/bugs/shard/bug-1272986.t  -  5 second
./tests/bugs/shard/bug-1259651.t  -  5 second
./tests/bugs/shard/bug-1258334.t  -  5 second
./tests/bugs/replicate/bug-1365455.t  -  5 second
./tests/bugs/replicate/bug-1250170-fsync.t  -  5 second
./tests/bugs/replicate/bug-1101647.t  -  5 second
./tests/bugs/posix/bug-1034716.t  -  5 second
./tests/bugs/nfs/bug-877885.t  -  5 second
./tests/bugs/nfs/bug-1116503.t  -  5 second
./tests/bugs/md-cache/bug-1211863_unlink.t  -  5 second
./tests/bugs/md-cache/afr-stale-read.t  -  5 second
./tests/bugs/io-stats/bug-1598548.t  -  5 second
./tests/bugs/glusterfs-server/bug-873549.t  -  5 second
./tests/bugs/glusterd/quorum-value-check.t  -  5 second
./tests/bugs/core/bug-986429.t  -  5 second
./tests/bugs/core/bug-1168803-snapd-option-validation-fix.t  -  5 second
./tests/bugs/cli/bug-1022905.t  -  5 second
./tests/bugs/bitrot/bug-1210684-scrub-pause-resume-error-handling.t  -  5 second
./tests/bitrot/bug-1221914.t  -  5 second
./tests/basic/posix/zero-fill-enospace.t  -  5 second
./tests/basic/hardlink-limit.t  -  5 second
./tests/basic/glusterd/arbiter-volume.t  -  5 second
./tests/basic/gfapi/upcall-cache-invalidate.t  -  5 second
./tests/basic/gfapi/glfs_xreaddirplus_r.t  -  5 second
./tests/basic/gfapi/glfd-lkowner.t  -  5 second
./tests/basic/gfapi/gfapi-dup.t  -  5 second
./tests/basic/fencing/fencing-crash-conistency.t  -  5 second
./tests/basic/ec/nfs.t  -  5 second
./tests/basic/ec/ec-internal-xattrs.t  -  5 second
./tests/basic/ec/ec-fallocate.t  -  5 second
./tests/basic/ec/dht-rename.t  -  5 second
./tests/basic/distribute/throttle-rebal.t  -  5 second
./tests/basic/changelog/changelog-rename.t  -  5 second
./tests/basic/afr/heal-info.t  -  5 second
./tests/basic/afr/afr-read-hash-mode.t  -  5 second

Re: [Gluster-Maintainers] GlusterFS - 6.0RC - Test days (27th, 28th Feb)

2019-03-04 Thread Atin Mukherjee
On Mon, 4 Mar 2019 at 20:33, Amar Tumballi Suryanarayan wrote:

> Thanks to those who participated.
>
> Update at present:
>
> We found 3 blocker bugs in upgrade scenarios, and hence have marked release
> as pending upon them. We will keep these lists updated about progress.


I’d like to clarify that upgrade testing is blocked. So just fixing these
test blocker(s) isn’t enough to call release-6 green. We need to continue
and finish the rest of the upgrade tests once the respective bugs are fixed.


>
> -Amar
>
> On Mon, Feb 25, 2019 at 11:41 PM Amar Tumballi Suryanarayan <
> atumb...@redhat.com> wrote:
>
> > Hi all,
> >
> > We are calling out our users, and developers to contribute in validating
> > ‘glusterfs-6.0rc’ build in their usecase. Specially for the cases of
> > upgrade, stability, and performance.
> >
> > Some of the key highlights of the release are listed in release-notes
> > draft
> > <
> https://github.com/gluster/glusterfs/blob/release-6/doc/release-notes/6.0.md
> >.
> > Please note that there are some of the features which are being dropped
> out
> > of this release, and hence making sure your setup is not going to have an
> > issue is critical. Also the default lru-limit option in fuse mount for
> > Inodes should help to control the memory usage of client processes. All
> the
> > good reason to give it a shot in your test setup.
> >
> > If you are developer using gfapi interface to integrate with other
> > projects, you also have some signature changes, so please make sure your
> > project would work with latest release. Or even if you are using a
> project
> > which depends on gfapi, report the error with new RPMs (if any). We will
> > help fix it.
> >
> > As part of test days, we want to focus on testing the latest upcoming
> > release i.e. GlusterFS-6, and one or the other gluster volunteers would
> be
> > there on #gluster channel on freenode to assist the people. Some of the
> key
> > things we are looking as bug reports are:
> >
> >-
> >
> >See if upgrade from your current version to 6.0rc is smooth, and works
> >as documented.
> >- Report bugs in process, or in documentation if you find mismatch.
> >-
> >
> >Functionality is all as expected for your usecase.
> >- No issues with actual application you would run on production etc.
> >-
> >
> >Performance has not degraded in your usecase.
> >- While we have added some performance options to the code, not all of
> >   them are turned on, as they have to be done based on usecases.
> >   - Make sure the default setup is at least same as your current
> >   version
> >   - Try out few options mentioned in release notes (especially,
> >   --auto-invalidation=no) and see if it helps performance.
> >-
> >
> >While doing all the above, check below:
> >- see if the log files are making sense, and not flooding with some
> >   “for developer only” type of messages.
> >   - get ‘profile info’ output from old and now, and see if there is
> >   anything which is out of normal expectation. Check with us on the
> numbers.
> >   - get a ‘statedump’ when there are some issues. Try to make sense
> >   of it, and raise a bug if you don’t understand it completely.
> >
> >
> > <
> https://hackmd.io/YB60uRCMQRC90xhNt4r6gA?both#Process-expected-on-test-days
> >Process
> > expected on test days.
> >
> >-
> >
> >We have a tracker bug
> >[0]
> >- We will attach all the ‘blocker’ bugs to this bug.
> >-
> >
> >Use this link to report bugs, so that we have more metadata around
> >given bugzilla.
> >- Click Here
> >   <
> https://bugzilla.redhat.com/enter_bug.cgi?blocked=1672818_severity=high=core=high=GlusterFS_whiteboard=gluster-test-day=6
> >
> >   [1]
> >-
> >
> >The test cases which are to be tested are listed here in this sheet
> ><
> https://docs.google.com/spreadsheets/d/1AS-tDiJmAr9skK535MbLJGe_RfqDQ3j1abX1wtjwpL4/edit?usp=sharing
> >[2],
> >please add, update, and keep it up-to-date to reduce duplicate efforts

-- 
- Atin (atinm)


Re: [Gluster-Maintainers] Various upgrades are Broken

2019-03-04 Thread Sanju Rakonde
On Mon, Mar 4, 2019 at 6:54 PM Shyam Ranganathan wrote:

> [...]
> >
> >>
> >> Current 5.4 release (yet to be announced and released on the CentOS SIG
> >> (as testing is pending) *has* the fix. We need to revert it and rebuild
> >> 5.4, so that we can make the 5.4 release (without the fix).
> >>
> >> Hari/Sanju are you folks already on it?
> >
> > Yes, Sanju is working on the patch.
>

The fix is posted for review: https://review.gluster.org/#/c/glusterfs/+/22297/



-- 
Thanks,
Sanju


Re: [Gluster-Maintainers] GlusterFS - 6.0RC - Test days (27th, 28th Feb)

2019-03-04 Thread Amar Tumballi Suryanarayan
Thanks to those who participated.

Update at present:

We found 3 blocker bugs in upgrade scenarios, and hence have marked the
release as pending on them. We will keep these lists updated with progress.

-Amar

On Mon, Feb 25, 2019 at 11:41 PM Amar Tumballi Suryanarayan <
atumb...@redhat.com> wrote:

> Hi all,
>
> We are calling on our users and developers to help validate the
> ‘glusterfs-6.0rc’ build in their use cases, especially upgrade, stability,
> and performance.
>
> Some of the key highlights of the release are listed in the release-notes
> draft.
> Please note that some features are being dropped in this release, so
> verifying that your setup is not affected is critical. Also, the new
> default lru-limit option for the fuse mount should help control the inode
> memory usage of client processes. All the more reason to give it a shot in
> your test setup.
>
> If you are a developer using the gfapi interface to integrate with other
> projects, note that there are some signature changes, so please make sure
> your project works with the latest release. Even if you are only using a
> project that depends on gfapi, report any errors you see with the new
> RPMs. We will help fix them.
>
> As part of the test days, we want to focus on testing the latest upcoming
> release, i.e. GlusterFS-6. Gluster volunteers will be available on the
> #gluster channel on freenode to assist. Some of the key things we are
> looking for in bug reports are:
>
>    - See if the upgrade from your current version to 6.0rc is smooth, and
>      works as documented. Report bugs in the process, or in the
>      documentation if you find a mismatch.
>    - Functionality is all as expected for your use case, with no issues in
>      the actual applications you would run in production.
>    - Performance has not degraded in your use case. While we have added
>      some performance options to the code, not all of them are turned on,
>      as that has to be decided per use case.
>      - Make sure the default setup performs at least as well as your
>        current version.
>      - Try out a few options mentioned in the release notes (especially
>        --auto-invalidation=no) and see if they help performance.
>    - While doing all the above, check the following:
>      - See if the log files make sense and are not flooded with
>        “for developer only” messages.
>      - Get ‘profile info’ output from the old and new versions, and see if
>        anything is outside normal expectations. Check with us on the
>        numbers.
>      - Get a ‘statedump’ when there are issues. Try to make sense of it,
>        and raise a bug if you don’t understand it completely.
>
>
> Process expected on test days.
>
>    - We have a tracker bug [0]. We will attach all the ‘blocker’ bugs to
>      this bug.
>    - Use this link to report bugs, so that we have more metadata around a
>      given bugzilla: Click Here [1].
>    - The test cases to be tested are listed in this sheet [2]; please add,
>      update, and keep it up-to-date to reduce duplicate effort.
>
> Let's make this release a success together.
>
> Also check whether we covered some of the open issues from the weekly
> untriaged bugs [3].
>
> For details on the build and RPMs, check this email [4].
>
> Finally, the dates :-)
>
>- Wednesday - Feb 27th, and
>- Thursday - Feb 28th
>
> Note that our goal is to identify as many issues as possible in upgrade
> and stability scenarios, and if any blockers are found, to make sure we
> release with fixes for them, so that each of you, Gluster users, can feel
> comfortable upgrading to 6.0.
>
> Regards,
> Gluster Ants.
>
> --
> Amar Tumballi (amarts)
>


-- 
Amar Tumballi (amarts)
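[Editor's note: for testers working through the checklist above, the options
and commands it mentions (lru-limit, --auto-invalidation=no, ‘profile info’,
‘statedump’) might be combined roughly as below. This is an untested sketch
that only prints a command cheat-sheet; the host name, volume name, and
lru-limit value are placeholder assumptions, not taken from the mails.]

```shell
# Sketch only (needs a live Gluster install to actually run the commands):
# 'server1', 'testvol', and the lru-limit value are placeholders.
checklist_commands() {
  cat <<'EOF'
# Mount with the new fuse inode lru-limit, auto-invalidation off:
glusterfs --volfile-server=server1 --volfile-id=testvol \
          --lru-limit=65536 --auto-invalidation=no /mnt/testvol

# Capture 'profile info' output for old-vs-new comparison:
gluster volume profile testvol start
gluster volume profile testvol info > profile-new.txt

# Grab a 'statedump' when something looks wrong:
gluster volume statedump testvol
EOF
}
checklist_commands
```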


Re: [Gluster-Maintainers] Various upgrades are Broken

2019-03-04 Thread Shyam Ranganathan
On 3/4/19 8:09 AM, Hari Gowtham wrote:
> On Mon, Mar 4, 2019 at 6:18 PM Shyam Ranganathan wrote:
>>
>> On 3/4/19 7:29 AM, Amar Tumballi Suryanarayan wrote:
>>> Thanks for testing this Hari.
>>>
>>> On Mon, Mar 4, 2019 at 5:42 PM Hari Gowtham wrote:
>>>
>>> Hi,
>>>
>>> With the patch https://review.gluster.org/#/c/glusterfs/+/21838/ the
>>> upgrade from 3.12 to 6, 4.1 to 6 and 5 to 6 is broken.
>>>
>>> The above patch is available in release 6 and has been back-ported
>>> to 4.1 and 5.
>>> Though there isn't any release made with this patch on 4.1 and 5, if
>>> made there are a number of scenarios that will fail. Few are mentioned
>>> below:
>>>
>>>
>>> Considering there is no release with this patch in, lets not consider
>>> backporting at all.
> 
> It has been back-ported to 4 and 5 already.
> Regarding 5 we have decided to revert and make the release.
> Are we going to revert the patch for 4 or wait for the fix?

The next release-4.1 minor release is slated for the week of 20th March,
2019. Hence, we have time to get the fix in place, but I would revert the
patch anyway, so that tracking does not depend on the possibly late arrival
of the fix.

> 
>>
>> Current 5.4 release (yet to be announced and released on the CentOS SIG
>> (as testing is pending) *has* the fix. We need to revert it and rebuild
>> 5.4, so that we can make the 5.4 release (without the fix).
>>
>> Hari/Sanju are you folks already on it?
> 
> Yes, Sanju is working on the patch.

Thank you!



Re: [Gluster-Maintainers] Various upgrades are Broken

2019-03-04 Thread Hari Gowtham
On Mon, Mar 4, 2019 at 6:18 PM Shyam Ranganathan wrote:
>
> On 3/4/19 7:29 AM, Amar Tumballi Suryanarayan wrote:
> > Thanks for testing this Hari.
> >
> > On Mon, Mar 4, 2019 at 5:42 PM Hari Gowtham wrote:
> >
> > Hi,
> >
> > With the patch https://review.gluster.org/#/c/glusterfs/+/21838/ the
> > upgrade from 3.12 to 6, 4.1 to 6 and 5 to 6 is broken.
> >
> > The above patch is available in release 6 and has been back-ported
> > to 4.1 and 5.
> > Though there isn't any release made with this patch on 4.1 and 5, if
> > made there are a number of scenarios that will fail. Few are mentioned
> > below:
> >
> >
> > Considering there is no release with this patch in, lets not consider
> > backporting at all.

It has been back-ported to 4 and 5 already.
Regarding 5, we have decided to revert and make the release.
Are we going to revert the patch for 4, or wait for the fix?

>
> Current 5.4 release (yet to be announced and released on the CentOS SIG
> (as testing is pending) *has* the fix. We need to revert it and rebuild
> 5.4, so that we can make the 5.4 release (without the fix).
>
> Hari/Sanju are you folks already on it?

Yes, Sanju is working on the patch.

>
> Shyam


-- 
Regards,
Hari Gowtham.


Re: [Gluster-Maintainers] Various upgrades are Broken

2019-03-04 Thread Shyam Ranganathan
On 3/4/19 7:29 AM, Amar Tumballi Suryanarayan wrote:
> Thanks for testing this Hari.
> 
> On Mon, Mar 4, 2019 at 5:42 PM Hari Gowtham wrote:
> 
> Hi,
> 
> With the patch https://review.gluster.org/#/c/glusterfs/+/21838/ the
> upgrade from 3.12 to 6, 4.1 to 6 and 5 to 6 is broken.
> 
> The above patch is available in release 6 and has been back-ported
> to 4.1 and 5.
> Though there isn't any release made with this patch on 4.1 and 5, if
> made there are a number of scenarios that will fail. Few are mentioned
> below:
> 
> 
> Considering there is no release with this patch in, lets not consider
> backporting at all. 

The current 5.4 release (yet to be announced and released on the CentOS
SIG, as testing is pending) *has* the fix. We need to revert it and rebuild
5.4, so that we can make the 5.4 release (without the fix).

Hari/Sanju, are you folks already on it?

Shyam


Re: [Gluster-Maintainers] Various upgrades are Broken

2019-03-04 Thread Amar Tumballi Suryanarayan
Thanks for testing this Hari.

On Mon, Mar 4, 2019 at 5:42 PM Hari Gowtham wrote:

> Hi,
>
> With the patch https://review.gluster.org/#/c/glusterfs/+/21838/ the
> upgrade from 3.12 to 6, 4.1 to 6 and 5 to 6 is broken.
>
> The above patch is available in release 6 and has been back-ported to 4.1
> and 5.
> Though there isn't any release made with this patch on 4.1 and 5, if
> made there are a number of scenarios that will fail. Few are mentioned
> below:
>

Considering there is no release with this patch in it, let's not consider
backporting at all.


> 3.12 to 4.1 with patch
> 3.12 to 5 with patch
> 4.1 to 4.1 with patch
> 4.1 to any higher versions with patch.
> 5 to 5 or higher version with patch.
>
> The fix is being worked on. Until then, its a request to stop making
> releases to avoid more complication.
>
>
Also, we can revert this patch in release-6 right away, as the fix is
supposed to help AFR configurations with gNFS. Ravi, you know more of the
history behind this patch; is there anything more we should be considering?


>
> --
> Regards,
> Hari Gowtham.
>


-- 
Amar Tumballi (amarts)


[Gluster-Maintainers] Various upgrades are Broken

2019-03-04 Thread Hari Gowtham
Hi,

With the patch https://review.gluster.org/#/c/glusterfs/+/21838/ the
upgrades from 3.12 to 6, 4.1 to 6, and 5 to 6 are broken.

The above patch is available in release 6 and has been back-ported to 4.1
and 5. Though no release has been made with this patch on 4.1 and 5, if one
is made, a number of scenarios will fail. A few are mentioned below:
3.12 to 4.1 with patch
3.12 to 5 with patch
4.1 to 4.1 with patch
4.1 to any higher versions with patch.
5 to 5 or higher version with patch.

The fix is being worked on. Until then, we request that no releases be
made, to avoid further complications.
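[Editor's note: as an illustration only, the failing scenarios listed above
share one pattern: the *target* release carries the backported patch. The
sketch below is not project code; the rule and release set are inferred
purely from the scenarios in this mail.]

```shell
# Illustrative sketch of the failure matrix above, assuming an upgrade
# breaks whenever the target release carries the backported patch
# (4.1, 5, and 6 have it; 3.12 does not).
upgrade_broken() {  # usage: upgrade_broken SOURCE TARGET
  case "$2" in
    4.1|5|6) return 0 ;;  # target has the patch -> reported broken
    *)       return 1 ;;  # unpatched target -> not in the broken list
  esac
}

# Every scenario called out in this mail involves a patched target:
for pair in "3.12 6" "4.1 6" "5 6" "3.12 4.1" "3.12 5" "4.1 4.1" "5 5"; do
  set -- $pair
  upgrade_broken "$1" "$2" && echo "$1 -> $2: broken with patch"
done
```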


-- 
Regards,
Hari Gowtham.