Re: [Gluster-Maintainers] [Gluster-devel] Release 11: Revisiting our proposed timeline and features

2022-11-06 Thread Amar Tumballi
Other than 2 large PRs (the CDC and zlib changes) we don't have any major
pending tasks. I would like to propose we keep to the proposed dates and go
ahead with the branching. If we merge these PRs later, we can rebase and
send them to the branch again.

Shwetha, can you please go ahead with the branching-related activities?

-Amar

On Mon, Oct 17, 2022 at 3:24 PM Xavi Hernandez  wrote:

> On Mon, Oct 17, 2022 at 10:40 AM Yaniv Kaul  wrote:
>
>>
>>
>> On Mon, Oct 17, 2022 at 8:41 AM Xavi Hernandez 
>> wrote:
>>
>>> On Mon, Oct 17, 2022 at 4:03 AM Amar Tumballi  wrote:
>>>
>>>> Here is my honest take on this one.
>>>>
>>>> On Tue, Oct 11, 2022 at 3:06 PM Shwetha Acharya 
>>>> wrote:
>>>>
>>>>> It is time to evaluate the fulfillment of our committed
>>>>> features/improvements and the feasibility of the proposed deadlines as 
>>>>> per Release
>>>>> 11 tracker <https://github.com/gluster/glusterfs/issues/3023>.
>>>>>
>>>>>
>>>>> Currently our timeline is as follows:
>>>>>
>>>>> Code Freeze: 31-Oct-2022
>>>>> RC : 30-Nov-2022
>>>>> GA : 10-JAN-2023
>>>>>
>>>>> *Please evaluate the following and reply to this thread if you want to
>>>>> convey anything important:*
>>>>>
>>>>> - Can we ensure to fulfill all the proposed requirements by the Code
>>>>> Freeze?
>>>>> - Do we need to add any more changes to accommodate any shortcomings
>>>>> or improvements?
>>>>> - Are we all good to go with the proposed timeline?
>>>>>
>>>>>
>>>> We have already delayed the release by more than a year, and that is a
>>>> significant delay for any project. If the changes we work on are not
>>>> released frequently, the feedback loop for the project is delayed, and so
>>>> are further improvements. So, regardless of any pending promised items,
>>>> we should go ahead with the code freeze and release on these dates.
>>>>
>>>> It is crucial for any projects / companies dependent on the project to be
>>>> able to plan accordingly. There may already be a few who have planned
>>>> their product releases around these dates. Let's keep the same dates, and
>>>> try to achieve the tasks we have planned within them.
>>>>
>>>
>>> I agree. Pending changes will need to be added to the next release. Doing
>>> it at the last minute is not safe for stability.
>>>
>>
>> Generally, +1.
>>
>> - Some info on my in-flight PRs:
>>
>> I have multiple independent patches for the flexible array member
>> conversion of different variables that are pending:
>> https://github.com/gluster/glusterfs/pull/3873
>> https://github.com/gluster/glusterfs/pull/3872
>> https://github.com/gluster/glusterfs/pull/3868  (this one is
>> particularly interesting, I hope it works!)
>> https://github.com/gluster/glusterfs/pull/3861
>> https://github.com/gluster/glusterfs/pull/3870 (already in review,
>> perhaps it can get in soon?)
>>
>
> I'm already looking at these and I expect they can be merged before the
> current code-freeze date.
>
>
>> I have this one for inode-related code, which got some attention
>> recently:
>> https://github.com/gluster/glusterfs/pull/3226
>>
>
> I'll try to review this one before code-freeze, but it requires much more
> care. Any help will be appreciated.
>
>
>>
>> I think this one is worthwhile looking at:
>> https://github.com/gluster/glusterfs/pull/3854
>>
>
> I'll try to take a look at this one also.
>
>
>> I wish we could get rid of old, unsupported versions:
>> https://github.com/gluster/glusterfs/pull/3544
>> (there's more to do, in different patches, but it's a start)
>>
>
> This one is mostly ok, but I think we can't release a new version without
> an explicit check for unsupported versions at least at the beginning, to
> avoid problems when users upgrade directly from 3.x to 11.x.
>
>
>> None of them is critical for release 11, though I'm unsure if I'll have
>> the ability to complete them later.
>>
>>
>> - The lack of official EL9 support (incl. testing infra) is regrettable,
>> and I think it is something worth fixing *before* release 11 - adding
>> sanity testing on newer OS releases, which will use io_uring for example,
>> is something we should definitely consider.
>>

Re: [Gluster-Maintainers] [Gluster-devel] Release 11: Revisiting our proposed timeline and features

2022-10-16 Thread Amar Tumballi
Here is my honest take on this one.

On Tue, Oct 11, 2022 at 3:06 PM Shwetha Acharya  wrote:

> It is time to evaluate the fulfillment of our committed
> features/improvements and the feasibility of the proposed deadlines as per 
> Release
> 11 tracker .
>
>
> Currently our timeline is as follows:
>
> Code Freeze: 31-Oct-2022
> RC : 30-Nov-2022
> GA : 10-JAN-2023
>
> *Please evaluate the following and reply to this thread if you want to
> convey anything important:*
>
> - Can we ensure to fulfill all the proposed requirements by the Code
> Freeze?
> - Do we need to add any more changes to accommodate any shortcomings or
> improvements?
> - Are we all good to go with the proposed timeline?
>
>
We have already delayed the release by more than a year, and that is a
significant delay for any project. If the changes we work on are not
released frequently, the feedback loop for the project is delayed, and so
are further improvements. So, regardless of any pending promised items, we
should go ahead with the code freeze and release on these dates.

It is crucial for any projects / companies dependent on the project to be
able to plan accordingly. There may already be a few who have planned their
product releases around these dates. Let's keep the same dates, and try to
achieve the tasks we have planned within them.

Regards,
Amar

> Regards,
> Shwetha
> ___
> maintainers mailing list
> maintainers@gluster.org
> https://lists.gluster.org/mailman/listinfo/maintainers
>


-- 
--
https://kadalu.io
Container Storage made easy!
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [IMP] Requesting a fast-tracked minor update for release-10

2022-09-16 Thread Amar Tumballi
On Fri, Sep 16, 2022 at 1:59 PM Shwetha Acharya  wrote:

> Hi Anoop,
>
> Release 10.3 was scheduled for August. However, as there were not enough
> PRs tagged for release 10, we had to defer the release for 3 more months.
> Considering that there is a basic functionality breakage scenario with
> Samba, and a considerable number of PRs already tagged and merged for
> release 10 at this point in time, I believe that from the Release and
> Build perspective we are okay to initiate the release soon.
>
>
I personally see the SMB, NFS-Ganesha and QEMU communities as very key
partner communities, and if there is a request like this, we should surely
make changes and releases to accommodate them.

This is the exact reason we have a provision for security-fix releases,
which needn't wait for a particular time, but can be triggered by a
particular event (in this case, the merging of all critical PRs).

It would be great to see the release happening soon.


> If there are any important PRs that are to be merged to release 10, I
> request maintainers to backport and tag them.
>
> We will wait for any further responses from the members in this list.
>
>
Also, as I mentioned in another discussion, we need to plan soon for the
release-11 branching and release date.

-Amar


> Regards,
> Shwetha
>
>
> On Fri, Sep 16, 2022 at 12:39 PM Anoop C S  wrote:
>
>> Hello everyone,
>>
>> With the GA of the latest major stable release (v4.17) [1] for Samba,
>> there is a basic functionality breakage (see below) with GlusterFS-backed
>> SMB shares via libgfapi. The necessary patches for libgfapi were already
>> identified and are now merged [2][3] into the 'release-10' branch as
>> backports from the 'devel' branch.
>>
>> Samba packages (v4.17) [4] for Fedora rawhide are already in the stable
>> repositories, and the same will soon be the case for other distributions.
>> Looking at the tentative date for the v10 update, we are concerned, as it
>> will be a long 2-month gap during which GlusterFS integration with the
>> latest (and greatest) Samba will stay broken.
>>
>> Thus we are requesting a soonish release to avoid such a long broken
>> window, during which a lot more screams about this breakage are expected.
>>
>>
>> Thanks,
>> Anoop C S.
>>
>>
>> Functionality breakage in a nutshell:
>> "As a consumer of libgfapi, Samba used to differentiate between file
>> and directory OPENs based on stat information (and thereby set the
>> O_DIRECTORY flag) acquired prior to the actual OPEN. Recent improvements
>> in Samba took out the need for O_DIRECTORY in the OPEN flags, causing a
>> failure (with EISDIR) when connecting to GlusterFS-backed shares via
>> libgfapi."
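To make the failure mode concrete, here is a minimal libgfapi sketch of the
pattern involved (an illustration only, not the actual Samba code; the
volume name, host and path are placeholders):

/* Sketch: opening a directory path over libgfapi. Samba used to stat() the
 * path first and add O_DIRECTORY for directories; once that flag is no
 * longer passed, glfs_open() on a directory fails with EISDIR on the
 * affected releases, which is what the backported patches address. */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <glusterfs/api/glfs.h>

int main(void)
{
    glfs_t *fs = glfs_new("myvol");              /* placeholder volume name */
    if (!fs)
        return 1;
    glfs_set_volfile_server(fs, "tcp", "gluster-host", 24007);
    if (glfs_init(fs) != 0)
        return 1;

    /* No O_DIRECTORY here: on affected versions this returns NULL with
     * errno == EISDIR when "/some/dir" is a directory. */
    glfs_fd_t *fd = glfs_open(fs, "/some/dir", O_RDONLY);
    if (!fd)
        fprintf(stderr, "glfs_open: %s\n", strerror(errno));
    else
        glfs_close(fd);

    glfs_fini(fs);
    return 0;
}

It should build against the glusterfs-api package (pkg-config glusterfs-api).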
>>
>> [1] https://www.samba.org/samba/history/samba-4.17.0.html
>> [2] https://github.com/gluster/glusterfs/pull/3755
>> [3] https://github.com/gluster/glusterfs/pull/3756
>> [4] https://bodhi.fedoraproject.org/updates/FEDORA-2022-4555909843
>> ___
>> maintainers mailing list
>> maintainers@gluster.org
>> https://lists.gluster.org/mailman/listinfo/maintainers
>>
>> ___
> maintainers mailing list
> maintainers@gluster.org
> https://lists.gluster.org/mailman/listinfo/maintainers
>


-- 
--
https://kadalu.io
Container Storage made easy!
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Build failed in Jenkins: centos8-regression #182

2021-01-11 Thread Amar Tumballi
On Mon, Jan 11, 2021 at 12:47 PM Deepshikha Khandelwal 
wrote:

> Can someone please take a look at this failing
> tests/bugs/nfs/bug-1053579.t test.
>
>
23:38:59  (508 / 782) [18:08:59] Running tests in file ./tests/bugs/nfs/bug-1053579.t
23:42:20  Logs preserved in tarball bug-1053579-iteration-1.tar
23:42:20  ./tests/bugs/nfs/bug-1053579.t timed out after 200 seconds
23:42:20  ./tests/bugs/nfs/bug-1053579.t: bad status 124
23:42:20  * REGRESSION FAILED *
23:42:20  * Retrying failed tests in case we got some spurious failures *

Looks like we can give it a try with a little extra time? Say, make it
300 seconds and check?
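Something like this at the top of the .t file should be enough to try it (a
sketch; it assumes the run-tests harness picks up a SCRIPT_TIMEOUT= line
from the test file, as some other tests already do):

#!/bin/bash
# tests/bugs/nfs/bug-1053579.t - proposed tweak only, rest of the test unchanged
. $(dirname $0)/../../include.rc
. $(dirname $0)/../../volume.rc

# Give this test more headroom than the default 200 seconds.
SCRIPT_TIMEOUT=300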


> On Mon, Jan 11, 2021 at 1:08 AM  wrote:
>
>> See <
>> https://build.gluster.org/job/centos8-regression/182/display/redirect>
>>
>> Changes:
>>
>>
>> --
>> [...truncated 4.14 MB...]
>> ./tests/basic/afr/ta-read.t  -  9 second
>> ./tests/basic/afr/root-squash-self-heal.t  -  9 second
>> ./tests/basic/afr/halo.t  -  9 second
>> ./tests/basic/afr/afr-up.t  -  9 second
>> ./tests/features/readdir-ahead.t  -  8 second
>> ./tests/bugs/upcall/bug-1369430.t  -  8 second
>> ./tests/bugs/transport/bug-873367.t  -  8 second
>> ./tests/bugs/snapshot/bug-1064768.t  -  8 second
>> ./tests/bugs/shard/shard-inode-refcount-test.t  -  8 second
>> ./tests/bugs/replicate/bug-986905.t  -  8 second
>> ./tests/bugs/quota/bug-1243798.t  -  8 second
>> ./tests/bugs/quota/bug-1104692.t  -  8 second
>> ./tests/bugs/protocol/bug-1321578.t  -  8 second
>> ./tests/bugs/posix/bug-1034716.t  -  8 second
>> ./tests/bugs/nfs/bug-1143880-fix-gNFSd-auth-crash.t  -  8 second
>> ./tests/bugs/io-cache/bug-858242.t  -  8 second
>> ./tests/bugs/glusterfs/bug-861015-index.t  -  8 second
>> ./tests/bugs/glusterd/bug-948729/bug-948729-force.t  -  8 second
>> ./tests/bugs/fuse/bug-985074.t  -  8 second
>> ./tests/bugs/distribute/bug-1088231.t  -  8 second
>> ./tests/bugs/core/bug-1699025-brick-mux-detach-brick-fd-issue.t  -  8
>> second
>> ./tests/bugs/bitrot/bug-1229134-bitd-not-support-vol-set.t  -  8 second
>> ./tests/bugs/bitrot/1207029-bitrot-daemon-should-start-on-valid-node.t
>> -  8 second
>> ./tests/bitrot/br-stub.t  -  8 second
>> ./tests/basic/md-cache/bug-1317785.t  -  8 second
>> ./tests/basic/glusterd/arbiter-volume-probe.t  -  8 second
>> ./tests/basic/ctime/ctime-ec-heal.t  -  8 second
>> ./tests/basic/changelog/changelog-rename.t  -  8 second
>> ./tests/basic/afr/ta-write-on-bad-brick.t  -  8 second
>> ./tests/basic/afr/ta-shd.t  -  8 second
>> ./tests/basic/afr/stale-file-lookup.t  -  8 second
>> ./tests/basic/afr/split-brain-open.t  -  8 second
>> ./tests/basic/afr/gfid-heal.t  -  8 second
>> ./tests/basic/afr/afr-read-hash-mode.t  -  8 second
>> ./tests/000-flaky/bugs_glusterd_quorum-value-check.t  -  8 second
>> ./tests/bugs/shard/bug-1260637.t  -  7 second
>> ./tests/bugs/shard/bug-1259651.t  -  7 second
>> ./tests/bugs/replicate/mdata-heal-no-xattrs.t  -  7 second
>> ./tests/bugs/replicate/bug-1626994-info-split-brain.t  -  7 second
>> ./tests/bugs/replicate/bug-1561129-enospc.t  -  7 second
>> ./tests/bugs/replicate/bug-1221481-allow-fops-on-dir-split-brain.t  -  7
>> second
>> ./tests/bugs/quota/bug-1287996.t  -  7 second
>> ./tests/bugs/posix/bug-1175711.t  -  7 second
>> ./tests/bugs/nfs/bug-877885.t  -  7 second
>> ./tests/bugs/md-cache/setxattr-prepoststat.t  -  7 second
>> ./tests/bugs/md-cache/bug-1211863_unlink.t  -  7 second
>> ./tests/bugs/md-cache/afr-stale-read.t  -  7 second
>> ./tests/bugs/glusterfs/bug-872923.t  -  7 second
>> ./tests/bugs/glusterfs/bug-848251.t  -  7 second
>> ./tests/bugs/glusterd/bug-948729/bug-948729.t  -  7 second
>> ./tests/bugs/glusterd/bug-1482906-peer-file-blank-line.t  -  7 second
>> ./tests/bugs/ec/bug-1227869.t  -  7 second
>> ./tests/bugs/ec/bug-1161621.t  -  7 second
>> ./tests/bugs/distribute/bug-912564.t  -  7 second
>> ./tests/bugs/distribute/bug-1368012.t  -  7 second
>> ./tests/bugs/core/bug-1168803-snapd-option-validation-fix.t  -  7 second
>> ./tests/bugs/bug-1371806_1.t  -  7 second
>> ./tests/bugs/bug-1258069.t  -  7 second
>> ./tests/bitrot/bug-1221914.t  -  7 second
>> ./tests/basic/gfapi/libgfapi-fini-hang.t  -  7 second
>> ./tests/basic/fencing/fencing-crash-conistency.t  -  7 second
>> ./tests/basic/ec/nfs.t  -  7 second
>> ./tests/basic/ec/ec-anonymous-fd.t  -  7 second
>> ./tests/basic/ctime/ctime-utimesat.t  -  7 second
>> ./tests/basic/afr/ta.t  -  7 second
>> ./tests/basic/afr/arbiter-remove-brick.t  -  7 second
>> ./tests/bugs/upcall/bug-upcall-stat.t  -  6 second
>> ./tests/bugs/upcall/bug-1422776.t  -  6 second
>> ./tests/bugs/snapshot/bug-1178079.t  -  6 second
>> ./tests/bugs/shard/issue-1243.t  -  6 second
>> ./tests/bugs/shard/bug-1342298.t  -  6 second
>> 

Re: [Gluster-Maintainers] [Gluster-infra] [Gluster-devel] ACTION REQUESTED: Migrate your glusterfs patches from Gerrit to GitHub

2020-10-12 Thread Amar Tumballi
On Mon, 12 Oct, 2020, 8:08 pm sankarshan, 
wrote:

> It is perhaps on Amar to send the PR with the changes - but that would
> kind of make the approval/merge process a bit muddled? How about a PR
> being sent for review and then merged in?
>
> On Mon, 12 Oct 2020 at 19:22, Kaleb Keithley  wrote:
> >
> >
> >
> > On Thu, Oct 8, 2020 at 8:10 AM Kaleb Keithley 
> wrote:
> >>
> >> On Wed, Oct 7, 2020 at 7:33 AM Sunil Kumar Heggodu Gopala Acharya <
> shegg...@redhat.com> wrote:
> >>>
> >>>
> >>> Regards,
> >>>
> >>> Sunil kumar Acharya
> >>>
> >>>
> >>>
> >>>
> >>> On Wed, Oct 7, 2020 at 4:54 PM Kaleb Keithley 
> wrote:
> 
> 
> 
>  On Wed, Oct 7, 2020 at 5:46 AM Deepshikha Khandelwal <
> dkhan...@redhat.com> wrote:
> >
> >
> > - The "regression" tests would be triggered by a comment "/run
> regression" from anyone in the gluster-maintainers[4] github group. To run
> full regression, maintainers need to comment "/run full regression"
> >
> > [4] https://github.com/orgs/gluster/teams/gluster-maintainers
> 
> 
>  There are a lot of people in that group that haven't been involved
> with Gluster for a long time.
> >>>
> >>> Also there are new contributors, time to update!
> >>
> >>
> >> Who is going to do this? I don't have the necessary privs.
> >
> >
> > Anyone?
>

I will volunteer to do it, to match the group with the content of the
MAINTAINERS file. IMO, we should also fix non-existing emails in the
MAINTAINERS file.

Will take a look sometime tomorrow!

Regards
Amar


> >
> > --
> >
> > Kaleb
> > ___
> > maintainers mailing list
> > maintainers@gluster.org
> > https://lists.gluster.org/mailman/listinfo/maintainers
>
>
>
> --
> sankarshan mukhopadhyay
> 
> ___
> Gluster-infra mailing list
> gluster-in...@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-infra
>
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] ACTION REQUESTED: Migrate your glusterfs patches from Gerrit to GitHub

2020-10-07 Thread Amar Tumballi
Thanks for getting this done, Deepshika and Michael, and Infra team members.

Thanks everyone for valuable feedback during this process!

I personally hope this change will help the glusterfs project attract more
developers, and that we can engage more closely with our developers!

Note that this may result in weeks of confusion, questions, and bugs in the
workflow! We are trying to tune the workflow accordingly! Please try it
out and give feedback! Once we start using it we may figure out new things,
so jump in and give it a try!

In case of any issues, raise a GitHub issue, or find some of us on
gluster.slack.com

Regards,
Amar



On Wed, Oct 7, 2020 at 3:16 PM Deepshikha Khandelwal 
wrote:

> Hi folks,
>
> We have initiated the migration process today. All the patch owners are
> requested to move their existing patches from Gerrit[1] to Github[2].
>
> The changes we brought in with this migration:
>
> - The 'devel' branch[3] is the new default branch on GitHub to get away
> from master/slave language.
>
> - This 'devel' branch is the result of merging the current branch and the
> historic repository, thus requiring a new clone. It helps in tracing any
> change properly back to its origin, to understand the intentions behind
> the code.
>
> - We have switched the glusterfs repo on gerrit to readonly state. So you
> will not be able to merge the patches on Gerrit from now onwards. Though we
> are not deprecating gerrit right now, we will work with the remaining
> users/projects to move to github as well.
>
> - Changes in the development workflow:
> - All the required smoke tests would be auto-triggered on submitting a
> PR.
> - Developers can retrigger the smoke tests using "/recheck smoke" as
> comment.
> - The "regression" tests would be triggered by a comment "/run
> regression" from anyone in the gluster-maintainers[4] github group. To run
> full regression, maintainers need to comment "/run full regression"
>
> For more information you can go through the contribution guidelines listed
> in CONTRIBUTING.md[5]
>
> [1] https://review.gluster.org/#/q/status:open+project:glusterfs
> [2] https://github.com/gluster/glusterfs
> [3] https://github.com/gluster/glusterfs/tree/devel
> [4] https://github.com/orgs/gluster/teams/gluster-maintainers
> [5] https://github.com/gluster/glusterfs/blob/master/CONTRIBUTING.md
>
> Please reach out to us if you have any queries.
>
> Thanks,
> Gluster-infra team
>


-- 
--
https://kadalu.io
Container Storage made easy!
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] Flaky Regression tests again?

2020-08-15 Thread Amar Tumballi
If I look at the recent regression runs (
https://build.gluster.org/job/centos7-regression/), there is more than 50%
failure in tests.

At least 90% of the failures are not due to the patch itself. Considering
that regression tests are very critical for our patches to get merged, and
take almost 6-7 hours nowadays to complete, how can we make sure we are
passing regression with 100% certainty?

Again, out of these, there are only a few tests which keep failing. Should
we revisit the tests and see why they are failing? Or should we mark them
with a 'good if it passes, but don't fail regression if the test fails'
condition?

Some tests I have listed here from recent failures:

tests/bugs/core/multiplex-limit-issue-151.t
tests/bugs/distribute/bug-1122443.t +++
tests/bugs/distribute/bug-1117851.t
tests/bugs/glusterd/bug-857330/normal.t +
tests/basic/mount-nfs-auth.t +
tests/basic/changelog/changelog-snapshot.t
tests/basic/afr/split-brain-favorite-child-policy.t
tests/basic/distribute/rebal-all-nodes-migrate.t
tests/bugs/glusterd/quorum-value-check.t
tests/features/lock-migration/lkmigration-set-option.t
tests/bugs/nfs/bug-1116503.t
tests/basic/ec/ec-quorum-count-partial-failure.t

Considering these are just 12 of the 750+ tests we run, should we even
consider marking them as bad until they are fixed to be 100% consistent?
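If we go that route, one low-effort way to do it could be the following (a
sketch; it assumes a directory like tests/000-flaky/ whose results do not
gate the regression vote, and uses the usual flattened naming):

# Park a known-flaky test so it still runs but does not block patches.
git mv tests/bugs/distribute/bug-1122443.t \
       tests/000-flaky/bugs_distribute_bug-1122443.t
git commit -s -m "tests: mark bug-1122443.t as flaky until it is made consistent"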

Any thoughts on how we should go ahead?

Regards,
Amar

(+) indicates a count, so the more '+' you see against a file, the more
times it failed.
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-users] [Gluster-devel] Announcing Gluster release 7.7

2020-08-03 Thread Amar Tumballi
Hi Hu Bert,

Thanks for letting us know about the improvements.

Now, to check on the possible reason, the question is: from which version
did you upgrade to 7.7?

Thanks


On Mon, Aug 3, 2020 at 12:16 PM Hu Bert  wrote:

> Hi there,
>
> just wanted to say thanks to all the developers, maintainers etc. This
> release (7) has brought us a small but nice performance improvement.
> Utilization and IOs per disk decreased, latency dropped. See attached
> images.
>
> I read the release notes but couldn't identify the specific
> changes/features for this improvement. Maybe someone could point to
> them - but no hurry... :-)
>
>
> Best regards,
> Hubert
>
> On Wed, 22 Jul 2020 at 18:27, Rinku Kothiya <rkoth...@redhat.com> wrote:
> >
> > Hi,
> >
> > The Gluster community is pleased to announce the release of Gluster7.7
> (packages available at [1]).
> > Release notes for the release can be found at [2].
> >
> > Major changes, features and limitations addressed in this release:
> > None
> >
> > Please Note: Some of the packages are unavailable and we are working on
> it. We will release them soon.
> >
> > Thanks,
> > Gluster community
> >
> > References:
> >
> > [1] Packages for 7.7:
> > https://download.gluster.org/pub/gluster/glusterfs/7/7.7/
> >
> > [2] Release notes for 7.7:
> > https://docs.gluster.org/en/latest/release-notes/7.7/
> > 
> >
> >
> >
> > Community Meeting Calendar:
> >
> > Schedule -
> > Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> > Bridge: https://bluejeans.com/441850968
> >
> > Gluster-users mailing list
> > gluster-us...@gluster.org
> > https://lists.gluster.org/mailman/listinfo/gluster-users
> 
>
>
>
> Community Meeting Calendar:
>
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://bluejeans.com/441850968
>
> Gluster-users mailing list
> gluster-us...@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>


-- 
--
https://kadalu.io
Container Storage made easy!
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] Updating the repository's actual 'ACTIVE'ness status

2020-03-25 Thread Amar Tumballi
Hi all,

We have 101 repositories in the gluster org on GitHub. Only a handful of
them are being actively managed and progressing.

After seeing https://github.com/gluster/gluster-kubernetes/issues/644, I
feel we should at least keep the status of each project up-to-date in its
repository, so that users can move on to other repos if one is not
maintained. That saves time for them, and they wouldn't form a wrong
opinion of the Gluster project. But if they spend time on setting it up,
and later find that it's not working and is not maintained, they would feel
bad about the overall project itself.

So my request to all repository maintainers is to mark such a repository as
'Archived', and to update the README (or description) to reflect the same.

In any case, in the first week of April we should actively mark repositories
as inactive if no activity is found in the last 15+ months. For other repos,
maintainers can take appropriate action.

Regards,
Amar
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] Fwd: [gluster/glusterfs] Disperse volume only showing one third of the available space on client (#1131)

2020-03-25 Thread Amar Tumballi
I am thinking of disabling this option altogether. This was developed to
make sure we have better reporting when the same disk gets shared by
multiple volumes (it was an option discussed for container use-cases and
the +1 scaling feature).

But considering the number of bugs and the confusion it brought in, and as
we are a bit far from +1 scaling, how about disabling this option?

-Amar


-- Forwarded message -
From: M. 
Date: Wed, Mar 25, 2020 at 12:08 PM
Subject: Re: [gluster/glusterfs] Disperse volume only showing one third of
the available space on client (#1131)
To: gluster/glusterfs 
Cc: Subscribed 


Hello and thanks for your help. Enclosed you will find the
shared-brick-count settings across all bricks across all nodes (with the
correct names this time):

Node 1:

/var/lib/glusterd/vols/glusterpoc/glusterpoc.glusterpoc01.mnt-gluster-sdb-brick.vol:
   option shared-brick-count 3
/var/lib/glusterd/vols/glusterpoc/glusterpoc.glusterpoc01.mnt-gluster-sdc-brick.vol:
   option shared-brick-count 3
/var/lib/glusterd/vols/glusterpoc/glusterpoc.glusterpoc01.mnt-gluster-sdd-brick.vol:
   option shared-brick-count 3
/var/lib/glusterd/vols/glusterpoc/glusterpoc.glusterpoc02.mnt-gluster-sdb-brick.vol:
   option shared-brick-count 0
/var/lib/glusterd/vols/glusterpoc/glusterpoc.glusterpoc02.mnt-gluster-sdc-brick.vol:
   option shared-brick-count 0
/var/lib/glusterd/vols/glusterpoc/glusterpoc.glusterpoc02.mnt-gluster-sdd-brick.vol:
   option shared-brick-count 0
/var/lib/glusterd/vols/glusterpoc/glusterpoc.glusterpoc03.mnt-gluster-sdb-brick.vol:
   option shared-brick-count 0
/var/lib/glusterd/vols/glusterpoc/glusterpoc.glusterpoc03.mnt-gluster-sdc-brick.vol:
   option shared-brick-count 0
/var/lib/glusterd/vols/glusterpoc/glusterpoc.glusterpoc03.mnt-gluster-sde-brick.vol:
   option shared-brick-count 0

Node 2:

/var/lib/glusterd/vols/glusterpoc/glusterpoc.glusterpoc01.mnt-gluster-sdb-brick.vol:
   option shared-brick-count 0
/var/lib/glusterd/vols/glusterpoc/glusterpoc.glusterpoc01.mnt-gluster-sdc-brick.vol:
   option shared-brick-count 0
/var/lib/glusterd/vols/glusterpoc/glusterpoc.glusterpoc01.mnt-gluster-sdd-brick.vol:
   option shared-brick-count 0
/var/lib/glusterd/vols/glusterpoc/glusterpoc.glusterpoc02.mnt-gluster-sdb-brick.vol:
   option shared-brick-count 2
/var/lib/glusterd/vols/glusterpoc/glusterpoc.glusterpoc02.mnt-gluster-sdc-brick.vol:
   option shared-brick-count 2
/var/lib/glusterd/vols/glusterpoc/glusterpoc.glusterpoc02.mnt-gluster-sdd-brick.vol:
   option shared-brick-count 1
/var/lib/glusterd/vols/glusterpoc/glusterpoc.glusterpoc03.mnt-gluster-sdb-brick.vol:
   option shared-brick-count 0
/var/lib/glusterd/vols/glusterpoc/glusterpoc.glusterpoc03.mnt-gluster-sdc-brick.vol:
   option shared-brick-count 0
/var/lib/glusterd/vols/glusterpoc/glusterpoc.glusterpoc03.mnt-gluster-sde-brick.vol:
   option shared-brick-count 0

Node 3:

/var/lib/glusterd/vols/glusterpoc/glusterpoc.glusterpoc01.mnt-gluster-sdb-brick.vol:
   option shared-brick-count 0
/var/lib/glusterd/vols/glusterpoc/glusterpoc.glusterpoc01.mnt-gluster-sdc-brick.vol:
   option shared-brick-count 0
/var/lib/glusterd/vols/glusterpoc/glusterpoc.glusterpoc01.mnt-gluster-sdd-brick.vol:
   option shared-brick-count 0
/var/lib/glusterd/vols/glusterpoc/glusterpoc.glusterpoc02.mnt-gluster-sdb-brick.vol:
   option shared-brick-count 0
/var/lib/glusterd/vols/glusterpoc/glusterpoc.glusterpoc02.mnt-gluster-sdc-brick.vol:
   option shared-brick-count 0
/var/lib/glusterd/vols/glusterpoc/glusterpoc.glusterpoc02.mnt-gluster-sdd-brick.vol:
   option shared-brick-count 0
/var/lib/glusterd/vols/glusterpoc/glusterpoc.glusterpoc03.mnt-gluster-sdb-brick.vol:
   option shared-brick-count 1
/var/lib/glusterd/vols/glusterpoc/glusterpoc.glusterpoc03.mnt-gluster-sdc-brick.vol:
   option shared-brick-count 1
/var/lib/glusterd/vols/glusterpoc/glusterpoc.glusterpoc03.mnt-gluster-sde-brick.vol:
   option shared-brick-count 1
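For context on the numbers above: shared-brick-count is meant to divide a
brick's reported size when several bricks live on one filesystem, so a value
of 3 on bricks that are actually on separate disks makes the client's df
show roughly one third of the real capacity (the expected value here would
be 1 for each brick local to the node). The listings can be reproduced with
a grep such as (a sketch; paths assume the default glusterd working
directory):

grep -n 'option shared-brick-count' /var/lib/glusterd/vols/*/*.vol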



-- 
--
https://kadalu.io
Container Storage made easy!
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [automated-testing] seeking an update on the plans around automated tests and test framework for Gluster

2020-02-05 Thread Amar Tumballi
Hi Sankarshan, and others,

Some updates inline.

On Tue, Feb 4, 2020 at 12:11 PM sankarshan  wrote:

> This is a good set of updates. If we wanted to discuss the impact of
> this activity in terms of how they have been improving the user
> experience for (a) specific workflows or, (b) specific workloads - how
> can we measure the robustness of the current test library (or, batches
> of tests) in Glusto? I'd like to see a conversation - perhaps at an
> upcoming Community Meeting - from this perspective. The kind of
> workloads for which Gluster is used is more or less well known by now.
> So, being able to validate the readiness and extensive consumption of
> Glusto tests from that perspective would be good to see.
>
>
Agree! I do see a lot of activity on the glusto-tests repository, but I
don't see where to find the runs, or the results of the runs. Also, I
haven't seen any bugs/issues raised from the testing using Glusto. While
the number of patches is great, the value we need is surely the ability to
find use-case-specific bugs which we couldn't catch through the regression
framework in the glusterfs repository.


> On Tue, 4 Feb 2020 at 11:47, Bala Konda Reddy Mekala 
> wrote:
> >
> > Hi Sankarshan,
> > Glusto as a framework is definitely matured and glusto-tests
> > where it is closely related to Glusterfs.
> > Enhancements to glusto-tests is an on-going process.
> >
> > Below are the details of work being done in the last three months.
> >
> > Currently glusto-tests is compatible with Python 3; a total
> > of 28 PRs were sent for this [1].
> > A total of 58 PRs were merged; among them 20+
> > were new test cases and the rest are library fixes. [2]
> > Libraries for geo-rep and brick-mux are in review.
> > Optimization of the existing code to reduce test time
> > is being carried out, and a few patches were merged.
> > NFS-Ganesha tests were tested and merged as part of
> > glusto-tests.
> > Now users can run the tests using tox.
> > New test cases and library fixes are being sent and are in
> > review. [3]
> >
> > Regards,
> > Bala
> > [1]
> https://review.gluster.org/#/q/status:merged+project:glusto-tests+topic:py2to3
> [2] https://review.gluster.org/#/q/status:merged+project:glusto-tests,75
> > [3]https://review.gluster.org/#/q/project:glusto-tests++status:open
> >
> >
> > On Mon, Feb 3, 2020 at 9:40 PM sankarshan  wrote:
> >>
> >> I have not seen any meaningful updates about the automated tests and
> >> testing framework in the recent times. I'd like to understand whether
> >> forward looking plans towards enhancements and sustaining a viable
> >> test framework (in this case, Glusto) have been discussed?
> >>
> >> I'd like to understand whether the Glusto framework has evolved and
> >> matured over the period of time it has been part of Gluster and the
> >> maintenance lifecycle and governance around the project.
> >>
> >> /s
>
>
While it is not directly related, Aravinda did propose a testing framework
called Binnacle (a project to test kadalu.io), which by design can in
future work for glusterfs testing too. As we don't have anything more than
a proposal yet on Binnacle, it wouldn't be right to say it is ready to be
considered an alternative; our idea is that, through Binnacle, we can test
**any** distributed project. It would be good for people to keep a watch!

-Amar


> --
> sankars...@kadalu.io | TZ: UTC+0530
> kadalu.io : Making it easy to provision storage in k8s!
>
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] Happy Holidays and New Calendar Year for you all

2019-12-22 Thread Amar Tumballi
All,

I want to wish you all, and your families, happy holidays and a happy new
year (2020). May the new year bring you new hope, growth and happiness.

I would like your continued enthusiastic participation in Gluster project
development and improvement. Let's plan to make 2020 a great year for the
Gluster project.

Thank you and regards,
Amar
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] Expect delay in reviews and merges (from me)

2019-12-01 Thread Amar Tumballi
Hi all,

I am taking some time off this month, so expect delays in reviews and
merges. I should be able to respond to emails (if any).

Continue with merging patches in your components as you deem fit. You
can write an email to me directly for anything else.

Please consider sending your proposals for Release-8 in the meantime; let's
all try to make good enhancements (stability, tools, etc.) for the release.
Regards,
Amar
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-devel] Modifying gluster's logging mechanism

2019-11-26 Thread Amar Tumballi
Hi Barak,

My replies inline.

On Thu, Nov 21, 2019 at 6:34 PM Barak Sason Rofman 
wrote:

> Hello Gluster community,
>
> My name is Barak and I’ve joined RH gluster development in August.
> Shortly after my arrival, I’ve identified a potential problem with
> gluster’s logging mechanism and I’d like to bring the matter up for
> discussion.
>
> The general concept of the current mechanism is that every worker thread
> that needs to log a message has to contend for a mutex which guards the log
> file, write the message and, flush the data and then release the mutex.
> I see two design / implementation problems with that mechanism:
>
>    1. The mutex that guards the log file is likely under constant
>       contention.
>    2. The fact that each worker thread performs the IO by itself, thus
>       slowing down its "real" work.
>
>
Both of the above points are true, and can have an impact when there is a
lot of logging. While some of us would say we knew the impact of it, we had
not picked this up as a priority item to fix, for the reasons below.

* First of all, when we looked at logging very early in the project's life,
our idea was based mostly on kernel logs (/var/log/messages). We decided
that, as a file system is very active with I/O and should run for years
together without failing, there should be NO log messages when the system
is healthy, which should be 99%+ of the time.

* Now, if there are no logs when everything is healthy, and most things are
healthy 99% of the time, naturally the focus was not the 'performance' of
the logging infra, but its correctness. That is where the strict ordering
through locks, to preserve the timestamps of logs and keep them organized,
came from.


> Initial tests, done by *removing logging from the regression testing,
> shows an improvement of about 20% in run time*. This indicates we’re
> taking a pretty heavy performance hit just because of the logging activity.
>
>
That is an interesting observation. For this alone, can we have an option
to disable all logging during regression? That would speed things up for
normal runs immediately.
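A minimal sketch of what that could look like with the existing knobs (the
diagnostics.* options are the standard per-volume log-level options;
'patchy' is just the usual test volume name, and whether the regression
harness should set this globally is left open):

# Silence client- and brick-side logging for a test volume.
gluster volume set patchy diagnostics.client-log-level NONE
gluster volume set patchy diagnostics.brick-log-level NONE

# Daemons started directly by tests could be launched with a low log level.
glusterd --log-level=ERROR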


> In addition to these problems, the logging module is due for an upgrade:
>
>1.
>
>There are dozens of APIs in the logger, much of them are deprecated -
>this makes it very hard for new developers to keep evolving the project.
>2.
>
>One of the key points for Gluster-X, presented in October at
>Bangalore, is the switch to a structured logging all across gluster.
>
>
>
+1


> Given these points, I believe we’re in a position that allows us to
> upgrade the logging mechanism by both switching to structured logging
> across the project AND replacing the logging system itself, thus “killing
> two birds with one stone”.
>
> Moreover, if the upgrade is successful, the new logger mechanism might be
> adopted by other teams in Red Hat, which lead to uniform logging activity
> across different products.
>
>
This, in my opinion, is a good reason to undertake this activity, mainly
because our logging infra should be similar to that of other tools, and one
shouldn't need a learning curve to understand Gluster's logging.


> I’d like to propose a logging utility I’ve been working on for the past
> few weeks.
> This project is still a work in progress (and still much work needs to be
> done in it), but I’d like to bring this matter up now so if the community
> will want to advance on that front, we could collaborate and shape the
> logger to best suit the community’s needs.
>
> An overview of the system:
>
> The logger provides several (number and size are user-defined)
> pre-allocated buffers which threads can 'register' to and receive a private
> buffer. In addition, a single, shared buffer is also pre-allocated (size is
> user-defined). The number of buffers and their size is modifiable at
> runtime (not yet implemented).
>
> Worker threads write messages in one of 3 ways that will be described
> next, and an internal logger thread constantly iterates over the existing
> buffers and drains the data to the log file.
>
> As all allocations are done at the initialization stage, no special
> treatment is needed for "out of memory" cases.
> The following writing levels exist:
>
>    1. Level 1 - Lockless writing: Lockless writing is achieved by assigning
>       each thread a private ring buffer. A worker thread writes to that
>       buffer and the logger thread drains that buffer into the log file.
>
> In case the private ring buffer is full and not yet drained, or in case
> the worker thread has not registered for a private buffer, we fall back to
> the following writing methods:
>
>    2. Level 2 - Shared buffer writing: The worker thread will write its
>       data into a buffer that's shared across all threads. This is done in
>       a synchronized manner.
>
> In case the private ring buffer is full and not yet drained AND the shared
> ring buffer is full and not yet drained, or in case the worker thread has
> not registered for a 
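For illustration, a minimal sketch of the "Level 1" idea described above
(this is not code from the proposal; the structure, names and sizes are
assumptions): each worker owns a single-producer/single-consumer ring, and
one logger thread drains all rings into the log file.

/* Sketch of a per-thread lockless log path: one SPSC ring per worker,
 * drained by a single logger thread. Illustrative only. */
#include <stdatomic.h>
#include <stdio.h>
#include <string.h>

#define RING_SLOTS 1024
#define MSG_LEN 256

typedef struct {
    char msgs[RING_SLOTS][MSG_LEN];
    _Atomic unsigned head; /* next slot the worker fills   */
    _Atomic unsigned tail; /* next slot the logger drains  */
} log_ring_t;

/* Worker side: returns -1 if the ring is full, i.e. the caller would
 * fall back to the shared ("Level 2") buffer. */
static int ring_log(log_ring_t *r, const char *msg)
{
    unsigned head = atomic_load_explicit(&r->head, memory_order_relaxed);
    unsigned tail = atomic_load_explicit(&r->tail, memory_order_acquire);

    if (head - tail == RING_SLOTS)
        return -1; /* full: fall back to the shared buffer */

    strncpy(r->msgs[head % RING_SLOTS], msg, MSG_LEN - 1);
    r->msgs[head % RING_SLOTS][MSG_LEN - 1] = '\0';
    atomic_store_explicit(&r->head, head + 1, memory_order_release);
    return 0;
}

/* Logger-thread side: drain whatever is pending in one ring. */
static void ring_drain(log_ring_t *r, FILE *logfile)
{
    unsigned tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
    unsigned head = atomic_load_explicit(&r->head, memory_order_acquire);

    while (tail != head) {
        fprintf(logfile, "%s\n", r->msgs[tail % RING_SLOTS]);
        tail++;
    }
    atomic_store_explicit(&r->tail, tail, memory_order_release);
    fflush(logfile);
}

The ordering/timestamp concern raised earlier would still have to be handled
by the drainer (for example by tagging each message with a timestamp at
write time), which is worth keeping in mind when comparing this with the
current lock-based approach.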

Re: [Gluster-Maintainers] Proposal to change gNFS status

2019-11-25 Thread Amar Tumballi
Responses inline.

On Fri, Nov 22, 2019 at 6:04 PM Niels de Vos  wrote:

> On Thu, Nov 21, 2019 at 04:01:23PM +0530, Amar Tumballi wrote:
> > Hi All,
> >
> > As per the discussion on https://review.gluster.org/23645, recently we
> > changed the status of gNFS (gluster's native NFSv3 support) feature to
> > 'Depricated / Orphan' state. (ref:
> > https://github.com/gluster/glusterfs/blob/master/MAINTAINERS#L185..L189
> ).
> > With this email, I am proposing to change the status again to 'Odd Fixes'
> > (ref: https://github.com/gluster/glusterfs/blob/master/MAINTAINERS#L22)
>
> I'd recommend against re-surrecting gNFS. The server is not very
> extensible and adding new features is pretty tricky without breaking
> other (mostly undocumented) use-cases.


I too am against adding features/enhancements to gNFS. It doesn't make
sense. We are removing features from glusterfs itself; adding features to
gNFS after 3 years wouldn't even be feasible.

I guess you missed the intention of my proposal. It was not about
'resurrecting' gNFS to 'Maintained' or 'Supported' status. It was about
taking it out of 'Orphan' status, because there are still users who are
'happy' with it. Hence I picked the status 'Odd Fixes' (as per the
MAINTAINERS file, there was nothing else which would convey the meaning of
*'this feature is still shipped, but we are not adding any features and are
not actively maintaining it'*).



> Eventhough NFSv3 is stateless,
> the actual usage of NFSv3, mounting and locking is definitely not. The
> server keeps track of which clients have an export mounted, and which
> clients received grants for locks. These things are currently not very
> reliable in combination with high-availability. And there is also the by
> default disabled duplicate-reply-cache (DRC) that has always been very
> buggy (and neither cluster-aware).
>
> If we enable gNFS by default again, we're sending out an incorrect
> message to our users. gNFS works fine for certain workloads and
> environments, but it should not be advertised as 'clustered NFS'.
>
>
I wasn't talking about, or intending to go, that route. I am not even
talking about making gNFS enabled by default. That would take away our
focus from glusterfs and the different things we can solve with Gluster
alone. I am not sure why my email was taken to mean there would be a focus
on gNFS.


> Instead of going the gNFS route, I suggest to make it easier to deploy
> NFS-Ganesha as that is a more featured, well maintained and can be
> configured for much more reliable high-availability than gNFS.
>
>
I believe this is critical, and we surely need to work on it. But it
doesn't come in the way of doing 1-2 bug fixes in gNFS (if any) in a
release.


> If someone really wants to maintain gNFS, I won't object much, but they
> should know that previous maintainers have had many difficulties just
> keeping it working well while other components evolved. Addressing some
> of the bugs/limitations will be extremely difficult and may require
> large rewrites of parts of gNFS.
>

Yes, that awareness is critical, and it should exist.


> Until now, I have not read convincing arguments in this thread that gNFS
> is stable enough to be consumed by anyone in the community. Users should
> be aware of its limitations and be careful what workloads to run on it.
>

In this thread, Xie mentioned that he has been managing gNFS on 1000+
servers with 2000+ clients (more than 24 gluster clusters overall) for more
than 2 years now. If that doesn't sound like 'stability', I am not sure
what does.

I agree that users should be careful about the proper use cases for gNFS. I
am even open to saying we should add a warning or console log in the
gluster CLI when 'gluster volume set <volname> nfs.disable false' is
performed, saying it is advised to move to an NFS-Ganesha-based approach,
and give a URL link in that message. But the whole point is: when we make a
release, we should still ship gNFS, as there are some users who are very
happy with gNFS, and their use cases are properly handled by gNFS in its
current form. Why make them unhappy, or push them to other projects?

At the end of the day, as developers it is our duty to suggest the best
technologies to users, but the intention should always be to solve
problems. If problems are already solved, why resurface them in the name of
better technology?

So, again, my proposal is to keep gNFS in the codebase (not as Orphan), and
to continue shipping the gNFS binary when we make releases; it is not to
make gNFS the focus of the project or to start working on enhancements to
it.

Happy to answer if anyone has further queries.

I have sent a patch, https://review.gluster.org/23738, for the same, and I
see people already commenting on it. I agree that Xie's contributions to
Gluster may need to increase (specifically in the gNFS component) to be
listed as a MAINTAINER

[Gluster-Maintainers] Proposal to change gNFS status

2019-11-21 Thread Amar Tumballi
Hi All,

As per the discussion on https://review.gluster.org/23645, recently we
changed the status of gNFS (gluster's native NFSv3 support) feature to
'Depricated / Orphan' state. (ref:
https://github.com/gluster/glusterfs/blob/master/MAINTAINERS#L185..L189).
With this email, I am proposing to change the status again to 'Odd Fixes'
(ref: https://github.com/gluster/glusterfs/blob/master/MAINTAINERS#L22)

TL;DR;

I understand the current maintainers are not able to focus on maintaining
it, as the focus of the project, as described earlier, is the
NFS-Ganesha-based integration with glusterfs. But I am volunteering, along
with Xie Changlong (currently working at Chinamobile), to keep the feature
running as it used to in previous versions. Hence the status of 'Odd
Fixes'.

Before sending the patch to make these changes, I am proposing it here now,
as gNFS is not even shipped with the latest glusterfs-7.0 releases. I have
heard from some users that it was working great for them with earlier
releases, as all they wanted was NFS v3 support, and not many more of
gNFS's features. Also note that, even though the packages are not built,
none of the regression tests using gNFS have been stopped on latest master,
so it has been working the same for at least the last 2 years.

Through this email, I request the package maintainers to please add
'--with gnfs' (or --enable-gnfs) back to their release scripts, so that
those users wanting to use gNFS can happily continue to use it. A point for
users/admins is that the status is 'Odd Fixes', so don't expect any
'enhancements' to the features provided by gNFS.
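For reference, a quick sketch of the switches involved (the configure flag
and the rpmbuild conditional are the ones mentioned above; <version> and
<volname> are placeholders, and whether a given distribution's spec file
still carries the gnfs conditional is an assumption):

# Source builds: compile the legacy gNFS server xlator.
./configure --enable-gnfs && make && make install

# RPM builds (assuming the spec file keeps its gnfs conditional):
rpmbuild -ta glusterfs-<version>.tar.gz --with gnfs

# At runtime, gNFS is enabled per volume with:
gluster volume set <volname> nfs.disable false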

Happy to hear feedback, if any.

Regards,
Amar
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] Time to review the MAINTAINERS file

2019-10-26 Thread Amar Tumballi
Hello,

It has been 28 months since we last committed major changes to the
MAINTAINERS file. A lot of water has flowed in all the rivers since then.
New people have joined, and some people have left to find more interesting
projects than the Gluster project.

In my opinion, we should surely have a practice of reviewing our maintainer
list for the successful progress of the project. The list should reflect
the recently active contributors, and ideally (if you compare with any
other active project) we should be reviewing it *every year*. But 2 years
is not bad, considering the time we took to change things for v2.0 (from
v1.0).

I am attaching the proposed patch (it can be broken into different patches
per component if one wants). I am planning to send this for review next
week, adding every one of the maintainers (everyone whose name shifts
places). But I thought letting the maintainers know about this through
email first was a good idea.

Feel free to agree, disagree or ignore. Please make sure you have reasons
why some changes are not valid. Please open the patch, where I have tried
to capture the details too.

It would be great if we close on this soon. I say let's time-box it to 15
days for everyone to raise objections, if any, after which it would get
merged (15 days from the date of submission of the patch).

Regards,
Amar
From 7e6b35db9a0da72dd393610eeba13dfd70571ec3 Mon Sep 17 00:00:00 2001
From: Amar Tumballi 
Date: Fri, 25 Oct 2019 22:57:04 +0530
Subject: [PATCH] MAINTAINERS: revised the maintainers list after 2 years

The last major change to the file happened 2 years 5 months back.
For the better health of the project, and to motivate the active
contributors, and keeping the file up-to-date with who is working
on what, it is critical for us to keep refreshing the file once
every 2 years (at least).

This is one such effort.

Highlighted changes:

* Moving both Jeff and Vijay from Maintainers to 'Special Thanks'
  section.
* Moving Amar and Xavi as Maintainers, and Atin as Peer, mainly
  looking at the activities (patches, reviews, merges) across
  the codebase, and also the contributions in discussions.
* Moving Shyam and Niels out of Peer list highlighting the 6+
  months of changed priorities.

* Changed Xavi's contact from Datalab's to Red Hat's
* Changed Amar's contact from Red Hat's to his Personal.

* Removed:
  - Block Device (BD)
  - Experimental (RIO / JBR)
  - Gluster Object
  - Gluster Hadoop Plugin
  - Nagios Monitoring

  - Marked 'glusterd2' as Deprecated.
(and renamed glusterd1 to glusterd)

* Moved few people, who stopped major contribution, mainly
  because of changing companies, changing projects inside
  their own company etc, to 'Special Thanks' section.

* Additions:
  - Sunny Kumar added as Peer in Geo-Replication
  - Kotresh added as peer in Posix
  - Nithya Added as peer in readdir-ahead
  - Raghavendra Gowdappa added as peer in FUSE Bridge
  - Hari Gowtham as Peer in Quota
  - Yaniv Kaul for xxhash

Updates: bz#1193929
Change-Id: I0d6eccfee4306e26cdbc2b94f43ac493e2c25a61
Signed-off-by: Amar Tumballi 
---
 MAINTAINERS | 111 +---
 1 file changed, 35 insertions(+), 76 deletions(-)

diff --git a/MAINTAINERS b/MAINTAINERS
index b50c998c3..5366c4d3c 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -51,15 +51,11 @@ Descriptions of section entries:
 
 General Project Architects
 --
-M: Jeff Darcy 
-M: Vijay Bellur 
-P: Amar Tumballi 
+M: Amar Tumballi 
+M: Xavier Hernandez  
 P: Pranith Karampuri 
 P: Raghavendra Gowdappa 
-P: Shyamsundar Ranganathan 
-P: Niels de Vos 
-P: Xavier Hernandez  
-
+P: Atin Mukherjee 
 
 xlators:
 
@@ -87,10 +83,6 @@ P: Atin Mukherjee 
 S: Maintained
 F: xlators/features/barrier
 
-Block Device
-S: Orphan
-F: xlators/storage/bd/
-
 BitRot
 M: Kotresh HR 
 P: Raghavendra Bhat 
@@ -112,9 +104,8 @@ F: xlators/cluster/dht/
 
 Erasure Coding
 M: Pranith Karampuri 
-M: Xavier Hernandez  
+M: Xavier Hernandez  
 P: Ashish Pandey 
-P: Sunil Kumar Acharya 
 S: Maintained
 F: xlators/cluster/ec/
 
@@ -124,8 +115,9 @@ S: Maintained
 F: xlators/debug/error-gen/
 
 FUSE Bridge
-M: Niels de Vos 
-P: Csaba Henk 
+M: Csaba Henk 
+P: Raghavendra Gowdappa 
+P: Niels de Vos 
 S: Maintained
 F: xlators/mount/
 
@@ -163,13 +155,13 @@ F: xlators/features/leases/
 
 Locks
 M: Krutika Dhananjay 
+P: Xavier Hernandez  
 S: Maintained
 F: xlators/features/locks/
 
 Marker
 M: Raghavendra Gowdappa 
 M: Kotresh HR 
-P: Sanoj Unnikrishnan 
 S: Maintained
 F: xlators/features/marker/
 
@@ -191,36 +183,32 @@ S: Maintained
 F: xlators/performance/nl-cache/
 
 NFS
-M: Shreyas Siravara 
-M: Jeff Darcy 
-P: Jiffin Tony Thottan 
-P: Soumya Koduri 
+M: Jiffin Tony Thottan 
+M: Soumya Koduri 
 S: Maintained
 F: xlators/nfs/server/
 
 Open-behind
 M: Raghavendra Gowdappa 
-P: Milind Changire 
 S: Maintained
 F: xlators/performance/open-behind/
 
 Posix:
 M: Raghavendra Bhat 
+P: Kotresh HR 
 P: Krutika Dhananjay 
-P: Jiffin Tony

Re: [Gluster-Maintainers] [Gluster-devel] Proposal: move glusterfs development to github workflow, completely

2019-10-22 Thread Amar Tumballi
Thanks for the email, Misc. My reasons are inline.

On Mon, Oct 21, 2019 at 4:44 PM Michael Scherer  wrote:

> Le lundi 14 octobre 2019 à 20:30 +0530, Amar Tumballi a écrit :
> > On Mon, 14 Oct, 2019, 5:37 PM Niels de Vos, 
> > wrote:
> >
> > > On Mon, Oct 14, 2019 at 03:52:30PM +0530, Amar Tumballi wrote:
> > > > Any thoughts on this?
> > > >
> > > > I tried a basic .travis.yml for the unified glusterfs repo I am
> > > > maintaining, and it is good enough for getting most of the tests.
> > > > Considering we are very close to glusterfs-7.0 release, it is
> > > > good to
> > >
> > > time
> > > > this after 7.0 release.
> > >
> > > Is there a reason to move to Travis? GitHub does offer integration
> > > with
> > > Jenkins, so we should be able to keep using our existing CI, I
> > > think?
> > >
> >
> > Yes, that's true. I tried Travis because I don't have a complete idea of
> > the Jenkins infra, and trying Travis needed just basic permissions from
> > me on the repo (it was tried on my personal repo)
>
> Travis is limited to 1 builder per project with the free version..
> So since the regression test last 4h, I am not sure exactly what is the
> plan there.
>
>
We can't regress from our current testing coverage when we migrate. So my
take is that we should surely start by using the existing Jenkins itself
from GitHub, and eventually see if there are any better options, or else at
least remain with this CI.


> Now, on the whole migration stuff, I do have a few questions:
>
> - what will happen to the history of the project (aka, the old
> review.gluster.org server). I would be in favor of dropping it if we
> move out, but then, we would lose all informations there (the review
> content itself).
>
>
I would like to see it hosted somewhere (i.e., at the same URL, preferably).

But depending on sponsorship for the hosting charges, if we had to decide
to shut the service down, my take is that we can make the DB content
available for public download. I am happy to provide a 'how to view
patches' guide so one can set up Gerrit locally and see the details.

- what happen to existing proposed patches, do they need to be migrated
> one by one (and if so, who is going to script that part)
>
>
I checked that we have < 50 patches active on the master branch, and other
than Yaniv, no one has more than 5 patches active in the review queue. So I
propose that people take up their own patches and post them to GitHub. For
those who are not willing to do that extra work, or are not active in the
project now, I am happy to help migrate their patches to PRs.
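For anyone doing this by hand, one possible way to carry a change over
looks roughly like this (a sketch; NNNNN/P stand for the Gerrit change
number and patchset, and <your-user> for your GitHub fork):

# Fetch the open change from Gerrit (NN = last two digits of the change number).
git fetch https://review.gluster.org/glusterfs refs/changes/NN/NNNNN/P
git checkout -b my-change FETCH_HEAD

# Rebase onto the new default branch and push to your fork, then open a
# pull request against gluster/glusterfs.
git fetch https://github.com/gluster/glusterfs.git devel
git rebase FETCH_HEAD
git push git@github.com:<your-user>/glusterfs.git my-change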



> - can we, while we are on it, force 2FA for the whole org on github ?
> before, I didn't push too hard because this wasn't critical, but if
> there is a migration, that would be much more important.
>
>
Yes. I believe that is totally fine, specifically for those who are admins
of the org, and those who can merge.


> - what is the plan to force to enforce the various policies ?
> (like the fact that commit need to be sign, in a DCO like fashion, or
> to decide who can merge, who can give +2, and how we trigger build only
> when someone has said "this is verified")
>
>
About people, two options IMO:
1. Provide access to the same set of people who have access in Gerrit, or
2. Look at the activity list for the last 1 year, and give access only to
those from the above list who have actually reviewed AND merged patches.

About policies on how to trigger builds and merges: I prefer to use tools
like mergify.io, which is used by many open source projects; our friends at
the Ceph project use the same. That way there would be no human pressing
merge; patches would be merged based on policy.
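As a rough illustration of what such a policy could look like (a sketch
only; the schema is Mergify's pull_request_rules format, but the rule name,
review count and CI check names below are placeholders, not an agreed
policy):

# .mergify.yml (sketch)
pull_request_rules:
  - name: merge after reviews and CI pass
    conditions:
      - base=devel
      - "#approved-reviews-by>=2"
      - status-success=smoke
      - status-success=regression
    actions:
      merge:
        method: rebase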

About which strings/commands to use for triggering builds (/run smoke,
/run regression, etc.), I am happy to work with someone to get this done.


> - can we define also some goals on why to migrate ?
>

Sure, will list below.


> the thread do not really explain why, except "that's what everybody is
> doing". Based on previous migrations for different contexts, that's
> usually not sufficient, and we get the exact same amount of
> contribution no matter what (like static blog vs wordpress vs static
> blog), except that someone (usually me) has to do lots of work.
>
>
I agree, and sorry about causing a lot of work for you :-/ None of this is
intentional. We all thrive and look for better ways as tools (and we)
evolve. It is good to recheck whether we are using the right tools and
processes at least every 2 years.


> So could someone give some estimate that can be measured on what is
> going to be improved, along a timeframe for the estimated improvement ?
> (so like in 6 months, 

Re: [Gluster-Maintainers] [Gluster-devel] Proposal: move glusterfs development to github workflow, completely

2019-10-14 Thread Amar Tumballi
On Mon, 14 Oct, 2019, 5:37 PM Niels de Vos,  wrote:

> On Mon, Oct 14, 2019 at 03:52:30PM +0530, Amar Tumballi wrote:
> > Any thoughts on this?
> >
> > I tried a basic .travis.yml for the unified glusterfs repo I am
> > maintaining, and it is good enough for getting most of the tests.
> > Considering we are very close to glusterfs-7.0 release, it is good to
> time
> > this after 7.0 release.
>
> Is there a reason to move to Travis? GitHub does offer integration with
> Jenkins, so we should be able to keep using our existing CI, I think?
>

Yes, that's true. I tried Travis because I don't have a complete idea of
the Jenkins infra, and trying Travis needed just basic permissions from me
on the repo (it was tried on my personal repo)

Happy to get some help here.

Regards,
Amar


> Niels
>
>
> >
> > -Amar
> >
> > On Thu, Sep 5, 2019 at 5:13 PM Amar Tumballi  wrote:
> >
> > > Going through the thread, I see in general positive responses for the
> > > same, with a few points on the review system, and on not losing
> > > information when merging the patches.
> > >
> > > While we are working on that, we need to see and understand how our
> > > CI/CD looks with the GitHub migration. We surely need suggestions and
> > > volunteers here to get this going.
> > >
> > > Regards,
> > > Amar
> > >
> > >
> > > On Wed, Aug 28, 2019 at 12:38 PM Niels de Vos 
> wrote:
> > >
> > >> On Tue, Aug 27, 2019 at 06:57:14AM +0530, Amar Tumballi Suryanarayan
> > >> wrote:
> > >> > On Tue, Aug 27, 2019 at 12:10 AM Niels de Vos 
> > >> wrote:
> > >> >
> > >> > > On Mon, Aug 26, 2019 at 08:36:30PM +0530, Aravinda Vishwanathapura
> > >> Krishna
> > >> > > Murthy wrote:
> > >> > > > On Mon, Aug 26, 2019 at 7:49 PM Joe Julian <
> j...@julianfamily.org>
> > >> wrote:
> > >> > > >
> > >> > > > > > Comparing the changes between revisions is something
> > >> > > > > that GitHub does not support...
> > >> > > > >
> > >> > > > > It does support that,
> > >> > > > > actually.___
> > >> > > > >
> > >> > > >
> > >> > > > Yes, it does support. We need to use Squash merge after all
> review
> > >> is
> > >> > > done.
> > >> > >
> > >> > > Squash merge would also combine multiple commits that are
> intended to
> > >> > > stay separate. This is really bad :-(
> > >> > >
> > >> > >
> > >> > We should treat 1 patch in gerrit as 1 PR in github, then squash
> merge
> > >> > works same as how reviews in gerrit are done.  Or we can come up
> with
> > >> > label, upon which we can actually do 'rebase and merge' option,
> which
> > >> can
> > >> > preserve the commits as is.
> > >>
> > >> Something like that would be good. For many things, including commit
> > >> message update squashing patches is just loosing details. We dont do
> > >> that with Gerrit now, and we should not do that when using GitHub PRs.
> > >> Proper documenting changes is still very important to me, the details
> of
> > >> patches should be explained in commit messages. This only works well
> > >> when developers 'force push' to the branch holding the PR.
> > >>
> > >> Niels
> > >> ___
> > >>
> > >> Community Meeting Calendar:
> > >>
> > >> APAC Schedule -
> > >> Every 2nd and 4th Tuesday at 11:30 AM IST
> > >> Bridge: https://bluejeans.com/836554017
> > >>
> > >> NA/EMEA Schedule -
> > >> Every 1st and 3rd Tuesday at 01:00 PM EDT
> > >> Bridge: https://bluejeans.com/486278655
> > >>
> > >> Gluster-devel mailing list
> > >> gluster-de...@gluster.org
> > >> https://lists.gluster.org/mailman/listinfo/gluster-devel
> > >>
> > >>
>
> > ___
> >
> > Community Meeting Calendar:
> >
> > APAC Schedule -
> > Every 2nd and 4th Tuesday at 11:30 AM IST
> > Bridge: https://bluejeans.com/118564314
> >
> > NA/EMEA Schedule -
> > Every 1st and 3rd Tuesday at 01:00 PM EDT
> > Bridge: https://bluejeans.com/118564314
> >
> > Gluster-devel mailing list
> > gluster-de...@gluster.org
> > https://lists.gluster.org/mailman/listinfo/gluster-devel
> >
>
>
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-devel] Proposal: move glusterfs development to github workflow, completely

2019-10-14 Thread Amar Tumballi
Any thoughts on this?

I tried a basic .travis.yml for the unified glusterfs repo I am
maintaining, and it is good enough for running most of the tests.
Considering we are very close to the glusterfs-7.0 release, it is good to
time this after the 7.0 release.

-Amar

On Thu, Sep 5, 2019 at 5:13 PM Amar Tumballi  wrote:

> Going through the thread, I see in general positive responses for the
> same, with few points on review system, and not loosing information when
> merging the patches.
>
> While we are working on that, we need to see and understand how our CI/CD
> looks like with github migration. We surely need suggestion and volunteers
> here to get this going.
>
> Regards,
> Amar
>
>
> On Wed, Aug 28, 2019 at 12:38 PM Niels de Vos  wrote:
>
>> On Tue, Aug 27, 2019 at 06:57:14AM +0530, Amar Tumballi Suryanarayan
>> wrote:
>> > On Tue, Aug 27, 2019 at 12:10 AM Niels de Vos 
>> wrote:
>> >
>> > > On Mon, Aug 26, 2019 at 08:36:30PM +0530, Aravinda Vishwanathapura
>> Krishna
>> > > Murthy wrote:
>> > > > On Mon, Aug 26, 2019 at 7:49 PM Joe Julian 
>> wrote:
>> > > >
>> > > > > > Comparing the changes between revisions is something
>> > > > > that GitHub does not support...
>> > > > >
>> > > > > It does support that,
>> > > > > actually.___
>> > > > >
>> > > >
>> > > > Yes, it does support. We need to use Squash merge after all review
>> is
>> > > done.
>> > >
>> > > Squash merge would also combine multiple commits that are intended to
>> > > stay separate. This is really bad :-(
>> > >
>> > >
>> > We should treat 1 patch in gerrit as 1 PR in github, then squash merge
>> > works same as how reviews in gerrit are done.  Or we can come up with
>> > label, upon which we can actually do 'rebase and merge' option, which
>> can
>> > preserve the commits as is.
>>
>> Something like that would be good. For many things, including commit
>> message update squashing patches is just loosing details. We dont do
>> that with Gerrit now, and we should not do that when using GitHub PRs.
>> Proper documenting changes is still very important to me, the details of
>> patches should be explained in commit messages. This only works well
>> when developers 'force push' to the branch holding the PR.
>>
>> Niels
>> ___
>>
>> Community Meeting Calendar:
>>
>> APAC Schedule -
>> Every 2nd and 4th Tuesday at 11:30 AM IST
>> Bridge: https://bluejeans.com/836554017
>>
>> NA/EMEA Schedule -
>> Every 1st and 3rd Tuesday at 01:00 PM EDT
>> Bridge: https://bluejeans.com/486278655
>>
>> Gluster-devel mailing list
>> gluster-de...@gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-devel
>>
>>
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-devel] Proposal: move glusterfs development to github workflow, completely

2019-09-05 Thread Amar Tumballi
Going through the thread, I see generally positive responses for the same,
with a few points on the review system and on not losing information when
merging the patches.

While we are working on that, we need to see and understand what our CI/CD
looks like with the github migration. We surely need suggestions and
volunteers here to get this going.

Regards,
Amar


On Wed, Aug 28, 2019 at 12:38 PM Niels de Vos  wrote:

> On Tue, Aug 27, 2019 at 06:57:14AM +0530, Amar Tumballi Suryanarayan wrote:
> > On Tue, Aug 27, 2019 at 12:10 AM Niels de Vos  wrote:
> >
> > > On Mon, Aug 26, 2019 at 08:36:30PM +0530, Aravinda Vishwanathapura
> Krishna
> > > Murthy wrote:
> > > > On Mon, Aug 26, 2019 at 7:49 PM Joe Julian 
> wrote:
> > > >
> > > > > > Comparing the changes between revisions is something
> > > > > that GitHub does not support...
> > > > >
> > > > > It does support that,
> > > > > actually.___
> > > > >
> > > >
> > > > Yes, it does support. We need to use Squash merge after all review is
> > > done.
> > >
> > > Squash merge would also combine multiple commits that are intended to
> > > stay separate. This is really bad :-(
> > >
> > >
> > We should treat 1 patch in gerrit as 1 PR in github, then squash merge
> > works same as how reviews in gerrit are done.  Or we can come up with
> > label, upon which we can actually do 'rebase and merge' option, which can
> > preserve the commits as is.
>
> Something like that would be good. For many things, including commit
> message update squashing patches is just loosing details. We dont do
> that with Gerrit now, and we should not do that when using GitHub PRs.
> Proper documenting changes is still very important to me, the details of
> patches should be explained in commit messages. This only works well
> when developers 'force push' to the branch holding the PR.
>
> Niels
> ___
>
> Community Meeting Calendar:
>
> APAC Schedule -
> Every 2nd and 4th Tuesday at 11:30 AM IST
> Bridge: https://bluejeans.com/836554017
>
> NA/EMEA Schedule -
> Every 1st and 3rd Tuesday at 01:00 PM EDT
> Bridge: https://bluejeans.com/486278655
>
> Gluster-devel mailing list
> gluster-de...@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
>
>
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-devel] Proposal: move glusterfs development to github workflow, completely

2019-08-26 Thread Amar Tumballi Suryanarayan
On Tue, Aug 27, 2019 at 12:10 AM Niels de Vos  wrote:

> On Mon, Aug 26, 2019 at 08:36:30PM +0530, Aravinda Vishwanathapura Krishna
> Murthy wrote:
> > On Mon, Aug 26, 2019 at 7:49 PM Joe Julian  wrote:
> >
> > > > Comparing the changes between revisions is something
> > > that GitHub does not support...
> > >
> > > It does support that,
> > > actually.___
> > >
> >
> > Yes, it does support. We need to use Squash merge after all review is
> done.
>
> Squash merge would also combine multiple commits that are intended to
> stay separate. This is really bad :-(
>
>
We should treat 1 patch in gerrit as 1 PR in github; then squash merge works
the same as how reviews are done in gerrit. Or we can come up with a label,
upon which we actually use the 'rebase and merge' option, which can
preserve the commits as is.
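As a sketch of that label idea (the label name and token handling here are
made up for illustration; GitHub's merge API does accept a merge_method of
'squash' or 'rebase'):

import os
import requests

API = "https://api.github.com/repos/gluster/glusterfs"
HEADERS = {"Authorization": f"token {os.environ['GITHUB_TOKEN']}"}

def merge_pr(pr_number: int) -> None:
    # Merge with "rebase" (keeps the individual commits) when the label is
    # present, otherwise squash-merge as for a single Gerrit-style patch.
    pr = requests.get(f"{API}/pulls/{pr_number}", headers=HEADERS).json()
    labels = {l["name"] for l in pr["labels"]}
    method = "rebase" if "preserve-commits" in labels else "squash"
    requests.put(f"{API}/pulls/{pr_number}/merge",
                 headers=HEADERS,
                 json={"merge_method": method})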

-Amar


> Niels
> ___
> maintainers mailing list
> maintainers@gluster.org
> https://lists.gluster.org/mailman/listinfo/maintainers
>


-- 
Amar Tumballi (amarts)
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] Proposal: move glusterfs development to github workflow, completely

2019-08-23 Thread Amar Tumballi
Hi developers,

With this email, I want to understand what the general feeling is around
this topic.

We in the gluster org (github.com/gluster) have many projects which follow
the complete github workflow, whereas a few, especially the main one,
'glusterfs', use 'Gerrit'.

While this has worked all these years, there is currently a huge amount of
mind-share around the github workflow, as many other top projects and
similar projects use only github as the place to develop, track and run
tests etc. As it is possible to have all of the tools required for this
project in github itself (code, PRs, issues, CI/CD, docs), let's look at how
we are structured today:

Gerrit - glusterfs code + Review system
Bugzilla - For bugs
Github - For feature requests
Trello - (not very much used) for tracking project development.
CI/CD - CentOS-ci / Jenkins, etc but maintained from different repo.
Docs - glusterdocs - different repo.
Metrics - Nothing (other than github itself tracking contributors).

While it may cause a minor glitch for many long-time developers who are
used to the current flow, moving to github would bring all of these into a
single place, make onboarding new users easy, and give uniform development
practices across all gluster org repositories.

As this is just a proposal, I would like to hear people's thoughts on it,
and conclude within another month, so that by glusterfs-8 development time
we are clear about this.

Can we decide on this before September 30th? Please voice your concerns.

Regards,
Amar
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] [Announcement] Gluster Community Update

2019-07-09 Thread Amar Tumballi Suryanarayan
Hello Gluster community,



Today marks a new day in the 26-year history of Red Hat. IBM has finalized
its acquisition of Red Hat
<https://www.redhat.com/en/about/press-releases/ibm-closes-landmark-acquisition-red-hat-34-billion-defines-open-hybrid-cloud-future>,
which will operate as a distinct unit within IBM moving forward.



What does this mean for Red Hat’s contributions to the Gluster project?



In short, nothing.



Red Hat always has and will continue to be a champion for open source and
projects like Gluster. IBM is committed to Red Hat’s independence and role
in open source software communities so that we can continue this work
without interruption or changes.



Our mission, governance, and objectives remain the same. We will continue
to execute the existing project roadmap. Red Hat associates will continue
to contribute to the upstream in the same ways they have been. And, as
always, we will continue to help upstream projects be successful and
contribute to welcoming new members and maintaining the project.



We will do this together, with the community, as we always have.



If you have questions or would like to learn more about today’s news, I
encourage you to review the list of materials below. Red Hat CTO Chris
Wright will host an online Q&A session in the coming days where you can ask
questions you may have about what the acquisition means for Red Hat and our
involvement in open source communities. Details will be announced on the Red
Hat blog <https://www.redhat.com/en/blog>.



   -

   Press release
   
<https://www.redhat.com/en/about/press-releases/ibm-closes-landmark-acquisition-red-hat-34-billion-defines-open-hybrid-cloud-future>
   -

   Chris Wright blog - Red Hat and IBM: Accelerating the adoption of open
   source
   
<https://www.redhat.com/en/blog/red-hat-and-ibm-accelerating-adoption-open-source>
   -

   FAQ on Red Hat Community Blog
   <https://community.redhat.com/blog/2019/07/faq-for-communities/>



Amar Tumballi,

Maintainer, Lead,

Gluster Community.
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] GlusterFS 8+: Roadmap ahead - Open for discussion

2019-07-08 Thread Amar Tumballi
Hello everyone,

This email is long, and I request each one of you to participate and give
comments. We want your collaboration in this big step.

TL;DR;

We are at an interesting time in the Gluster project’s development roadmap.
In the last year, we took some hard decisions to not focus on features and
to focus all our energies on stabilizing the project, and as a result of
that, we did really well in many regards. With most of the stabilization
work getting into the glusterfs-7 branch, we feel the time is good for
discussing the future.

Now it is time for us to start addressing the most common concern about the
project: performance and related improvements. While many of our users and
customers have faced problems with not-so-great performance, please note
that there is no single silver bullet which will solve all performance
problems in one step, especially with a distributed storage solution like
GlusterFS.

Over the years, we have noticed that there are a lot of factors which
contribute to performance issues in Gluster, and it is not ‘easy’ to tell
which one of the ‘known’ issues caused a particular problem. Sometimes, even
to debug where the bottleneck is, we face the challenge of a lack of
instrumentation in many parts of the codebase. Hence, one of the major
activities we want to pick up as the immediate roadmap is work in this area.

Instead of discussing on the email thread and soon losing context, I would
prefer that, this time, we take our discussion to hackmd with comments. I
would like each of you to participate and let us know what your priorities
are, what you need, how you can help, etc.

Link to the hackmd document: https://hackmd.io/JtfYZr49QeGaNIlTvQNsaA After
the meeting, I will share the updates as a blog post, and once it is final,
will update the ML with an email.

Along with this, from the Gluster project, in the last couple of years, we
have noticed increased interest in 2 major use cases.

The first is using Gluster in container use cases, and the second is using
it as storage for VMs, especially with the oVirt project, and also as
hyperconverged storage in some cases.

We see that more stability and performance improvements should help our use
cases with VMs. For container storage, Gluster’s official solution involved
the ‘Heketi’ project as the frontend to handle k8s APIs and provide storage
from Gluster. We did try to come up with a new-age management solution with
GD2, but haven’t got enough contributions on it to take it to completion.
There were a couple of different approaches attempted too, gluster-subvol
and piragua, but neither of them has seen major contributions. From the
activity on github and other places, we see that there is still a major need
for a proper solution.

We are happy to discuss this too. Please suggest your ideas.





Another topic, while we are at the roadmap, is the discussion of github vs
gerrit. There are some opinions in the group saying that we are not getting
many new developers because our project is hosted on gerrit, while most of
the developer community is on github. We surely want your opinion on this.

Let's use this doc:
https://docs.google.com/document/d/16a-EyPRySPlJR3ioRgZRNohq7lM-2EmavulfDxlid_M/edit?usp=sharing
for discussing this.



This email is to kick start a discussion focused on our roadmap, discuss
the priorities, look into what we can quickly do, and what we can achieve
long term. We can have discussions about this in our community meeting, so
we can cover most of the time-zones. If we need more time to finalize on
things, then we can schedule a few more slots based on people’s preference.
Maintainers, please send your preferences for the components you maintain
as part of this discussion too.

Again, we are planning to use collaborative tool hackmd (
https://hackmd.io/JtfYZr49QeGaNIlTvQNsaA) to capture the notes, and will
publish it in a blog form once the meetings conclude. The actionable tasks
will move to github issues from there.

Looking for your active participation.

Regards,

Amar  (@tumballi)
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Release 7: Gentle Reminder, Regression health for release-6.next and release-7

2019-06-18 Thread Amar Tumballi Suryanarayan
On Tue, Jun 18, 2019 at 12:07 PM Rinku Kothiya  wrote:

> Hi Team,
>
> We need to branch for release-7, but nightly builds failures are blocking
> this activity. Please find test failures and respective test links below :
>
> The top tests that are failing are as below and need attention,
>
>
> ./tests/bugs/snapshot/bug-1482023-snpashot-issue-with-other-processes-accessing-mounted-path.t
> ./tests/bugs/gfapi/bug-1319374-THIS-crash.t
>

Still an issue with many tests.


> ./tests/basic/distribute/non-root-unlink-stale-linkto.t
>

Looks like this got fixed after https://review.gluster.org/22847


> ./tests/bugs/posix/bug-1040275-brick-uid-reset-on-volume-restart.t
> ./tests/features/subdir-mount.t
>

Got fixed with https://review.gluster.org/22877


> ./tests/basic/ec/self-heal.t
> ./tests/basic/afr/tarissue.t
>

I see random failures on this; not yet sure if this is a setup issue or an
actual regression.


> ./tests/basic/all_squash.t
> ./tests/basic/ec/nfs.t
> ./tests/00-geo-rep/00-georep-verify-setup.t
>

Most of the time, it fails if the 'setup' is not complete enough to run geo-rep.


> ./tests/basic/quota-rename.t
> ./tests/basic/volume-snapshot-clone.t
>
> Nightly build for this month :
> https://build.gluster.org/job/nightly-master/
>
> Gluster test failure tracker :
> https://fstat.gluster.org/summary?start_date=2019-06-15_date=2019-06-18
>
> Please file a bug if needed against the test case and report the same
> here, in case a problem is already addressed, then do send back the
> patch details that addresses this issue as a response to this mail.
>
>
Thanks!


> Regards
> Rinku
>
>
> On Fri, Jun 14, 2019 at 9:08 PM Rinku Kothiya  wrote:
>
>> Hi Team,
>>
>> As part of branching preparation next week for release-7, please find
>> test failures and respective test links here.
>>
>> The top tests that are failing are as below and need attention,
>>
>> ./tests/bugs/gfapi/bug-1319374-THIS-crash.t
>> ./tests/basic/uss.t
>> ./tests/basic/volfile-sanity.t
>> ./tests/basic/quick-read-with-upcall.t
>> ./tests/basic/afr/tarissue.t
>> ./tests/features/subdir-mount.t
>> ./tests/basic/ec/self-heal.t
>>
>> ./tests/bugs/snapshot/bug-1482023-snpashot-issue-with-other-processes-accessing-mounted-path.t
>> ./tests/bugs/glusterd/optimized-basic-testcases-in-cluster.t
>> ./tests/basic/afr/split-brain-favorite-child-policy.t
>> ./tests/basic/distribute/non-root-unlink-stale-linkto.t
>> ./tests/bugs/protocol/bug-1433815-auth-allow.t
>> ./tests/basic/afr/arbiter-mount.t
>> ./tests/basic/all_squash.t
>>
>> ./tests/bugs/glusterd/mgmt-handshake-and-volume-sync-post-glusterd-restart.t
>> ./tests/basic/volume-snapshot-clone.t
>> ./tests/bugs/glusterd/serialize-shd-manager-glusterd-restart.t
>> ./tests/basic/gfapi/upcall-register-api.t
>>
>>
>> Nightly build for this month :
>> https://build.gluster.org/job/nightly-master/
>>
>> Gluster test failure tracker :
>>
>> https://fstat.gluster.org/summary?start_date=2019-05-15_date=2019-06-14
>>
>> Please file a bug if needed against the test case and report the same
>> here, in case a problem is already addressed, then do send back the
>> patch details that addresses this issue as a response to this mail.
>>
>> Regards
>> Rinku
>>
> ___
> maintainers mailing list
> maintainers@gluster.org
> https://lists.gluster.org/mailman/listinfo/maintainers
>


-- 
Amar Tumballi (amarts)
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Fwd: Build failed in Jenkins: regression-test-with-multiplex #1359

2019-06-06 Thread Amar Tumballi Suryanarayan
I got time to test subdir-mount.t failing in the brick-mux scenario.

I noticed some issues where I need further help from the glusterd team.

subdir-mount.t expects the 'hook' script to run after add-brick to make sure
the required subdirectories are healed and present on the new bricks. This
is important, as the subdir mount expects the subdirs to exist for a
successful mount.

But in the case of a brick-mux setup, I see that in some cases (6/10), the
hook script (add-brick/post-hook/S13-create-subdir-mount.sh) started getting
executed 20 seconds after the add-brick command finished. Due to this, the
mount which we execute after add-brick failed.

My question is: what is making the post hook script run so late?

I can recreate the issue locally on my laptop too.
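Until we understand the delay, the only mitigation I can think of is for the
test to wait for the hook's effect instead of mounting immediately after
add-brick, something the .t framework usually expresses with EXPECT_WITHIN.
Roughly this idea (the path and timeout below are only illustrative):

import os
import time

def wait_for_subdir(path: str, timeout: float = 30.0) -> bool:
    # Poll until the add-brick post-hook has created the subdirectory on the
    # new brick, or give up after `timeout` seconds.
    deadline = time.time() + timeout
    while time.time() < deadline:
        if os.path.isdir(path):
            return True
        time.sleep(1)
    return False

# e.g. wait_for_subdir("/d/backends/patchy3/subdir") before the mount step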


On Sat, Jun 1, 2019 at 4:55 PM Atin Mukherjee  wrote:

> subdir-mount.t has started failing in brick mux regression nightly. This
> needs to be fixed.
>
> Raghavendra - did we manage to get any further clue on uss.t failure?
>
> -- Forwarded message -
> From: 
> Date: Fri, 31 May 2019 at 23:34
> Subject: [Gluster-Maintainers] Build failed in Jenkins:
> regression-test-with-multiplex #1359
> To: , , ,
> , 
>
>
> See <
> https://build.gluster.org/job/regression-test-with-multiplex/1359/display/redirect?page=changes
> >
>
> Changes:
>
> [atin] glusterd: add an op-version check
>
> [atin] glusterd/svc: glusterd_svcs_stop should call individual wrapper
> function
>
> [atin] glusterd/svc: Stop stale process using the glusterd_proc_stop
>
> [Amar Tumballi] lcov: more coverage to shard, old-protocol, sdfs
>
> [Kotresh H R] tests/geo-rep: Add EC volume test case
>
> [Amar Tumballi] glusterfsd/cleanup: Protect graph object under a lock
>
> [Mohammed Rafi KC] glusterd/shd: Optimize the glustershd manager to send
> reconfigure
>
> [Kotresh H R] tests/geo-rep: Add tests to cover glusterd geo-rep
>
> [atin] glusterd: Optimize code to copy dictionary in handshake code path
>
> --
> [...truncated 3.18 MB...]
> ./tests/basic/afr/stale-file-lookup.t  -  9 second
> ./tests/basic/afr/granular-esh/replace-brick.t  -  9 second
> ./tests/basic/afr/granular-esh/add-brick.t  -  9 second
> ./tests/basic/afr/gfid-mismatch.t  -  9 second
> ./tests/performance/open-behind.t  -  8 second
> ./tests/features/ssl-authz.t  -  8 second
> ./tests/features/readdir-ahead.t  -  8 second
> ./tests/bugs/upcall/bug-1458127.t  -  8 second
> ./tests/bugs/transport/bug-873367.t  -  8 second
> ./tests/bugs/replicate/bug-1498570-client-iot-graph-check.t  -  8 second
> ./tests/bugs/replicate/bug-1132102.t  -  8 second
> ./tests/bugs/quota/bug-1250582-volume-reset-should-not-remove-quota-quota-deem-statfs.t
> -  8 second
> ./tests/bugs/quota/bug-1104692.t  -  8 second
> ./tests/bugs/posix/bug-1360679.t  -  8 second
> ./tests/bugs/posix/bug-1122028.t  -  8 second
> ./tests/bugs/nfs/bug-1157223-symlink-mounting.t  -  8 second
> ./tests/bugs/glusterfs/bug-861015-log.t  -  8 second
> ./tests/bugs/glusterd/sync-post-glusterd-restart.t  -  8 second
> ./tests/bugs/glusterd/bug-1696046.t  -  8 second
> ./tests/bugs/fuse/bug-983477.t  -  8 second
> ./tests/bugs/ec/bug-1227869.t  -  8 second
> ./tests/bugs/distribute/bug-1088231.t  -  8 second
> ./tests/bugs/distribute/bug-1086228.t  -  8 second
> ./tests/bugs/cli/bug-1087487.t  -  8 second
> ./tests/bugs/cli/bug-1022905.t  -  8 second
> ./tests/bugs/bug-1258069.t  -  8 second
> ./tests/bugs/bitrot/1209752-volume-status-should-show-bitrot-scrub-info.t
> -  8 second
> ./tests/basic/xlator-pass-through-sanity.t  -  8 second
> ./tests/basic/quota-nfs.t  -  8 second
> ./tests/basic/glusterd/arbiter-volume.t  -  8 second
> ./tests/basic/ctime/ctime-noatime.t  -  8 second
> ./tests/line-coverage/cli-peer-and-volume-operations.t  -  7 second
> ./tests/gfid2path/get-gfid-to-path.t  -  7 second
> ./tests/bugs/upcall/bug-1369430.t  -  7 second
> ./tests/bugs/snapshot/bug-1260848.t  -  7 second
> ./tests/bugs/shard/shard-inode-refcount-test.t  -  7 second
> ./tests/bugs/shard/bug-1258334.t  -  7 second
> ./tests/bugs/replicate/bug-767585-gfid.t  -  7 second
> ./tests/bugs/replicate/bug-1448804-check-quorum-type-values.t  -  7 second
> ./tests/bugs/replicate/bug-1250170-fsync.t  -  7 second
> ./tests/bugs/posix/bug-1175711.t  -  7 second
> ./tests/bugs/nfs/bug-915280.t  -  7 second
> ./tests/bugs/md-cache/setxattr-prepoststat.t  -  7 second
> ./tests/bugs/md-cache/bug-1211863_unlink.t  -  7 second
> ./tests/bugs/glusterfs/bug-848251.t  -  7 second
> ./tests/bugs/distribute/bug-1122443.t  -  7 second
> ./tests/bugs/changelog/bug-1208470.t  -  7 second
> ./tests/bugs/bug-1702299.t  -  7 second
> ./tests/bugs/bug-1371806_2.t  -  7 second
> ./tests/

Re: [Gluster-Maintainers] Build failed in Jenkins: experimental-periodic #631

2019-05-20 Thread Amar Tumballi Suryanarayan
>> #3  0x7fbb9dab1ead in clone () from /lib64/libc.so.6
>> No symbol table info available.
>>
>> Thread 3 (Thread 0x7fbb4bfff700 (LWP 32734)):
>> #0  0x7fbb9da78e2d in nanosleep () from /lib64/libc.so.6
>> No symbol table info available.
>> #1  0x7fbb9da78cc4 in sleep () from /lib64/libc.so.6
>> No symbol table info available.
>> #2  0x7fbb91546e6e in posix_health_check_thread_proc
>> (data=0x7fbb60001aa0) at <
>> https://build.gluster.org/job/experimental-periodic/ws/xlators/storage/posix/src/posix-helpers.c
>> >:2105
>> this = 0x7fbb60001aa0
>> priv = 0x7fbb60076590
>> interval = 30
>> ret = -1
>> top = 0x0
>> victim = 0x0
>> trav_p = 0x0
>> count = 0
>> victim_found = false
>> ctx = 0x2160010
>> __FUNCTION__ = "posix_health_check_thread_proc"
>> #3  0x7fbb9e1eadd5 in start_thread () from /lib64/libpthread.so.0
>> No symbol table info available.
>> #4  0x7fbb9dab1ead in clone () from /lib64/libc.so.6
>> No symbol table info available.
>>
>> Thread 2 (Thread 0x7fbb58df9700 (LWP 32733)):
>> #0  0x7fbb9da78e2d in nanosleep () from /lib64/libc.so.6
>> No symbol table info available.
>> #1  0x7fbb9da78cc4 in sleep () from /lib64/libc.so.6
>> No symbol table info available.
>> #2  0x7fbb91547821 in posix_disk_space_check_thread_proc
>> (data=0x7fbb60001aa0) at <
>> https://build.gluster.org/job/experimental-periodic/ws/xlators/storage/posix/src/posix-helpers.c
>> >:2288
>> this = 0x7fbb60001aa0
>> priv = 0x7fbb60076590
>> interval = 5
>> ret = -1
>> __FUNCTION__ = "posix_disk_space_check_thread_proc"
>> #3  0x7fbb9e1eadd5 in start_thread () from /lib64/libpthread.so.0
>> No symbol table info available.
>> #4  0x7fbb9dab1ead in clone () from /lib64/libc.so.6
>> No symbol table info available.
>>
>> Thread 1 (Thread 0x7fbb91fa0700 (LWP 32413)):
>> #0  0x7fbb9e1ecc30 in pthread_mutex_lock () from
>> /lib64/libpthread.so.0
>> No symbol table info available.
>> #1  0x0040be2f in ?? ()
>> No symbol table info available.
>> #2  0x7fbb84000f40 in ?? ()
>> No symbol table info available.
>> #3  0x7fbb8c038270 in ?? ()
>> No symbol table info available.
>> #4  0x in ?? ()
>> No symbol table info available.
>> =
>>   Finish backtrace
>>  program name : /build/install/sbin/glusterfsd
>>  corefile : /glfs_epoll001-32403.core
>> =
>>
>> + rm -f /build/install/cores/gdbout.txt
>> + sort /build/install/cores/liblist.txt
>> + uniq
>> + cat /build/install/cores/liblist.txt.tmp
>> + grep -v /build/install
>> + tar -cf
>> /archives/archived_builds/build-install-experimental-periodic-631.tar
>> /build/install/sbin /build/install/bin /build/install/lib
>> /build/install/libexec /build/install/cores
>> tar: Removing leading `/' from member names
>> + tar -rhf
>> /archives/archived_builds/build-install-experimental-periodic-631.tar -T
>> /build/install/cores/liblist.txt
>> tar: Removing leading `/' from member names
>> + bzip2
>> /archives/archived_builds/build-install-experimental-periodic-631.tar
>> + rm -f /build/install/cores/liblist.txt
>> + rm -f /build/install/cores/liblist.txt.tmp
>> + find /archives -size +1G -delete -type f
>> + [[ builder201.int.aws.gluster.org == *\a\w\s* ]]
>> + scp -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i ''
>> /archives/archived_builds/build-install-experimental-periodic-631.tar.bz2
>> _logs-collec...@logs.aws.gluster.org:
>> /var/www/glusterfs-logs/experimental-periodic-631.tgz
>> Warning: Identity file  not accessible: No such file or directory.
>> Warning: Permanently added 'logs.aws.gluster.org,18.219.45.211' (ECDSA)
>> to the list of known hosts.
>> Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
>> lost connection
>> + true
>> + echo 'Cores and builds archived in
>> https://logs.aws.gluster.org/experimental-periodic-631.tgz'
>> Cores and builds archived in
>> https://logs.aws.gluster.org/experimental-periodic-631.tgz
>> + echo 'Open core using the following command to get a proper stack'
>> Open core using the following command to get a proper stack
>> + echo 'Example: From root of extracted tarball'
>> Example: From root of extracted tarball
>> + echo '\t\tgdb -ex '\''set sysroot ./'\'' -ex '\''core-file
>> ./build/install/cores/xxx.core'\'' > ./build/install/sbin/glusterd>'
>> \t\tgdb -ex 'set sysroot ./' -ex 'core-file
>> ./build/install/cores/xxx.core' 
>> + RET=1
>> + '[' 1 -ne 0 ']'
>> + tar -czf <
>> https://build.gluster.org/job/experimental-periodic/ws/glusterfs-logs.tgz>
>> /var/log/glusterfs /var/log/messages /var/log/messages-20190428
>> /var/log/messages-20190505 /var/log/messages-20190512
>> /var/log/messages-20190519
>> tar: Removing leading `/' from member names
>> + case $(uname -s) in
>> ++ uname -s
>> + /sbin/sysctl -w kernel.core_pattern=/%e-%p.core
>> kernel.core_pattern = /%e-%p.core
>> + exit 1
>> Build step 'Execute shell' marked build as failure
>> ___
>> maintainers mailing list
>> maintainers@gluster.org
>> https://lists.gluster.org/mailman/listinfo/maintainers
>>
>

-- 
Amar Tumballi (amarts)
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-devel] Release 6.1: Expected tagging on April 10th

2019-04-16 Thread Amar Tumballi Suryanarayan
>>>> tests/basic/sdfs-sanity.t ..
>>>> 1..7
>>>> ok 1, LINENUM:8
>>>> ok 2, LINENUM:9
>>>> ok 3, LINENUM:11
>>>> ok 4, LINENUM:12
>>>> ok 5, LINENUM:13
>>>> ok 6, LINENUM:16
>>>> mkdir: cannot create directory ‘/mnt/glusterfs/1/coverage’: Invalid
>>>> argument
>>>> stat: cannot stat '/mnt/glusterfs/1/coverage/dir': Invalid argument
>>>> tests/basic/rpc-coverage.sh: line 61: test: ==: unary operator expected
>>>> not ok 7 , LINENUM:20
>>>> FAILED COMMAND: tests/basic/rpc-coverage.sh /mnt/glusterfs/1
>>>> Failed 1/7 subtests
>>>>
>>>> Test Summary Report
>>>> ---
>>>> tests/basic/sdfs-sanity.t (Wstat: 0 Tests: 7 Failed: 1)
>>>>   Failed test:  7
>>>> Files=1, Tests=7, 14 wallclock secs ( 0.02 usr  0.00 sys +  0.58 cusr
>>>> 0.67 csys =  1.27 CPU)
>>>> Result: FAIL
>>>>
>>>>
>>>>>
>>>>> Following patches will not be taken in if CentOS regression does not
>>>>> pass by tomorrow morning Eastern TZ,
>>>>> (Pranith/KingLongMee) - cluster-syncop: avoid duplicate unlock of
>>>>> inodelk/entrylk
>>>>>   https://review.gluster.org/c/glusterfs/+/22385
>>>>> (Aravinda) - geo-rep: IPv6 support
>>>>>   https://review.gluster.org/c/glusterfs/+/22488
>>>>> (Aravinda) - geo-rep: fix integer config validation
>>>>>   https://review.gluster.org/c/glusterfs/+/22489
>>>>>
>>>>> Tracker bug status:
>>>>> (Ravi) - Bug 1693155 - Excessive AFR messages from gluster showing in
>>>>> RHGSWA.
>>>>>   All patches are merged, but none of the patches adds the "Fixes"
>>>>> keyword, assume this is an oversight and that the bug is fixed in this
>>>>> release.
>>>>>
>>>>> (Atin) - Bug 1698131 - multiple glusterfsd processes being launched for
>>>>> the same brick, causing transport endpoint not connected
>>>>>   No work has occurred post logs upload to bug, restart of bircks and
>>>>> possibly glusterd is the existing workaround when the bug is hit.
>>>>> Moving
>>>>> this out of the tracker for 6.1.
>>>>>
>>>>> (Xavi) - Bug 1699917 - I/O error on writes to a disperse volume when
>>>>> replace-brick is executed
>>>>>   Very recent bug (15th April), does not seem to have any critical data
>>>>> corruption or service availability issues, planning on not waiting for
>>>>> the fix in 6.1
>>>>>
>>>>> - Shyam
>>>>> On 4/6/19 4:38 AM, Atin Mukherjee wrote:
>>>>> > Hi Mohit,
>>>>> >
>>>>> > https://review.gluster.org/22495 should get into 6.1 as it’s a
>>>>> > regression. Can you please attach the respective bug to the tracker
>>>>> Ravi
>>>>> > pointed out?
>>>>> >
>>>>> >
>>>>> > On Sat, 6 Apr 2019 at 12:00, Ravishankar N >>>> > <mailto:ravishan...@redhat.com>> wrote:
>>>>> >
>>>>> > Tracker bug is
>>>>> https://bugzilla.redhat.com/show_bug.cgi?id=1692394, in
>>>>> > case anyone wants to add blocker bugs.
>>>>> >
>>>>> >
>>>>> > On 05/04/19 8:03 PM, Shyam Ranganathan wrote:
>>>>> > > Hi,
>>>>> > >
>>>>> > > Expected tagging date for release-6.1 is on April, 10th, 2019.
>>>>> > >
>>>>> > > Please ensure required patches are backported and also are
>>>>> passing
>>>>> > > regressions and are appropriately reviewed for easy merging and
>>>>> > tagging
>>>>> > > on the date.
>>>>> > >
>>>>> > > Thanks,
>>>>> > > Shyam
>>>>> > > ___
>>>>> > > Gluster-devel mailing list
>>>>> > > gluster-de...@gluster.org <mailto:gluster-de...@gluster.org>
>>>>> > > https://lists.gluster.org/mailman/listinfo/gluster-devel
>>>>> > ___
>>>>> > Gluster-devel mailing list
>>>>> > gluster-de...@gluster.org <mailto:gluster-de...@gluster.org>
>>>>> > https://lists.gluster.org/mailman/listinfo/gluster-devel
>>>>> >
>>>>> >
>>>>> > --
>>>>> > - Atin (atinm)
>>>>> >
>>>>> > ___
>>>>> > Gluster-devel mailing list
>>>>> > gluster-de...@gluster.org
>>>>> > https://lists.gluster.org/mailman/listinfo/gluster-devel
>>>>> >
>>>>> ___
>>>>> Gluster-devel mailing list
>>>>> gluster-de...@gluster.org
>>>>> https://lists.gluster.org/mailman/listinfo/gluster-devel
>>>>
>>>>
>>
>> --
>> Pranith
>>
>

-- 
Amar Tumballi (amarts)
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-users] Proposal: Changes in Gluster Community meetings

2019-04-11 Thread Amar Tumballi Suryanarayan
Hi All,

Below is the final details of our community meeting, and I will be sending
invites to mailing list following this email. You can add Gluster Community
Calendar so you can get notifications on the meetings.

We are starting the meetings next week. For the first meeting, we need one
volunteer from the users to discuss their use case: what went well, what went
badly, etc., preferably in the APAC region. For the NA/EMEA region, the
following week.

Draft Content: https://hackmd.io/OqZbh7gfQe6uvVUXUVKJ5g

Gluster Community Meeting
<https://hackmd.io/OqZbh7gfQe6uvVUXUVKJ5g?both#Previous-Meeting-minutes>Previous
Meeting minutes:

   - http://github.com/gluster/community

<https://hackmd.io/OqZbh7gfQe6uvVUXUVKJ5g?both#DateTime-Check-the-community-calendar>Date/Time:
Check the community calendar
<https://calendar.google.com/calendar/b/1?cid=dmViajVibDBrbnNiOWQwY205ZWg5cGJsaTRAZ3JvdXAuY2FsZW5kYXIuZ29vZ2xlLmNvbQ>
<https://hackmd.io/OqZbh7gfQe6uvVUXUVKJ5g?both#Bridge>Bridge

   - APAC friendly hours
  - Bridge: https://bluejeans.com/836554017
   - NA/EMEA
  - Bridge: https://bluejeans.com/486278655

--
<https://hackmd.io/OqZbh7gfQe6uvVUXUVKJ5g?both#Attendance>Attendance

   - Name, Company

<https://hackmd.io/OqZbh7gfQe6uvVUXUVKJ5g?both#Host>Host

   - Who will host next meeting?
  - Host will need to send out the agenda 24hr - 12hrs in advance to
  mailing list, and also make sure to send the meeting minutes.
  - Host will need to reach out to one user at least who can talk about
  their usecase, their experience, and their needs.
  - Host needs to send meeting minutes as PR to
  http://github.com/gluster/community

<https://hackmd.io/OqZbh7gfQe6uvVUXUVKJ5g?both#User-stories>User stories

   - Discuss 1 usecase from a user.
  - How was the architecture derived, what volume type used, options,
  etc?
  - What were the major issues faced ? How to improve them?
  - What worked good?
  - How can we all collaborate well, so it is win-win for the community
  and the user? How can we

<https://hackmd.io/OqZbh7gfQe6uvVUXUVKJ5g?both#Community>Community

   -

   Any release updates?
   -

   Blocker issues across the project?
   -

   Metrics
   - Number of new bugs since previous meeting. How many are not triaged?
  - Number of emails, anything unanswered?

<https://hackmd.io/OqZbh7gfQe6uvVUXUVKJ5g?both#Conferences--Meetups>Conferences
/ Meetups

   - Any conference in next 1 month where gluster-developers are going?
   gluster-users are going? So we can meet and discuss.

<https://hackmd.io/OqZbh7gfQe6uvVUXUVKJ5g?both#Developer-focus>Developer
focus

   -

   Any design specs to discuss?
   -

   Metrics of the week?
   - Coverity
  - Clang-Scan
  - Number of patches from new developers.
  - Did we increase test coverage?
  - [Atin] Also talk about most frequent test failures in the CI and
  carve out an AI to get them fixed.

<https://hackmd.io/OqZbh7gfQe6uvVUXUVKJ5g?both#RoundTable>RoundTable

   - 



Regards,
Amar

On Mon, Mar 25, 2019 at 8:53 PM Amar Tumballi Suryanarayan <
atumb...@redhat.com> wrote:

> Thanks for the feedback Darrell,
>
> The new proposal is to have one in North America 'morning' time. (10AM
> PST), And another in ASIA day time, which is evening 7pm/6pm in Australia,
> 9pm Newzealand, 5pm Tokyo, 4pm Beijing.
>
> For example, if we choose Every other Tuesday for meeting, and 1st of the
> month is Tuesday, we would have North America time for 1st, and on 15th it
> would be ASIA/Pacific time.
>
> Hopefully, this way, we can cover all the timezones, and meeting minutes
> would be committed to github repo, so that way, it will be easier for
> everyone to be aware of what is happening.
>
> Regards,
> Amar
>
> On Mon, Mar 25, 2019 at 8:40 PM Darrell Budic 
> wrote:
>
>> As a user, I’d like to visit more of these, but the time slot is my 3AM.
>> Any possibility for a rolling schedule (move meeting +6 hours each week
>> with rolling attendance from maintainers?) or an occasional regional
>> meeting 12 hours opposed to the one you’re proposing?
>>
>>   -Darrell
>>
>> On Mar 25, 2019, at 4:25 AM, Amar Tumballi Suryanarayan <
>> atumb...@redhat.com> wrote:
>>
>> All,
>>
>> We currently have 3 meetings which are public:
>>
>> 1. Maintainer's Meeting
>>
>> - Runs once in 2 weeks (on Mondays), and current attendance is around 3-5
>> on an avg, and not much is discussed.
>> - Without majority attendance, we can't take any decisions too.
>>
>> 2. Community meeting
>>
>> - Supposed to happen on #gluster-meeting, every 2 weeks, and is the only
>> meeting which is for 'Community/Users'. Others are for developers as of
>> now.
>

Re: [Gluster-Maintainers] [Gluster-users] Proposal: Changes in Gluster Community meetings

2019-03-25 Thread Amar Tumballi Suryanarayan
Thanks for the feedback, Darrell.

The new proposal is to have one meeting at a North America 'morning' time
(10AM PST), and another during the ASIA day time, which is 7pm/6pm in the
evening in Australia, 9pm in New Zealand, 5pm in Tokyo, 4pm in Beijing.

For example, if we choose every other Tuesday for the meeting, and the 1st of
the month is a Tuesday, we would have the North America time on the 1st, and
on the 15th it would be the ASIA/Pacific time.

Hopefully, this way, we can cover all the timezones, and the meeting minutes
would be committed to the github repo, so it will be easier for everyone to
be aware of what is happening.

Regards,
Amar

On Mon, Mar 25, 2019 at 8:40 PM Darrell Budic 
wrote:

> As a user, I’d like to visit more of these, but the time slot is my 3AM.
> Any possibility for a rolling schedule (move meeting +6 hours each week
> with rolling attendance from maintainers?) or an occasional regional
> meeting 12 hours opposed to the one you’re proposing?
>
>   -Darrell
>
> On Mar 25, 2019, at 4:25 AM, Amar Tumballi Suryanarayan <
> atumb...@redhat.com> wrote:
>
> All,
>
> We currently have 3 meetings which are public:
>
> 1. Maintainer's Meeting
>
> - Runs once in 2 weeks (on Mondays), and current attendance is around 3-5
> on an avg, and not much is discussed.
> - Without majority attendance, we can't take any decisions too.
>
> 2. Community meeting
>
> - Supposed to happen on #gluster-meeting, every 2 weeks, and is the only
> meeting which is for 'Community/Users'. Others are for developers as of
> now.
> Sadly attendance is getting closer to 0 in recent times.
>
> 3. GCS meeting
>
> - We started it as an effort inside Red Hat gluster team, and opened it up
> for community from Jan 2019, but the attendance was always from RHT
> members, and haven't seen any traction from wider group.
>
> So, I have a proposal to call out for cancelling all these meeting, and
> keeping just 1 weekly 'Community' meeting, where even topics related to
> maintainers and GCS and other projects can be discussed.
>
> I have a template of a draft template @
> https://hackmd.io/OqZbh7gfQe6uvVUXUVKJ5g
>
> Please feel free to suggest improvements, both in agenda and in timings.
> So, we can have more participation from members of community, which allows
> more user - developer interactions, and hence quality of project.
>
> Waiting for feedbacks,
>
> Regards,
> Amar
>
>
> ___
> Gluster-users mailing list
> gluster-us...@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>
>
>

-- 
Amar Tumballi (amarts)
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] Proposal: Changes in Gluster Community meetings

2019-03-25 Thread Amar Tumballi Suryanarayan
All,

We currently have 3 meetings which are public:

1. Maintainer's Meeting

- Runs once every 2 weeks (on Mondays); current attendance is around 3-5 on
average, and not much is discussed.
- Without majority attendance, we can't take any decisions either.

2. Community meeting

- Supposed to happen on #gluster-meeting every 2 weeks, and is the only
meeting which is for 'Community/Users'; the others are for developers as of
now. Sadly, attendance has been getting closer to 0 in recent times.

3. GCS meeting

- We started it as an effort inside the Red Hat gluster team and opened it up
to the community from Jan 2019, but the attendance was always from RHT
members, and we haven't seen any traction from the wider group.

So, I have a proposal to cancel all these meetings and keep just one weekly
'Community' meeting, where even topics related to maintainers, GCS and other
projects can be discussed.

I have a draft template @
https://hackmd.io/OqZbh7gfQe6uvVUXUVKJ5g

Please feel free to suggest improvements, both in the agenda and in the
timings, so we can have more participation from members of the community,
which allows more user - developer interaction and hence improves the
quality of the project.

Waiting for feedbacks,

Regards,
Amar
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [gluster-packaging] glusterfs-5.5 released

2019-03-19 Thread Amar Tumballi Suryanarayan
Team,

How can we get a debuginfo package for SUSE? It would help us debug a crash
on 5.5.


On Tue, 19 Mar, 2019, 8:30 PM Shyam Ranganathan, 
wrote:

> On 3/16/19 2:03 AM, Kaleb Keithley wrote:
> > Packages for the CentOS Storage SIG are now available for testing.
> > Please try them out and report test results on this list.
> >
> >   # yum install centos-release-gluster
> >   # yum install --enablerepo=centos-gluster5-test glusterfs-server
>
> The buildlogs servers do not yet have the RPMs for 5.5 to test. I did
> try to go and use the build artifacts from
> https://cbs.centos.org/koji/buildinfo?buildID=25417 but as there is no
> repo file, unable to install pointing to this source as the repo.
>
> Can this be fixed, or some alternate provided, so that the packages can
> be tested and reported back for publishing?
>
> Thanks,
> Shyam
> ___
> maintainers mailing list
> maintainers@gluster.org
> https://lists.gluster.org/mailman/listinfo/maintainers
>
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-users] Proposal to mark few features as Deprecated / SunSet from Version 5.0

2019-03-19 Thread Amar Tumballi Suryanarayan
Hi Jim,

On Tue, Mar 19, 2019 at 6:21 PM Jim Kinney  wrote:

>
> Issues with glusterfs fuse mounts cause issues with python file open for
> write. We have to use nfs to avoid this.
>
> Really want to see better back-end tools to facilitate cleaning up of
> glusterfs failures. If system is going to use hard linked ID, need a
> mapping of id to file to fix things. That option is now on for all exports.
> It should be the default If a host is down and users delete files by the
> thousands, gluster _never_ catches up. Finding path names for ids across
> even a 40TB mount, much less the 200+TB one, is a slow process. A network
> outage of 2 minutes and one system didn't get the call to recursively
> delete several dozen directories each with several thousand files.
>
>
Are you talking about some issues in the geo-replication module or in some
other application using the native mount? Happy to take the discussion about
these issues forward.

Are there any bugs open on this?

Thanks,
Amar


>
>
> nfs
> On March 19, 2019 8:09:01 AM EDT, Hans Henrik Happe  wrote:
>>
>> Hi,
>>
>> Looking into something else I fell over this proposal. Being a shop that
>> are going into "Leaving GlusterFS" mode, I thought I would give my two
>> cents.
>>
>> While being partially an HPC shop with a few Lustre filesystems,  we
>> chose GlusterFS for an archiving solution (2-3 PB), because we could find
>> files in the underlying ZFS filesystems if GlusterFS went sour.
>>
>> We have used the access to the underlying files plenty, because of the
>> continuous instability of GlusterFS'. Meanwhile, Lustre have been almost
>> effortless to run and mainly for that reason we are planning to move away
>> from GlusterFS.
>>
>> Reading this proposal kind of underlined that "Leaving GluserFS" is the
>> right thing to do. While I never understood why GlusterFS has been in
>> feature crazy mode instead of stabilizing mode, taking away crucial
>> features I don't get. With RoCE, RDMA is getting mainstream. Quotas are
>> very useful, even though the current implementation are not perfect.
>> Tiering also makes so much sense, but, for large files, not on a per-file
>> level.
>>
>> To be honest we only use quotas. We got scared of trying out new
>> performance features that potentially would open up a new back of issues.
>>
>> Sorry for being such a buzzkill. I really wanted it to be different.
>>
>> Cheers,
>> Hans Henrik
>> On 19/07/2018 08.56, Amar Tumballi wrote:
>>
>>
>> * Hi all, Over last 12 years of Gluster, we have developed many features,
>> and continue to support most of it till now. But along the way, we have
>> figured out better methods of doing things. Also we are not actively
>> maintaining some of these features. We are now thinking of cleaning up some
>> of these ‘unsupported’ features, and mark them as ‘SunSet’ (i.e., would be
>> totally taken out of codebase in following releases) in next upcoming
>> release, v5.0. The release notes will provide options for smoothly
>> migrating to the supported configurations. If you are using any of these
>> features, do let us know, so that we can help you with ‘migration’.. Also,
>> we are happy to guide new developers to work on those components which are
>> not actively being maintained by current set of developers. List of
>> features hitting sunset: ‘cluster/stripe’ translator: This translator was
>> developed very early in the evolution of GlusterFS, and addressed one of
>> the very common question of Distributed FS, which is “What happens if one
>> of my file is bigger than the available brick. Say, I have 2 TB hard drive,
>> exported in glusterfs, my file is 3 TB”. While it solved the purpose, it
>> was very hard to handle failure scenarios, and give a real good experience
>> to our users with this feature. Over the time, Gluster solved the problem
>> with it’s ‘Shard’ feature, which solves the problem in much better way, and
>> provides much better solution with existing well supported stack. Hence the
>> proposal for Deprecation. If you are using this feature, then do write to
>> us, as it needs a proper migration from existing volume to a new full
>> supported volume type before you upgrade. ‘storage/bd’ translator: This
>> feature got into the code base 5 years back with this patch
>> <http://review.gluster.org/4809>[1]. Plan was to use a block device
>> directly as a brick, which would help to handle disk-image storage much
>> easily in glusterfs. As the feature is not getting more contribution, and
>> we are not seeing any user traction on 

Re: [Gluster-Maintainers] [Gluster-users] Proposal to mark few features as Deprecated / SunSet from Version 5.0

2019-03-19 Thread Amar Tumballi Suryanarayan
Hi Hans,

Thanks for the honest feedback. Appreciate this.

On Tue, Mar 19, 2019 at 5:39 PM Hans Henrik Happe  wrote:

> Hi,
>
> Looking into something else I fell over this proposal. Being a shop that
> are going into "Leaving GlusterFS" mode, I thought I would give my two
> cents.
>
> While being partially an HPC shop with a few Lustre filesystems,  we chose
> GlusterFS for an archiving solution (2-3 PB), because we could find files
> in the underlying ZFS filesystems if GlusterFS went sour.
>
> We have used the access to the underlying files plenty, because of the
> continuous instability of GlusterFS'. Meanwhile, Lustre have been almost
> effortless to run and mainly for that reason we are planning to move away
> from GlusterFS.
>
> Reading this proposal kind of underlined that "Leaving GluserFS" is the
> right thing to do. While I never understood why GlusterFS has been in
> feature crazy mode instead of stabilizing mode, taking away crucial
> features I don't get. With RoCE, RDMA is getting mainstream. Quotas are
> very useful, even though the current implementation are not perfect.
> Tiering also makes so much sense, but, for large files, not on a per-file
> level.
>
>
It is the right concern to raise, and removing existing features is not a
good thing most of the time. But one thing we have noticed over the years is
that the features which we develop and do not take to completion cause the
major heart-burn. People think a feature is present, and it is already a few
years since it was introduced, but if the developers are not working on it,
users will always feel that the product doesn't work, because that one
feature didn't work.

Other than Quota, for all the other features in the proposal email, even
though we have *some* users, we are inclined towards deprecating them,
considering the project's overall goal of stability in the longer run.


> To be honest we only use quotas. We got scared of trying out new
> performance features that potentially would open up a new back of issues.
>
About Quota, we heard enough voices, so we will make sure we keep it. The
original email was a 'Proposal', and hence these opinions matter for the
decision.

> Sorry for being such a buzzkill. I really wanted it to be different.
>
We hear you. Please let us know one thing: which versions did you try?

We hope that, in the coming months, our recent focus on stability and
technical debt reduction will help you take another look at Gluster.


> Cheers,
> Hans Henrik
> On 19/07/2018 08.56, Amar Tumballi wrote:
>
>
> * Hi all, Over last 12 years of Gluster, we have developed many features,
> and continue to support most of it till now. But along the way, we have
> figured out better methods of doing things. Also we are not actively
> maintaining some of these features. We are now thinking of cleaning up some
> of these ‘unsupported’ features, and mark them as ‘SunSet’ (i.e., would be
> totally taken out of codebase in following releases) in next upcoming
> release, v5.0. The release notes will provide options for smoothly
> migrating to the supported configurations. If you are using any of these
> features, do let us know, so that we can help you with ‘migration’.. Also,
> we are happy to guide new developers to work on those components which are
> not actively being maintained by current set of developers. List of
> features hitting sunset: ‘cluster/stripe’ translator: This translator was
> developed very early in the evolution of GlusterFS, and addressed one of
> the very common question of Distributed FS, which is “What happens if one
> of my file is bigger than the available brick. Say, I have 2 TB hard drive,
> exported in glusterfs, my file is 3 TB”. While it solved the purpose, it
> was very hard to handle failure scenarios, and give a real good experience
> to our users with this feature. Over the time, Gluster solved the problem
> with it’s ‘Shard’ feature, which solves the problem in much better way, and
> provides much better solution with existing well supported stack. Hence the
> proposal for Deprecation. If you are using this feature, then do write to
> us, as it needs a proper migration from existing volume to a new full
> supported volume type before you upgrade. ‘storage/bd’ translator: This
> feature got into the code base 5 years back with this patch
> <http://review.gluster.org/4809>[1]. Plan was to use a block device
> directly as a brick, which would help to handle disk-image storage much
> easily in glusterfs. As the feature is not getting more contribution, and
> we are not seeing any user traction on this, would like to propose for
> Deprecation. If you are using the feature, plan to move to a supported
> gluster volume configuration, and have your setup ‘supported’ before
> upgrading to y

Re: [Gluster-Maintainers] [gluster-packaging] glusterfs-5.4 released

2019-03-13 Thread Amar Tumballi Suryanarayan
I am totally fine with v5.5; my suggestion of moving the tag applied only if
we considered calling the build with these two patches 5.4.

Calling the release 5.5 is totally OK, and our version numbering scheme
specifically calls out that, if something is very serious, we can break the
'release date' train.
-Amar

On Wed, Mar 13, 2019 at 6:13 PM Kaleb Keithley  wrote:

> The Version tag should be (considered) immutable. Please don't move it.
>
> If you want to add another tag to help us remember this issue that's fine.
>
> The other option which Shyam and I discussed was tagging v5.5.
>
>
> On Wed, Mar 13, 2019 at 8:32 AM Amar Tumballi Suryanarayan <
> atumb...@redhat.com> wrote:
>
>> We need to tag different commit may be? So the 'git checkout v5.4' points
>> to the correct commit?
>>
>> On Wed, 13 Mar, 2019, 4:40 PM Shyam Ranganathan, 
>> wrote:
>>
>>> Niels, Kaleb,
>>>
>>> We need to respin 5.4 with the 2 additional commits as follows,
>>>
>>> commit a00953ed212a7071b152c4afccd35b92fa5a682a (HEAD -> release-5,
>>> core: make compute_cksum function op_version compatible
>>>
>>> commit 8fb4631c65f28dd0a5e0304386efff3c807e64a4
>>> dict: handle STR_OLD data type in xdr conversions
>>>
>>> As the current build breaks rolling upgrades, we had held back on
>>> announcing 5.4 and are now ready with the fixes that can be used to
>>> respin 5.4.
>>>
>>> Let me know if I need to do anything more from my end for help with the
>>> packaging.
>>>
>>> Once the build is ready, we would be testing it out as usual.
>>>
>>> NOTE: As some users have picked up 5.4 the announce would also carry a
>>> notice, that they need to do a downserver upgrade to the latest bits
>>> owing to the patches that have landed in addition to the existing
>>> content.
>>>
>>> Thanks,
>>> Shyam
>>>
>>> On 3/5/19 8:59 AM, Shyam Ranganathan wrote:
>>> > On 2/27/19 5:19 AM, Niels de Vos wrote:
>>> >> On Tue, Feb 26, 2019 at 02:47:30PM +, jenk...@build.gluster.org
>>> wrote:
>>> >>> SRC:
>>> https://build.gluster.org/job/release-new/80/artifact/glusterfs-5.4.tar.gz
>>> >>> HASH:
>>> https://build.gluster.org/job/release-new/80/artifact/glusterfs-5.4.sha512sum
>>> >>
>>> >> Packages for the CentOS Storage SIG are now available for testing.
>>> >> Please try them out and report test results on this list.
>>> >>
>>> >>   # yum install centos-release-gluster
>>> >>   # yum install --enablerepo=centos-gluster5-test glusterfs-server
>>> >
>>> > Due to patch [1] upgrades are broken, so we are awaiting a fix or
>>> revert
>>> > of the same before requesting a new build of 5.4.
>>> >
>>> > The current RPMs should hence not be published.
>>> >
>>> > Sanju/Hari, are we reverting this patch so that we can release 5.4, or
>>> > are we expecting the fix to land in 5.4 (as in [2])?
>>> >
>>> > Thanks,
>>> > Shyam
>>> >
>>> > [1] Patch causing regression:
>>> https://review.gluster.org/c/glusterfs/+/22148
>>> >
>>> > [2] Proposed fix on master:
>>> https://review.gluster.org/c/glusterfs/+/22297/
>>> > ___
>>> > maintainers mailing list
>>> > maintainers@gluster.org
>>> > https://lists.gluster.org/mailman/listinfo/maintainers
>>> >
>>> ___
>>> maintainers mailing list
>>> maintainers@gluster.org
>>> https://lists.gluster.org/mailman/listinfo/maintainers
>>>
>> ___
>> maintainers mailing list
>> maintainers@gluster.org
>> https://lists.gluster.org/mailman/listinfo/maintainers
>>
>

-- 
Amar Tumballi (amarts)
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [gluster-packaging] glusterfs-5.4 released

2019-03-13 Thread Amar Tumballi Suryanarayan
Maybe we need to tag a different commit, so that 'git checkout v5.4' points
to the correct commit?

On Wed, 13 Mar, 2019, 4:40 PM Shyam Ranganathan, 
wrote:

> Niels, Kaleb,
>
> We need to respin 5.4 with the 2 additional commits as follows,
>
> commit a00953ed212a7071b152c4afccd35b92fa5a682a (HEAD -> release-5,
> core: make compute_cksum function op_version compatible
>
> commit 8fb4631c65f28dd0a5e0304386efff3c807e64a4
> dict: handle STR_OLD data type in xdr conversions
>
> As the current build breaks rolling upgrades, we had held back on
> announcing 5.4 and are now ready with the fixes that can be used to
> respin 5.4.
>
> Let me know if I need to do anything more from my end for help with the
> packaging.
>
> Once the build is ready, we would be testing it out as usual.
>
> NOTE: As some users have already picked up 5.4, the announcement will also
> carry a notice that they need to do a down-server upgrade to the latest
> bits, owing to the patches that have landed in addition to the existing
> content.
>
> Thanks,
> Shyam
>
> On 3/5/19 8:59 AM, Shyam Ranganathan wrote:
> > On 2/27/19 5:19 AM, Niels de Vos wrote:
> >> On Tue, Feb 26, 2019 at 02:47:30PM +, jenk...@build.gluster.org
> wrote:
> >>> SRC:
> https://build.gluster.org/job/release-new/80/artifact/glusterfs-5.4.tar.gz
> >>> HASH:
> https://build.gluster.org/job/release-new/80/artifact/glusterfs-5.4.sha512sum
> >>
> >> Packages for the CentOS Storage SIG are now available for testing.
> >> Please try them out and report test results on this list.
> >>
> >>   # yum install centos-release-gluster
> >>   # yum install --enablerepo=centos-gluster5-test glusterfs-server
> >
> > Due to patch [1] upgrades are broken, so we are awaiting a fix or revert
> > of the same before requesting a new build of 5.4.
> >
> > The current RPMs should hence not be published.
> >
> > Sanju/Hari, are we reverting this patch so that we can release 5.4, or
> > are we expecting the fix to land in 5.4 (as in [2])?
> >
> > Thanks,
> > Shyam
> >
> > [1] Patch causing regression:
> https://review.gluster.org/c/glusterfs/+/22148
> >
> > [2] Proposed fix on master:
> https://review.gluster.org/c/glusterfs/+/22297/
> > ___
> > maintainers mailing list
> > maintainers@gluster.org
> > https://lists.gluster.org/mailman/listinfo/maintainers
> >
> ___
> maintainers mailing list
> maintainers@gluster.org
> https://lists.gluster.org/mailman/listinfo/maintainers
>
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] GlusterFS - 6.0RC - Test days (27th, 28th Feb)

2019-03-04 Thread Amar Tumballi Suryanarayan
Thanks to those who participated.

Update at present:

We found 3 blocker bugs in upgrade scenarios, and hence have marked the
release as pending on them. We will keep these lists updated on progress.

-Amar

On Mon, Feb 25, 2019 at 11:41 PM Amar Tumballi Suryanarayan <
atumb...@redhat.com> wrote:

> Hi all,
>
> We are calling out our users, and developers to contribute in validating
> ‘glusterfs-6.0rc’ build in their usecase. Specially for the cases of
> upgrade, stability, and performance.
>
> Some of the key highlights of the release are listed in release-notes
> draft
> <https://github.com/gluster/glusterfs/blob/release-6/doc/release-notes/6.0.md>.
> Please note that there are some of the features which are being dropped out
> of this release, and hence making sure your setup is not going to have an
> issue is critical. Also the default lru-limit option in fuse mount for
> Inodes should help to control the memory usage of client processes. All the
> good reason to give it a shot in your test setup.
>
> If you are developer using gfapi interface to integrate with other
> projects, you also have some signature changes, so please make sure your
> project would work with latest release. Or even if you are using a project
> which depends on gfapi, report the error with new RPMs (if any). We will
> help fix it.
>
> As part of test days, we want to focus on testing the latest upcoming
> release i.e. GlusterFS-6, and one or the other gluster volunteers would be
> there on #gluster channel on freenode to assist the people. Some of the key
> things we are looking as bug reports are:
>
>-
>
>See if upgrade from your current version to 6.0rc is smooth, and works
>as documented.
>- Report bugs in process, or in documentation if you find mismatch.
>-
>
>Functionality is all as expected for your usecase.
>- No issues with actual application you would run on production etc.
>-
>
>Performance has not degraded in your usecase.
>- While we have added some performance options to the code, not all of
>   them are turned on, as they have to be done based on usecases.
>   - Make sure the default setup is at least same as your current
>   version
>   - Try out few options mentioned in release notes (especially,
>   --auto-invalidation=no) and see if it helps performance.
>-
>
>While doing all the above, check below:
>- see if the log files are making sense, and not flooding with some
>   “for developer only” type of messages.
>   - get ‘profile info’ output from old and now, and see if there is
>   anything which is out of normal expectation. Check with us on the 
> numbers.
>   - get a ‘statedump’ when there are some issues. Try to make sense
>   of it, and raise a bug if you don’t understand it completely.
>
>
> <https://hackmd.io/YB60uRCMQRC90xhNt4r6gA?both#Process-expected-on-test-days>Process
> expected on test days.
>
>-
>
>We have a tracker bug
><https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-6.0>[0]
>- We will attach all the ‘blocker’ bugs to this bug.
>-
>
>Use this link to report bugs, so that we have more metadata around
>given bugzilla.
>- Click Here
>   
> <https://bugzilla.redhat.com/enter_bug.cgi?blocked=1672818_severity=high=core=high=GlusterFS_whiteboard=gluster-test-day=6>
>   [1]
>-
>
>The test cases which are to be tested are listed here in this sheet
>
> <https://docs.google.com/spreadsheets/d/1AS-tDiJmAr9skK535MbLJGe_RfqDQ3j1abX1wtjwpL4/edit?usp=sharing>[2],
>please add, update, and keep it up-to-date to reduce duplicate efforts.
>
> Lets together make this release a success.
>
> Also check if we covered some of the open issues from Weekly untriaged
> bugs
> <https://lists.gluster.org/pipermail/gluster-devel/2019-February/055874.html>
> [3]
>
> For details on build and RPMs check this email
> <https://lists.gluster.org/pipermail/gluster-devel/2019-February/055875.html>
> [4]
>
> Finally, the dates :-)
>
>- Wednesday - Feb 27th, and
>    - Thursday - Feb 28th
>
> Note that our goal is to identify as many issues as possible in upgrade
> and stability scenarios, and if any blockers are found, want to make sure
> we release with the fix for same. So each of you, Gluster users, feel
> comfortable to upgrade to 6.0 version.
>
> Regards,
> Gluster Ants.
>
> --
> Amar Tumballi (amarts)
>


-- 
Amar Tumballi (amarts)
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Various upgrades are Broken

2019-03-04 Thread Amar Tumballi Suryanarayan
Thanks for testing this, Hari.

On Mon, Mar 4, 2019 at 5:42 PM Hari Gowtham  wrote:

> Hi,
>
> With the patch https://review.gluster.org/#/c/glusterfs/+/21838/ the
> upgrade from 3.12 to 6, 4.1 to 6 and 5 to 6 is broken.
>
> The above patch is available in release 6 and has been back-ported to 4.1
> and 5.
> Though no release has been made with this patch on 4.1 and 5, if one is
> made there are a number of scenarios that will fail. A few are mentioned
> below:
>

Considering there is no release with this patch in it, let's not consider
backporting it at all.


> 3.12 to 4.1 with patch
> 3.12 to 5 with patch
> 4.1 to 4.1 with patch
> 4.1 to any higher versions with patch.
> 5 to 5 or higher version with patch.
>
> The fix is being worked on. Until then, it is a request to stop making
> releases to avoid further complications.
>
>
Also, we can revert this patch in release-6 right away, as this fix is
supposed to help AFR configs with gNFS. Ravi, you know more of the history
of this patch; is there anything more we should be considering?
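
If we do go the revert route, a minimal sketch of what that could look like
on the release branch (the commit hash is a placeholder, and the Gerrit
refs/for push assumes the usual review flow rather than a direct push):

  # git checkout release-6
  # git revert <backport-commit-of-21838>
  # git push origin HEAD:refs/for/release-6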


>
> --
> Regards,
> Hari Gowtham.
>


-- 
Amar Tumballi (amarts)
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] GlusterFS - 6.0RC - Test days (27th, 28th Feb)

2019-02-25 Thread Amar Tumballi Suryanarayan
Hi all,

We are calling on our users and developers to contribute to validating the
‘glusterfs-6.0rc’ build in their use cases, especially for upgrade,
stability, and performance.

Some of the key highlights of the release are listed in the release-notes draft
<https://github.com/gluster/glusterfs/blob/release-6/doc/release-notes/6.0.md>.
Please note that some features are being dropped from this release, so making
sure your setup is not going to hit an issue is critical. Also, the default
lru-limit option in the fuse mount, which bounds the client's inode table,
should help control the memory usage of client processes. All the more reason
to give it a shot in your test setup.
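
As a rough sketch of trying that out (server and volume names are
illustrative, and the exact mount-option spelling should be verified against
the 6.0 documentation), the limit can be set at mount time:

  # mount -t glusterfs -o lru-limit=65536 server1:/testvol /mnt/testvol

A lower value keeps fewer inodes cached on the client and therefore lowers
memory use, at the cost of more lookups on re-access.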

If you are a developer using the gfapi interface to integrate with other
projects, there are also some signature changes, so please make sure your
project works with the latest release. Even if you are only using a project
which depends on gfapi, report any errors seen with the new RPMs. We will
help fix them.

As part of the test days, we want to focus on testing the latest upcoming
release, i.e. GlusterFS-6, and one or another Gluster volunteer will be on
the #gluster channel on freenode to assist. Some of the key things we are
looking for in bug reports are:

   -

   See if upgrade from your current version to 6.0rc is smooth, and works
   as documented.
   - Report bugs in process, or in documentation if you find mismatch.
   -

   Functionality is all as expected for your usecase.
   - No issues with actual application you would run on production etc.
   -

   Performance has not degraded in your usecase.
   - While we have added some performance options to the code, not all of
  them are turned on by default, as they have to be enabled based on usecases.
  - Make sure the default setup is at least the same as your current version.
  - Try out a few options mentioned in the release notes (especially
  --auto-invalidation=no) and see if they help performance.
   -

   While doing all the above, check the items below (see the command sketch
   after this list):
   - see if the log files are making sense, and are not flooding with some
  “for developer only” type of messages.
  - get ‘profile info’ output from the old version and from 6.0rc, and see if
  there is anything out of normal expectation. Check with us on the numbers.
  - get a ‘statedump’ when there are issues. Try to make sense of it, and
  raise a bug if you don’t understand it completely.
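
A minimal sketch of the commands referred to above. The volume name is
illustrative, and the 6.x test repository name is an assumption by analogy
with the 5.x test repository used earlier in these threads:

  # yum install centos-release-gluster
  # yum install --enablerepo=centos-gluster6-test glusterfs-server
  # gluster volume profile testvol start
  # gluster volume profile testvol info > profile-after-upgrade.txt
  # gluster volume statedump testvol

Statedumps land under /var/run/gluster on the brick nodes by default, and
capturing ‘profile info’ both before and after the upgrade makes the
comparison meaningful.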

<https://hackmd.io/YB60uRCMQRC90xhNt4r6gA?both#Process-expected-on-test-days>Process
expected on test days.

   -

   We have a tracker bug
   <https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-6.0>[0]
   - We will attach all the ‘blocker’ bugs to this bug.
   -

   Use this link to report bugs, so that we have more metadata around given
   bugzilla.
   - Click Here
  
<https://bugzilla.redhat.com/enter_bug.cgi?blocked=1672818_severity=high=core=high=GlusterFS_whiteboard=gluster-test-day=6>
  [1]
   -

   The test cases which are to be tested are listed here in this sheet
   
<https://docs.google.com/spreadsheets/d/1AS-tDiJmAr9skK535MbLJGe_RfqDQ3j1abX1wtjwpL4/edit?usp=sharing>[2],
   please add, update, and keep it up-to-date to reduce duplicate efforts.

Let's make this release a success together.

Also check if we covered some of the open issues from Weekly untriaged bugs
<https://lists.gluster.org/pipermail/gluster-devel/2019-February/055874.html>
[3]

For details on build and RPMs check this email
<https://lists.gluster.org/pipermail/gluster-devel/2019-February/055875.html>
[4]

Finally, the dates :-)

   - Wednesday - Feb 27th, and
   - Thursday - Feb 28th

Note that our goal is to identify as many issues as possible in upgrade and
stability scenarios, and if any blockers are found, to make sure we release
with fixes for them, so that each of you, Gluster users, can feel
comfortable upgrading to the 6.0 version.

Regards,
Gluster Ants.

-- 
Amar Tumballi (amarts)
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-devel] Release 6: Branched and next steps

2019-02-20 Thread Amar Tumballi Suryanarayan
On Tue, Feb 19, 2019 at 1:37 AM Shyam Ranganathan 
wrote:

> In preparation for RC0 I have put up an intial patch for the release
> notes [1]. Request the following actions on the same (either a followup
> patchset, or a dependent one),
>
> - Please review!
> - Required GD2 section updated to latest GD2 status
>

I am inclined to drop the GD2 section for 'standalone' users, as the team
worked with the goal of making GD2 invisible behind containers (GCS) in
mind. So, should we call out any GD2 features at all?

Anyway, as per my previous email on GCS release updates, we are planning to
have a container available with GD2 and glusterfs, which can be used by
people who are trying out GD2.


> - Require notes on "Reduce the number or threads used in the brick
> process" and the actual status of the same in the notes
>
>
This work is still in progress, and we are treating it as a bug fix for the
'brick-multiplex' usecase, which is mainly required for the scaled
volume-count usecase in the container world. My guess is that we won't have
much content to add for glusterfs-6.0 at the moment.


> RC0 build target would be tomorrow or by Wednesday.
>
>
Thanks. I was testing a few upgrade and mixed-version cluster scenarios.
With 4.1.6 and the latest release-6.0 branch, things work fine. I haven't
done much load testing yet.

Requesting people's help with upgrade testing, across different volume
options and different usecase scenarios.

Regards,
Amar



> Thanks,
> Shyam
>
> [1] Release notes patch: https://review.gluster.org/c/glusterfs/+/6
>
> On 2/5/19 8:25 PM, Shyam Ranganathan wrote:
> > Hi,
> >
> > Release 6 is branched, and tracker bug for 6.0 is created [1].
> >
> > Do mark blockers for the release against [1].
> >
> > As of now we are only tracking [2] "core: implement a global thread pool
> > " for a backport as a feature into the release.
> >
> > We expect to create RC0 tag and builds for upgrade and other testing
> > close to mid-week next week (around 13th Feb), and the release is slated
> > for the first week of March for GA.
> >
> > I will post updates to this thread around release notes and other
> > related activity.
> >
> > Thanks,
> > Shyam
> >
> > [1] Tracker: https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-6.0
> >
> > [2] Patches tracked for a backport:
> >   - https://review.gluster.org/c/glusterfs/+/20636
> > ___
> > Gluster-devel mailing list
> > gluster-de...@gluster.org
> > https://lists.gluster.org/mailman/listinfo/gluster-devel
> >
> ___
> Gluster-devel mailing list
> gluster-de...@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
>
>
>

-- 
Amar Tumballi (amarts)
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] Meeting minutes: Feb 04th, 2019

2019-02-05 Thread Amar Tumballi Suryanarayan
BJ Link

   - Bridge: https://bluejeans.com/217609845
   - Watch: https://bluejeans.com/s/37SS6/

<https://hackmd.io/yTC-un5XT6KUB9V37LG6OQ?both#Attendance>Attendance

   - Nigel Babu
   - Sunil Heggodu
   - Amar Tumballi
   - Aravinda VK
   - Atin Mukherjee

<https://hackmd.io/yTC-un5XT6KUB9V37LG6OQ?both#Agenda>Agenda

   -

   Gluster Performance Runs (on Master):
   - Some regressions in current master compared to 3.12.
  - A few operations had major regressions.
  - The entry serialization (SDFS) feature caused a regression. We have
  disabled it by default, and plan to ask users to turn it on for edge cases.
  - Some patches are currently being reviewed for perf improvements
  which are not enabled by default.
  - See Xavi’s email on perf improvements in self-heal
  <https://lists.gluster.org/pipermail/gluster-devel/2019-January/055807.html>.
  This can cause some regression on sequential IO.
  - [Nigel] Can we publish posts on 3.12 perf and our machine specs?
  Then we can do a follow-up post after the 6 release.
  - Yes. This is a release highlight that we want to talk about.
   -

   GlusterFS 6.0 branching:
   - upgrade tests, especially with some removed volume types and options.
 - [Atin] I’ve started testing some of the upgrade scenarios
 (glusterfs-5 to latest master), and have some observations around
 some of the tiering-related options, which are leading to a peer
 rejection issue post upgrade; we need changes to avoid the peer
 rejection failures. The GlusterD team will focus on this testing in
 the coming days.
  - performance patches - discussed earlier
  - shd-mux
 - [Atin] Shyam highlighted a concern about accepting this big a
 change so late and near the branching timeline, so it is most likely
 not going to make it into 6.0.
 - A risk because of the timeline. We will currently keep testing
 it on master, and once it is stable we could make an exception to
 merge it to release-6.
 - The changes are glusterd heavy, so we want to make sure it’s
 thoroughly tested so we don’t cause regressions.
  -

   GCS - v1.0
   - Can we announce it yet?
 - [Atin] Hit a blocker issue in GD2,
 https://github.com/gluster/gcs/issues/129 ; root-cause analysis is in
 progress. Testing of https://github.com/gluster/gcs/pull/130 is
 blocked because of this. We are still positive we can nail this
 down by tomorrow and call out GCS 1.0 by tomorrow.
  - GCS has a website now - https://gluster.github.io/gcs/ . Contribute by
  sending patches to the gh-pages branch of the github.com/gluster/gcs repo.
  - What does it take to run the containers from Gluster (CSI/GD2 etc.)
  on ARM architecture host machines?
 - It should theoretically work, given Gluster has been known to
 work on ARM. And we know that k8s on ARM is something that people do.
 - Might be useful to kick it off on a Raspberry Pi and see what
 breaks.
  -

   We need more content on the website, and in general on the internet. How
   do we motivate developers to write blogs?
   - A new theme is proposed for the upstream documentation via the pull request
  https://github.com/gluster/glusterdocs/pull/454
  - Test website: https://my-doc-sunil.readthedocs.io/en/latest/
   -

   Round Table:
   - Nigel: AWS migration will happen this week and regressions will be a
  little flakey. Please bear with us.


-- 
Amar Tumballi (amarts)
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] Maintainer's meeting: Jan 21st, 2019

2019-01-22 Thread Amar Tumballi Suryanarayan
BJ Link

   - Bridge: https://bluejeans.com/217609845
   - Watch: https://bluejeans.com/s/PAnE5

<https://hackmd.io/yTC-un5XT6KUB9V37LG6OQ?both#Attendance>Attendance

   - Nigel Babu, Amar, Nithya, Shyam, Sunny, Milind (joined late).

<https://hackmd.io/yTC-un5XT6KUB9V37LG6OQ?both#Agenda>Agenda

   -

   GlusterFS - v6.0 - Are we ready for branching?
   - Can we consider getting https://review.gluster.org/20636 (lock free
  thread pool) as an option in the code, so we can have it?
 - Let's try to keep it as an option, and backport it if it is not
 ready by the end of this week.
  - Fencing? - Most probable to make it.
  - python3 support for glusterfind -
  https://review.gluster.org/#/c/glusterfs/+/21845/
  - Self-heal daemon multiplexing?
  - Reflink?
  - Any other performance enhancements?
   -

   Infra Updates
   - Moving to new cloud vendor this week. Expect some flakiness. This is
  on a timeline we do not control and already quite significantly delayed.
  - Going to delete old master builds from
  http://artifacts.ci.centos.org/gluster/nightly/
  - Not deleting the release branch artifacts.
   -

   Performance regression test bed
   - Have machines, can we get started with bare minimum tests
  - All we need is the result to be out in public
  - Basic tests are present. Some more test failures, so resolving that
  should be good enough.
  - Will be picked up after above changes.
   -

   Round Table
   - Have a look at website and suggest what more is required.


-- 
Amar Tumballi (amarts)
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] Gluster Maintainer's meeting: 7th Jan, 2019 - Meeting minutes

2019-01-08 Thread Amar Tumballi Suryanarayan
Meeting date: 2019-01-07 18:30 IST, 13:00 UTC, 08:00 EDT
<https://hackmd.io/yTC-un5XT6KUB9V37LG6OQ?both#BJ-Link>BJ Link

   - Bridge: https://bluejeans.com/217609845
   - Watch: https://bluejeans.com/s/sGFpa

<https://hackmd.io/yTC-un5XT6KUB9V37LG6OQ?both#Attendance>Attendance
<https://hackmd.io/yTC-un5XT6KUB9V37LG6OQ?both#Agenda>Agenda

   -

   Welcome 2019: New goals / Discuss:
   - https://hackmd.io/OiQId65pStuBa_BPPazcmA
  - Give it a week and take it to mailing list, discuss and agree upon
  - [Nigel] Some of the above points are threads of its own. May need
  separate thread.
   -

   Progress with GCS
   -

  Email about GCS in community.
  -

  RWX:
  - Scale testing showing GD2 can scale to 1000s of PVs (each is a
 gluster volume)
 - Bricks with LVM
 - Some delete issues seen, especially with LV commands at scale. Patch
 sent.
 - Create rate: 500 PVs / 12mins
 - More details by end of the week, including delete numbers.
  -

  RWO:
  - new CSI for gluster-block showing good scale numbers, which is
 reaching higher than current 1k RWO PV per cluster, but need
to iron out
 few things. (https://github.com/gluster/gluster-csi-driver/pull/105
 )
 - 280 pods in 3 hosts, 1-1 Pod->PV ratio: leaner graph.
 - 1080 PVs with 1-12 ratio on 3 machines
 - Working on 3000+ PVC on just 3 hosts, will update by another 2
 days.
 - Poornima is coming up with steps and details about the
 PR/version used etc.
  -

   Static Analyzers:
   - glusterfs:
 - Coverity - 63 open
 - https://scan.coverity.com/projects/gluster-glusterfs?tab=overview
 - clang-scan - 32 open
 -
 
https://build.gluster.org/job/clang-scan/lastCompletedBuild/clangScanBuildBugs/
  - gluster-block:
 -
 https://scan.coverity.com/projects/gluster-gluster-block?tab=overview
 - coverity: 1 open (66 last week)
  -

   GlusterFS-6:
   - Any priority review needed?
 - Fencing patches
 - Reducing threads (GH Issue: 475)
 - glfs-api statx patches [merged]
  - What are the critical areas need focus?
 - Asan Build ? Currently not green
 - Some java errors, machine offline. Need to look into this.
  - How to make glusto automated tests become blocker for the release?
  - Upgrade tests, need to start early.
  - Schedule as called out in the mail
  
<https://lists.gluster.org/pipermail/gluster-devel/2018-December/055721.html>
  NOTE: Working backwards on the schedule, here’s what we have:
 - Announcement: Week of Mar 4th, 2019
 - GA tagging: Mar-01-2019
 - RC1: On demand before GA
 - RC0: Feb-04-2019
 - Late features cut-off: Week of Jan-21st, 2018
 - Branching (feature cutoff date): Jan-14-2018 (~45 days prior to
 branching)
 - Feature/scope proposal for the release (end date): Dec-12-2018
  -

   Round Table?
   - [Sunny] Meetup in BLR this weekend. Please do come (at least those who
  are in BLR)
  - [Susant] Softserve has a 4-hour timeout, which is not enough for a full
  regression cycle. Can we get at least 2 more hours added, so a full
  regression can be run?


---

On Mon, Jan 7, 2019 at 9:04 AM Amar Tumballi Suryanarayan <
atumb...@redhat.com> wrote:

>
> Meeting date: 2019-01-07 18:30 IST, 13:00 UTC, 08:00 EDTBJ Link
>
>- Bridge: https://bluejeans.com/217609845
>
> <https://hackmd.io/yTC-un5XT6KUB9V37LG6OQ?both#Attendance>Attendance
> <https://hackmd.io/yTC-un5XT6KUB9V37LG6OQ?both#Agenda>Agenda
>
>-
>
>Welcome 2019: Discuss about goals :
>- https://hackmd.io/OiQId65pStuBa_BPPazcmA
>-
>
>Progress with GCS
>- Scale testing showing GD2 can scale to 1000s of PVs (each is a
>   gluster volume, in RWX mode)
>   - new CSI for gluster-block showing good scale numbers, which is
>   reaching higher than current 1k RWO PV per cluster, but need to iron out
>   few things. (https://github.com/gluster/gluster-csi-driver/pull/105)
>-
>
>Performance focus:
>- Any update? What are the patch in progress?
>   - How to measure the perf of a patch, is there any hardware?
>-
>
>Static Analyzers:
>- glusterfs:
>  - coverity - 63 open
>  - clang-scan - 32 open (with many false-positives).
>   - gluster-block:
>  - coverity: 1 open (66 last week)
>   -
>
>GlusterFS-6:
>- Any priority review needed?
>   - What are the critical areas need focus?
>   - How to make glusto automated tests become blocker for the release?
>   - Upgrade tests, need to start early.
>   - Schedule as called out in the mail
>   
> <https://

[Gluster-Maintainers] Gluster Maintainer's meeting: 7th Jan, 2019 - Agenda

2019-01-06 Thread Amar Tumballi Suryanarayan
Meeting date: 2019-01-07 18:30 IST, 13:00 UTC, 08:00 EDTBJ Link

   - Bridge: https://bluejeans.com/217609845

<https://hackmd.io/yTC-un5XT6KUB9V37LG6OQ?both#Attendance>Attendance
<https://hackmd.io/yTC-un5XT6KUB9V37LG6OQ?both#Agenda>Agenda

   -

   Welcome 2019: Discuss about goals :
   - https://hackmd.io/OiQId65pStuBa_BPPazcmA
   -

   Progress with GCS
   - Scale testing showing GD2 can scale to 1000s of PVs (each is a gluster
  volume, in RWX mode)
  - new CSI for gluster-block showing good scale numbers, which is
  reaching higher than current 1k RWO PV per cluster, but need to iron out
  few things. (https://github.com/gluster/gluster-csi-driver/pull/105)
   -

   Performance focus:
   - Any update? What are the patches in progress?
  - How do we measure the perf of a patch? Is there any hardware for it?
   -

   Static Analyzers:
   - glusterfs:
 - coverity - 63 open
 - clang-scan - 32 open (with many false-positives).
  - gluster-block:
 - coverity: 1 open (66 last week)
  -

   GlusterFS-6:
   - Any priority review needed?
  - What are the critical areas need focus?
  - How to make glusto automated tests become blocker for the release?
  - Upgrade tests, need to start early.
  - Schedule as called out in the mail
  
<https://lists.gluster.org/pipermail/gluster-devel/2018-December/055721.html>
  NOTE: Working backwards on the schedule, here’s what we have:
 - Announcement: Week of Mar 4th, 2019
 - GA tagging: Mar-01-2019
 - RC1: On demand before GA
 - RC0: Feb-04-2019
 - Late features cut-off: Week of Jan-21st, 2018
 - Branching (feature cutoff date): Jan-14-2018 (~45 days prior to
 branching)
 - Feature/scope proposal for the release (end date): Dec-12-2018
  -

   Round Table?

=

Feel free to add your topic into :
https://hackmd.io/yTC-un5XT6KUB9V37LG6OQ?edit


-- 
Amar Tumballi (amarts)
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] Maintainer meeting minutes: 12th Nov, 2018

2018-11-13 Thread Amar Tumballi
BJ Link

   - Bridge: https://bluejeans.com/217609845
   - Watch: https://bluejeans.com/s/CDuce

<https://hackmd.io/yTC-un5XT6KUB9V37LG6OQ?both#Attendance>Attendance

   - Amar
   - Kaleb
   - Nithya
   - Nigel
   - Deepshikha
   - Rafi
   - Vijay

<https://hackmd.io/yTC-un5XT6KUB9V37LG6OQ?both#Agenda>Agenda

   -

   AI from previous week
   - Emails on timelines of GlusterFS6 - DONE
   -

   Automatically close bugs with CLOSED->NEXT_RELEASE instead of MODIFIED
   state.
   - The MODIFIED state makes sense for a product, which would then move to
  ON_QA status, with a dedicated team testing bugs, etc.
  - Considering that there is no active testing effort on upstream patches,
  it makes sense to move the bug to NEXTRELEASE status.
  - Helps keep the ‘time-to-close’ record accurate.
  - More on it @ Maintainer’s ML
  
<https://lists.gluster.org/pipermail/maintainers/2018-November/005252.html>…
  check full conversation @ archive
  <https://lists.gluster.org/pipermail/maintainers/2018-November/>
  - AI: Raise an infra bug to get this implemented.
   -

   Handling Github links on specs repo
   - Slightly tricky since we tend to not spam other repos with bot
  comments.
  - specs repo does not have an issue tracker.
  - Ideally spec should be approved and at that point there will be an
  external tracker reference.
  - AI: send an email to the ML; default closure is when merged, and the
  patch will take action according to the github commit message (like
  Fixes: gluster/glusterfs#NNN etc.)
   -

   GlusterFS-6 : Anything more to add?
   -

  Discussion @ https://hackmd.io/sP5GsZ-uQpqnmGZmFKuWIg
  -

  How to use removed xlators in future? Will there be any
  experimentation at all?
  - All the removed xlators would be available in another repository,
 and would be provided as an RPM for users with every release.
 - Users can get them in the volume graph using GD2, as it now
 supports template based volfile generation.
  -

  VDSM had concerns about versioning scheme
  - AI: Need to understand more about version dependency.
 - Understanding is that, not every project picks tag, but
 picks .latest tag.
  -

  There is a need to understand how different projects are dependent on
  Gluster, and what are the nature of these dependencies.
  -

  ASan: Start picking the tests from beginning.
  -

  GD2 : Will it ever be default?
  -

 Needs to get tests running against it first to consider the
 proposal.
 -

 GCS is a good candidate to get GD2 out of the way.
 -

 Should Aravinda’s effort be revived? It can fail initially, but
 putting effort there would get us started on this.
 -

 Should we consider handling it like Glusto, where they are making
 changes in library level, and FS tests remain same.
 -

   GCS Status update:
   - Automated builds for CSI driver, updates.
  - Gluster Prometheus is now part of GCS.
  - Gluster-block is TBD
   -

   Round table:
   - All good!


-- 
Amar Tumballi (amarts)
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Bug state change proposal based on the conversation on bz 1630368

2018-11-05 Thread Amar Tumballi
On Mon, Nov 5, 2018 at 8:59 PM Shyam Ranganathan 
wrote:

> On 11/05/2018 09:43 AM, Yaniv Kaul wrote:
> >
> >
> > On Mon, Nov 5, 2018 at 4:28 PM Niels de Vos <nde...@redhat.com> wrote:
> >
> > On Mon, Nov 05, 2018 at 05:31:26PM +0530, Pranith Kumar Karampuri
> wrote:
> > > hi,
> > > When we create a bz on master and clone it to the next
> > release(In my
> > > case it was release-5.0), after that release happens can we close
> > the bz on
> > > master with CLOSED NEXTRELEASE?
> >
> >
> > Since no one is going to verify it (right now, but I'm hopeful this will
> > change in the future!), no point in keeping it open.
> > You could keep it open and move it along the process, and then close it
> > properly when you release the next release.
> > It's kinda pointless if no one's going to do anything with it between
> > MODIFIED to CLOSED.
> > I mean - assuming you move it to ON_QA - who's going to do the
> verification?
>
> The link provided by Niels is the "proper" process, but there are a few
> gotchas here (which are noted in the comments provided in this mail as
> well),
>
> - Moving from MODIFIED to ON_QA assumes/needs packages to be made
> available, these are made available only when we prepare for the
> release, else bug reporters or QE need to use nightly builds to verify
> the same
>
> - Further, once on ON_QA we are not getting these verified as Yaniv
> states, so moving this out of the ON_QA state would not happen, and the
> bug would stay in limbo here till the release is made with the
> unverified(?) fix
>
> Here is what happens automatically at present,
>
> - Bugs move to POST and MODIFIED states as patches against the same are
> posted and then merged (with the patch commit message stating it "Fixes"
> and not just "Updates" the bug)
>
> - From here on, when the bug lands in a release and the release notes
> are prepared to notify that the said bugs are fixed, these bugs are
> moved to CLOSED-CURRENTRELEASE (using the release tools scripts [2])
>
> The tool moving the bug to the CLOSED state, is in reality to catch any
> bugs that are not in the right state, ideally it would be correct to
> only move those bugs that are VERIFIED to the closed state, but again as
> stated, current manner of dealing with the bugs does not include a
> verification step.
>
> So the time a bug spends between MODIFIED to CLOSED, states that it is
> merged (into the said branch against which the bug is filed) and
> awaiting a release.
>
> Instead the suggestion is to reflect that state more clearly as
> CLOSED-NEXTRELEASE.
>
> The automation hence can change to the following,
>
> - Do not move to MODIFIED when the patch is merged, but move it to
> CLOSED-NEXTRELEASE
>
> - The release tools would change these to CLOSED-CURRENTRELEASE with the
> "fixed in" version set right, when the release is made
>
> The change would be constant for bugs against master and against release
> branches. If we need to specialize this for bugs on master to move to
> only MODIFIED till it is merged into a release branch, that calls for
> more/changed automation and also a definition of what NEXTRELEASE means
> when a bug is filed against a branch.
>
> IMO, a bug on master marked NEXTRELEASE, means it lands when a next
> major release is made, and a bug on a branch marked NEXTRELEASE is when
> the next major (time between branching and GA/.0 of the branch) or,
> minor release is made.
>
> If we go with the above, the only automation change is not to move bugs
> to MODIFIED, but just push it to CLOSED-NEXTRELEASE instead.
>
> Based on the current state of lacking verification, this change is
> possible.
>
>
Yes, the above is what I had in mind.

Change the current script to update the bug to CLOSED-NEXTRELEASE instead of
MODIFIED.
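
For reference, a minimal sketch of what that script change could boil down
to, using the Bugzilla REST API (the endpoint and field names follow the
public Bugzilla REST documentation, the bug ID is the one from this thread's
subject, and the API key is a placeholder):

  # curl -s -X PUT \
      "https://bugzilla.redhat.com/rest/bug/1630368?api_key=REDACTED" \
      -H "Content-Type: application/json" \
      -d '{"status": "CLOSED", "resolution": "NEXTRELEASE"}'

The release tooling would later flip the resolution to CURRENTRELEASE and
set the 'fixed in version' field.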

-Amar


> Thoughts?
>
> >
> > In oVirt, QE actually verifies upstream bugs, so there is value. They
> > are also all appear in the release notes, with their status and so on.
> > Y.
> >
> >
> > Yes, I think that can be done. Not sure what the advantage is, an
> > explanation for this suggestion would be nice :)
> >
> > I am guessing it will be a little clearer for users that find the
> > CLOSED/NEXTRELEASE bug? It would need the next major version in the
> > "fixed in version" field too though (or 'git describe' after
> merging).
> >
> > If this gets done, someone will need to update the bug report
> lifecycle
> > at
> >
> https://docs.gluster.org/en/latest/Contributors-Guide/Bug-report-Life-Cycle/
> >
> > Uhmm, actually, that page already mentions CLOSED/NEXTRELEASE!
> >
> > Niels
> >
> ___
> maintainers mailing list
> maintainers@gluster.org
> https://lists.gluster.org/mailman/listinfo/maintainers
>


-- 
Amar Tumballi (amarts)
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Bug state change proposal based on the conversation on bz 1630368

2018-11-05 Thread Amar Tumballi
I am in favor of this suggestion, mainly because CLOSING a bug means some
metrics (like time to CLOSE etc.) would look much better and more accurate.
This way, we can reduce the clutter in bugzilla too, and when the script
runs after a release, it can actually change the status from
CLOSED/NEXTRELEASE to CLOSED/CURRENTRELEASE with the actual release string.

-Amar

On Mon, Nov 5, 2018 at 7:58 PM Niels de Vos  wrote:

> On Mon, Nov 05, 2018 at 05:31:26PM +0530, Pranith Kumar Karampuri wrote:
> > hi,
> > When we create a bz on master and clone it to the next release(In my
> > case it was release-5.0), after that release happens can we close the bz
> on
> > master with CLOSED NEXTRELEASE?
>
> Yes, I think that can be done. Not sure what the advantage is, an
> explanation for this suggestion would be nice :)
>
> I am guessing it will be a little clearer for users that find the
> CLOSED/NEXTRELEASE bug? It would need the next major version in the
> "fixed in version" field too though (or 'git describe' after merging).
>
> If this gets done, someone will need to update the bug report lifecycle
> at
> https://docs.gluster.org/en/latest/Contributors-Guide/Bug-report-Life-Cycle/
>
> Uhmm, actually, that page already mentions CLOSED/NEXTRELEASE!
>
> Niels
> ___
> maintainers mailing list
> maintainers@gluster.org
> https://lists.gluster.org/mailman/listinfo/maintainers
>


-- 
Amar Tumballi (amarts)
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] Maintainer's meeting minutes : 29th October, 2018

2018-10-29 Thread Amar Tumballi
BJ Link

   - Bridge: https://bluejeans.com/217609845
   - Watch: https://bluejeans.com/s/zH9eH

<https://hackmd.io/yTC-un5XT6KUB9V37LG6OQ?both#Attendance>Attendance

   - Nigel, Kaleb, Nithya, Shyam, Amar, Raghavendra Bhat, Atin, Sunny,
   Vijay Bellur, Vijay Baskar

<https://hackmd.io/yTC-un5XT6KUB9V37LG6OQ?both#Agenda>Agenda

   -

   AI from previous week?
   -

   [Nigel] Status of glusterfs-selinux
   - Currently waiting for it to be tested by our friends in QE.
  -
   -

   GlusterFS 6 ?
   - What to focus?
 - Draft @ https://hackmd.io/sP5GsZ-uQpqnmGZmFKuWIg
 - Need people to focus on this.
 - Can we also estimate time and dependencies
 - This is a good time to fix a lot of infra that Gluster uses.
 - Getting things to completion is more important than doing all
 the items in the list.
 - We should pick what we can finish.
 - The infra is ready - the xlator stack etc. But these are not
 useful unless used.

=

Note that most of discussions happened about GlusterFS-6 release, so the
discussions, and notes are captured in the link for the 'Draft', which will
be sent as separate emails to the group.

Regards,
Amar

-- 
Amar Tumballi (amarts)
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-users] Maintainer meeting minutes : 15th Oct, 2018

2018-10-23 Thread Amar Tumballi
On Mon, Oct 15, 2018 at 11:57 PM Shyam Ranganathan 
wrote:

> ### BJ Link
> * Bridge: https://bluejeans.com/217609845
> * Watch: 
>

Link here: https://bluejeans.com/s/GKbUD



>
> ### Attendance
> * Nigel, Nithya, Deepshikha, Akarsha, Kaleb, Shyam, Sunny
>
> ### Agenda
> * AI from previous meeting:
>   - Glusto-Test completion on release-5 branch - On Glusto team
>   - Vijay will take this on.
>   - He will be focusing it on next week.
>   - Glusto for 5 may not be happening before release, but we'll do
> it right after release it looks like.
>
> - Release 6 Scope
> - Will be sending out an email today/tomorrow for scope of release 6.
> - Send a biweekly email with focus on glusterfs release focus areas.
>
> - GCS scope into release-6 scope and get issues marked against the same
> - For release-6 we want a thinner stack. This means we'd be removing
> xlators from the code that Amar has already sent an email about.
> - Locking support for gluster-block. Design still WIP. One of the
> big ticket items that should make it to release 6. Includes reflink
> support and enough locking support to ensure snapshots are consistent.
> - GD1 vs GD2. We've been talking about it since release-4.0. We need
> to call this out and understand if we will have GD2 as default. This is
> call out for a plan for when we want to make this transition.
>
> - Round Table
> - [Nigel] Minimum build and CI health for all projects (including
> sub-projects).
> - This was primarily driven for GCS
> - But, we need this even otherwise to sustain quality of projects
> - AI: Call out on lists around release 6 scope, with a possible
> list of sub-projects
> - [Kaleb] SELinux package status
> - Waiting on testing to understand if this is done right
> - Can be released when required, as it is a separate package
> - Release-5 the SELinux policies are with Fedora packages
> - Need to coordinate with Fedora release, as content is in 2
> packages
> - AI: Nigel to follow up and get updates by the next meeting
>
> ___
> Gluster-users mailing list
> gluster-us...@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>
>
>

-- 
Amar Tumballi (amarts)
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] GlusterFS sosreport plugin - what's the state of freshness of it?

2018-10-10 Thread Amar Tumballi
Sorry! It got missed out earlier!

IMO, sosreport is still a good way to collect a summary of system state
while debugging GlusterFS issues.

The Gluster plugin in sos takes a statedump and collects the log files,
which is what most developers who are debugging issues want. So, this is
fine.

There was a proposal to keep the sos plugin within the glusterfs repo, and
install it in the right place when the glusterfs package gets installed (if
sosreport is already installed).

That way, we would keep it up to date with the code. I will check if there
are any bugs for it, and see what we can do about this.
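
As a quick sketch of how the plugin is typically exercised (the plugin
option name is an assumption based on the upstream gluster.py plugin, so it
is worth double-checking against the installed sos version):

  # sosreport -o gluster -k gluster.dump=on

This limits collection to the gluster plugin and asks it to trigger
statedumps in addition to gathering the logs.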

Regards,
Amar

On Tue, Oct 9, 2018 at 11:11 AM Sankarshan Mukhopadhyay <
sankarshan.mukhopadh...@gmail.com> wrote:

> It has been a while and I haven't seen any responses - is sosreport
> plugin the preferred way to collect reports/logs?
>
>
>
> On Wed, Oct 3, 2018 at 8:26 PM Sankarshan Mukhopadhyay
>  wrote:
> >
> > I am assuming that
> > <https://github.com/sosreport/sos/blob/master/sos/plugins/gluster.py>
> > is the one which is relevant to GlusterFS
> >
> > Can the maintainers take a bit of time to review and ascertain whether
> > this is sufficient in the output it archives to aid the
> > trouble-shooting? Is this kept current with the changes in the
> > components across the releases? What is missing here?
> >
> > Please reply to this thread here and then perhaps we can see what the
> > state of things are.
>
>
>
> --
> sankarshan mukhopadhyay
> <https://about.me/sankarshan.mukhopadhyay>
> ___
> maintainers mailing list
> maintainers@gluster.org
> https://lists.gluster.org/mailman/listinfo/maintainers
>


-- 
Amar Tumballi (amarts)
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] Maintainer meeting minutes : 1st Oct, 2018

2018-10-01 Thread Amar Tumballi
BJ Link

   - Bridge: https://bluejeans.com/217609845
   - Watch: https://bluejeans.com/s/eNNfZ

<https://hackmd.io/yTC-un5XT6KUB9V37LG6OQ?both#Attendance>Attendance

   - Jonathan (loadtheacc), Vijay Baskar, Amar Tumballi, Deepshikha,
   Raghavendra Bhat, Shyam, Kaleb, Akarsha Rai,

<https://hackmd.io/yTC-un5XT6KUB9V37LG6OQ?both#Agenda>Agenda

   -

   AI from previous week
   - Status check on bi-weekly Bugzilla/Github issues tracker ? Any
  progress?
  - Glusto Status and update?
   -

   [added by sankarshan; possibly requires Glusto and Glusto-test
   maintainers] Status of Test Automation
   -
  - Glusto and tests updates:
   - Glusto focus in Py3 port, alpha/beta branch for py3 possibly by next
  week, around 75% complete (ball park)
 - client and server need to be the same version of py (2 or 3)
 - Documentation and minor tweaks/polishing work
  - New feature in py3 would be the log format, would be configurable
  - Couple of issues:
 - scandir needing gcc, fixed upstream for this issue
 - cartplex
 
<https://github.com/loadtheaccumulator/glusto/commit/b4874812886d3727a800111ad6b5579860d56f72>
  - Glusto tests:
 - Priority is fixing the test cases, that are failing to get
 Glusto passing upstream
 - When running the tests as a suite, some tests are failing due to
 cleanup not correct in all test cases
 - Working on the above on a per component basis
 - Next up is py2->3, currently blocked on Glusto for some parts,
 certain other parts like IO modules are ported to py3
 - Post Glusto, porting of tests would take 2 months
 - Next: GD2 libraries, target Dec. to complete the work
 - Testing Glusto against release-5:
- Can we run against RC0 for release-5?
- Requires a setup, can we use the same setup used by the
Glusto-tests team?
- AI: Glusto-tests team to run against release-5 and help
provide results to the lists
 - Some components are being deprecated, so priorotization of
 correcting tests per component can leverage the same
- AI: Vijay to sync with Amar on the list
 - Can we port Glusto and tests in parallel?
- Interface to Gluto remains the same, and hence the port can
start in parallel, but cannot run till Glusto port is
ready, but first cut
Glusto should be available in week, so work can start.
 -

   Release 5:
   - Py3 patches need to be accomodated
 - Needs backports from master
  - noatime needs to be merged
  - gbench initial runs complete (using CentOS packages)
  - glusto, upgrade tests, release notes pending
  - Regression nightly issue handled on Friday, need to check health
  - Mid-Oct release looks feasible as of now, with an RC1 by end of
  this week or Monday next week
   -

   GD2 release:
   - Can there be a production release, which just supports volume and peer
  life-cycle, but no features?
  - This may get us more users trying it out and giving feedback.
  - Not every user is using all the glusterd features like geo-rep,
  snapshot or quota.
  - Can we take it one step at a time, and treat it as a new project, not
  a replacement?
  - Ref: gluster-user email on gd2 status
  
<https://lists.gluster.org/pipermail//gluster-users/2018-October/035012.html>
  .
  - AI: Take the discussion to mailing list
  - [Vijay] If we make it as separate releases, will it impact release
  of GCS?
 - [Amar] I don’t think so, it would be more milestones for the
 project, instead of just 1 big goal.
  -

   Distributed-regression testing:
   - A few of the tests are taking more time than expected.
  - One of them, tests/bugs/index/bug-1559004-EMLINK-handling.t, is
  taking around 14 mins, which adds to the overall time of running the
  test suite (not just in the distributed environment but also in
  centos7-regression).
  - The author of the test or the maintainer needs to look at it.
   -

   New Peer/Maintainers proposals ?
   -

   Round Table
   - [Kaleb] CVE in golang packages, so we need to update the bundle of GD2.


-- 
Amar Tumballi (amarts)
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Release 5: New option noatime

2018-09-27 Thread Amar Tumballi
On Thu, Sep 27, 2018 at 6:40 PM Shyam Ranganathan 
wrote:

> On 09/27/2018 09:08 AM, Shyam Ranganathan wrote:
> > Writing this to solicit opinions on merging this [1] change that
> > introduces an option late in the release cycle.
> >
> > I went through the code, and most changes are standard option handling
> > and basic xlator scaffolding, other than the change in posix xlator code
> > that handles the flag to not set atime and the code in utime xlator that
> > conditionally sets the flag. (of which IMO the latter is more important
> > than the former, as posix is just acting on the flag).
> >
> > The option if enabled would hence not update atime for the following
> > FOPs, opendir, open, read, and would continue updating atime on the
> > following FOPs fallocate and zerofill (which also update mtime, so the
> > AFR self heal on time change would kick in anyways).
> >
> > As the option suggests, with it enables atime is almost meaningless and
> > hence it almost does not matter where we update it and where not. Just
> > considering the problem where atime changes cause AFR to trigger a heal,
> > and the FOPs above that strictly only change atime handled with this
> > option, I am looking at this as functionally workable.
> >
>

Thanks for all these details, Shyam! Helps many to understand what the
feature is.


> > So IMO we can accept this even though it is late, but would like to hear
> > from others if this needs to be deferred till release-6.
> >
>

I am all for accepting this for glusterfs-5.0! Two reasons: in one of the
quick setups we tried, it helped Elasticsearch run smoothly on glusterfs
mounts. Second, we did hear from Anuradha/Ram in another email thread
(Cloudsync with AFR) that it helped them solve their issue.

This particular patch makes the overhead of the ctime feature much, much
smaller!
-Amar


> > Shyam
>
> [1] Patch under review: https://review.gluster.org/c/glusterfs/+/21281
> ___
> maintainers mailing list
> maintainers@gluster.org
> https://lists.gluster.org/mailman/listinfo/maintainers
>


-- 
Amar Tumballi (amarts)
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Lock down period merge process

2018-09-27 Thread Amar Tumballi
Top-posting, as I am not trying to answer any individual points!

It is my wish that we don't get into a lock-down state! But there may be
times when it is needed. My take is that we go with an approach which works
for the majority of cases, and once we have been through it 1-2 times, let's
do another retrospective of the events that happened during the lock-down,
and then improve further. Planning too much for the future won't get us any
value at this time. We have bi-weekly maintainer meetings, where we can
propose changes and get to solutions. None of this is written in stone, so
let's move on :-)
-Amar


On Thu, Sep 27, 2018 at 8:18 PM Shyam Ranganathan 
wrote:

> On 09/27/2018 10:05 AM, Atin Mukherjee wrote:
> > Now does this mean we block commit rights for component Y till
> > we have the root cause?
> >
> >
> > It was a way of making it someone's priority. If you have another
> > way to make it someone's priority that is better than this, please
> > suggest and we can have a discussion around it and agree on it :-).
> >
> >
> > This is what I can think of:
> >
> > 1. Component peers/maintainers take a first triage of the test failure.
> > Do the initial debugging and (a) point to the component which needs
> > further debugging or (b) seek for help at gluster-devel ML for
> > additional insight for identifying the problem and narrowing down to a
> > component.
> > 2. If it’s (1 a) then we already know the component and the owner. If
> > it’s (2 b) at this juncture, it’s all maintainers responsibility to
> > ensure the email is well understood and based on the available details
> > the ownership is picked up by respective maintainers. It might be also
> > needed that multiple maintainers might have to be involved and this is
> > why I focus on this as a group effort than individual one.
>
> In my thinking, acting as a group here is better than making it a
> sub-groups/individuals responsibility. Which has been put forth by Atin
> (IMO) well. Thus, keep the merge rights out for all (of course some
> still need to have it), and get the situation addressed is better.
> ___
> maintainers mailing list
> maintainers@gluster.org
> https://lists.gluster.org/mailman/listinfo/maintainers
>


-- 
Amar Tumballi (amarts)
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-devel] Glusto Happenings

2018-09-24 Thread Amar Tumballi
Planning to discuss this in next week's Gluster Maintainer's meeting.
Please make sure we have Glusto maintainers in the meeting, so we can have
a good syncup!

-Amar

On Wed, Sep 19, 2018 at 10:15 PM, Jonathan Holloway 
wrote:

> Sounds good. I'll talk to Vijay and Akarsha about providing updates on
> some of their activities with the test repo too.
>
> Cheers,
> Jonathan
>
> On Mon, Sep 17, 2018 at 7:37 PM Amye Scavarda  wrote:
>
>> Adding Maintainers as that's the group that will be more interested in
>> this.
>> Our next maintainers meeting is October 1st, want to present on what the
>> current status is there?
>> - amye
>>
>> On Mon, Sep 17, 2018 at 12:29 AM Jonathan Holloway 
>> wrote:
>>
>>> Hi Gluster-devel,
>>>
>>> It's been awhile, since we updated gluster-devel on things related to
>>> Glusto.
>>>
>>> The big thing in the works for Glusto is Python3 compatibility.
>>> A port is in progress, and the target is October to have a branch ready
>>> for testing. Look for another update here when that is available.
>>>
>>> Thanks to Vijay Avuthu for testing a change to the Python2 version of
>>> Carteplex (the cartesian product module in Glusto that drives the runs_on
>>> decorator used in Gluster tests). Tests inheriting from GlusterBaseClass
>>> have been using im_func to make calls against the base class setUp method.
>>> This change allows the use of super() as well as im_func.
>>>
>>> On a related note, the syntax for both im_func and super() changes in
>>> Python3. The "Developer Guide for Tests and Libraries" section of the
>>> glusterfs/glusto-tests docs currently shows 
>>> "GlusterBaseClass.setUp.im_func(self)",
>>> but will be updated with the preferred call for Python3.
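
A minimal sketch (the test class name and the import path are assumptions
based on glusto-tests conventions) of the two setUp call styles described
above:

  from glustolibs.gluster.gluster_base_class import GlusterBaseClass

  class TestVolumeSanity(GlusterBaseClass):
      def setUp(self):
          # Python 2-only spelling used by existing tests:
          #   GlusterBaseClass.setUp.im_func(self)
          # Portable spelling that also works after the Python 3 port:
          super(TestVolumeSanity, self).setUp()
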
>>>
>>> And lastly, you might have seen an issue with tests under Python2 where
>>> a run kicked off via py.test or /usr/bin/glusto would immediately fail with
>>> a message indicating gcc needs to be installed. The problem was specific to
>>> a recent update of PyTest and scandir, and the original workaround was to
>>> install gcc or a previous version of pytest and scandir. The scandir
>>> maintainer fixed the issue upstream with scandir 1.9.0 (available in PyPI).
>>>
>>> That's all for now.
>>>
>>> Cheers,
>>> Jonathan (loadtheacc)
>>>
>>>
>>> ___
>>> Gluster-devel mailing list
>>> gluster-de...@gluster.org
>>> https://lists.gluster.org/mailman/listinfo/gluster-devel
>>
>>
>>
>> --
>> Amye Scavarda | a...@redhat.com | Gluster Community Lead
>>
>
> ___
> Gluster-devel mailing list
> gluster-de...@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
>



-- 
Amar Tumballi (amarts)
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] Maintainer's meeting minutes: 17th Sept, 2018

2018-09-18 Thread Amar Tumballi
BJ Link

   - Bridge: https://bluejeans.com/217609845
   - Watch: https://bluejeans.com/s/MI4xY

<https://hackmd.io/yTC-un5XT6KUB9V37LG6OQ?both#Attendance>Attendance

   - misc, ndevos (added a few points to 'Version 5 Status'),
   - Nigel, Amar, Nithya, Raghavendra Bhat, Shyam, Kaushal, Pranith, Sunny

<https://hackmd.io/yTC-un5XT6KUB9V37LG6OQ?both#Agenda>Agenda

   -

   AI from previous week:
   - clang-format: Finally complete
  - Need to rebase patches
 - One good way is to submit it with the basic changes; the
 clang-formatter job then suggests the right patch to apply, and you
 can use that to get the final patch.
 - Talk to Nigel/Infra team if you are blocked on rebasing an
 existing patch. He wants to understand what the problems are and
 how he can help.
 - [Shyam]:
  - Any feedbacks? - Use email list
  - Use bz#1564149 <https://bugzilla.redhat.com/show_bug.cgi?id=1564149>
   -

   GCS Status:
   - Deploy scripts mostly working fine: https://github.com/kshlm/gd2-k8s
 - Few more patches required in GD2.
  - Where to send the PR ?
 - Plan is to send it to GCS, as the plan is to get more issues and
 discussions, along with documentation on that repo.
  - k8s is initial focus, openshift origin would also be added in
  future.
  -
   -

   Version 5 Status
   - Goal: Stability release
  - GD2 tagging is not done!
 - Pending on GCS dependant patches to get merged!
  - GlusterFS Branching done!
 - RC0 Tagging today (post cleanup patches are merged)
 - Need to get some cadence on testing release branches [Shyam]
- Infra bug for release dash (regression and mux to start with)
- Glusto with FUSE
- Gbench
- More ideas welcome!
     - PR to add release-5 to Jenkins jobs (centosci#19
  <https://github.com/gluster/centosci/pull/19>)
  - preparations for CentOS Storage SIG
 - seeding of dependencies (cbs
 <https://cbs.centos.org/koji/builds?tagID=1621>)
 - centos-release-gluster5 package with YUM .repo file (cbs
 <https://cbs.centos.org/koji/packageinfo?packageID=7242>)
 - repository sync for testing has been requested (req#15282
 <https://bugs.centos.org/view.php?id=15282>)
  - Release and options notes/documentations to start [Shyam]
   -

   Triaging:
   - ~1050 Bugs
  
<https://bugzilla.redhat.com/report.cgi?x_axis_field=bug_status_axis_field=version_axis_field=_redirect=1_format=report-table_desc_type=allwordssubstr_desc==Community=GlusterFS_status=NEW_status=ASSIGNED_status=POST_status=MODIFIED_status=ON_DEV_status=ON_QA_status=VERIFIED_status=RELEASE_PENDING_type=allwordssubstr=_file_loc_type=allwordssubstr_file_loc=_whiteboard_type=allwordssubstr_whiteboard=_type=allwords===_id=_id_type=anyexact=_type=greaterthaneq=substring==substring==substringNow_top=AND=noop=noop==table=wrap>
   (111 on 3.12, ~50 on 4.1, and 850+ on mainline)
     - ~400 MODIFIED on master (maybe these get closed with the v5.0 release)
     - Everyone having a look at their components and resolving bugs would help.
     - At a glance, at least 50+ RFEs are still open in Bugzilla from a long
       time back.
     - Use component-wise health status, and present it back to RHT programs.
 - AI: Nigel to volunteer to see how the status check helps. Shyam
 to help Nigel as a buddy in this effort.
- Look at tools to see most of these issues. Understand the
volume, see what is the effort needed.
 - 261 Open Github Issues
  <https://github.com/gluster/glusterfs/issues>
   -

   New Peer/Maintainers proposals:
   - Sunny to Snapshot
   - Shubhendu Tripathi for gluster-prometheus
  -
   -

   Round Table
   - [Nigel] Can we have a section in this meeting every month to propose new
     maintainers/peers? We seem to be quite hesitant to add new folks (shout
     out to Krutika for adding Xavi).
      - Raghavendra Bhat sent one more patch for adding Sunny (to Snapshot).
   - [Shyam] Discuss “Lock down period merge process” mail thread
     <https://lists.gluster.org/pipermail/maintainers/2018-September/004991.html>
      - Should we block at a component level or at a project level?
         - Components are not fully decoupled, hence allowing merges in other
           parts can make getting to stability harder [Nigel, Shyam]
         - Also, handling this at a component level goes towards a “good
           faith” approach rather than a tooling-and-process approach, so it
           is not a direction we want to go in as a project.




On Mon, Sep 17, 2018 at 8:49 AM, Amar Tumballi  wrote:

> BJ Link
>
>- Bridge: https://bluejeans.com/217609845
>- Watch:
>
> <https://hackmd.io/yTC-un5XT6KUB9V37LG6OQ?both#Attendance>Attendance
>
>

Re: [Gluster-Maintainers] [Gluster-devel] Clang-Formatter for GlusterFS.

2018-09-18 Thread Amar Tumballi
On Tue, Sep 18, 2018 at 2:33 PM, Kotresh Hiremath Ravishankar <
khire...@redhat.com> wrote:

> I have a different problem. clang is complaining on the 4.1 back port of a
> patch which was merged in master before clang-format was brought in. Is
> there a way I can get smoke +1 for 4.1? It won't be neat to have clang
> changes in 4.1 and not in master for the same patch, and it might further
> affect clean back ports.
>
>
This is a bug. Please file a 'project-infrastructure' bug to disable the
clang-format job on release branches (other than the release-5 branch).

-Amar


> - Kotresh HR
>
> On Tue, Sep 18, 2018 at 2:13 PM, Ravishankar N 
> wrote:
>
>>
>>
>> On 09/18/2018 02:02 PM, Hari Gowtham wrote:
>>
>>> I see that the procedure mentioned in the coding standard document is
>>> buggy.
>>>
>>> git show --pretty="format:" --name-only | grep -v "contrib/" | egrep
>>> "*\.[ch]$" | xargs clang-format -i
>>>
>>> The above command edited the whole file, which is not supposed to happen.
>>>
>> It works fine on fedora 28 (clang version 6.0.1). I had the same problem
>> you faced on fedora 26 though, presumably because of the older clang
>> version.
>> -Ravi
>>
>>
>>
>>> +1 for the readability of the code having been affected.
>>> On Mon, Sep 17, 2018 at 10:45 AM Amar Tumballi 
>>> wrote:
>>>
>>>>
>>>>
>>>> On Mon, Sep 17, 2018 at 10:00 AM, Ravishankar N 
>>>> wrote:
>>>>
>>>>>
>>>>> On 09/13/2018 03:34 PM, Niels de Vos wrote:
>>>>>
>>>>>> On Thu, Sep 13, 2018 at 02:25:22PM +0530, Ravishankar N wrote:
>>>>>> ...
>>>>>>
>>>>>>> What rules does clang impose on function/argument wrapping and
>>>>>>> alignment? I
>>>>>>> somehow found the new code wrapping to be random and highly
>>>>>>> unreadable. An
>>>>>>> example of 'before and after' the clang format patches went in:
>>>>>>> https://paste.fedoraproject.org/paste/dC~aRCzYgliqucGYIzxPrQ
>>>>>>> Wondering if
>>>>>>> this is just me or is it some problem of spurious clang fixes.
>>>>>>>
>>>>>> I agree that this example looks pretty ugly. Looking at random changes
>>>>>> to the code where I am most active does not show this awkward
>>>>>> formatting.
>>>>>>
>>>>>
>>>>> So one of my recent patches is failing smoke and clang-format is
>>>>> insisting [https://build.gluster.org/job/clang-format/22/console] on
>>>>> wrapping function arguments in an unsightly manner. Should I resend my
>>>>> patch with this new style of wrapping ?
>>>>>
>>>>> I would say yes! We will get better, by changing options of
>>>> clang-format once we get better options there. But for now, just following
>>>> the option suggested by clang-format job is good IMO.
>>>>
>>>> -Amar
>>>>
>>>> Regards,
>>>>> Ravi
>>>>>
>>>>>
>>>>>
>>>>> However, I was expecting to see enforcing of the
>>>>>> single-line-if-statements like this (and while/for/.. loops):
>>>>>>
>>>>>>   if (need_to_do_it) {
>>>>>>do_it();
>>>>>>   }
>>>>>>
>>>>>> instead of
>>>>>>
>>>>>>   if (need_to_do_it)
>>>>>>do_it();
>>>>>>
>>>>>> At least the conversion did not take care of this. But, maybe I'm
>>>>>> wrong
>>>>>> as I can not find the discussion in https://bugzilla.redhat.com/15
>>>>>> 64149
>>>>>> about this. Does someone remember what was decided in the end?
>>>>>>
>>>>>> Thanks,
>>>>>> Niels
>>>>>>
>>>>>
>>>>>
>>>>
>>>> --
>>>> Amar Tumballi (amarts)
>>>> ___
>>>> Gluster-devel mailing list
>>>> gluster-de...@gluster.org
>>>> https://lists.gluster.org/mailman/listinfo/gluster-devel
>>>>
>>>
>>>
>>>
>> ___
>> Gluster-devel mailing list
>> gluster-de...@gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-devel
>>
>
>
>
> --
> Thanks and Regards,
> Kotresh H R
>



-- 
Amar Tumballi (amarts)
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-devel] Clang-Formatter for GlusterFS.

2018-09-16 Thread Amar Tumballi
On Mon, Sep 17, 2018 at 10:00 AM, Ravishankar N 
wrote:

>
> On 09/13/2018 03:34 PM, Niels de Vos wrote:
>
>> On Thu, Sep 13, 2018 at 02:25:22PM +0530, Ravishankar N wrote:
>> ...
>>
>>> What rules does clang impose on function/argument wrapping and
>>> alignment? I
>>> somehow found the new code wrapping to be random and highly unreadable.
>>> An
>>> example of 'before and after' the clang format patches went in:
>>> https://paste.fedoraproject.org/paste/dC~aRCzYgliqucGYIzxPrQ Wondering
>>> if
>>> this is just me or is it some problem of spurious clang fixes.
>>>
>> I agree that this example looks pretty ugly. Looking at random changes
>> to the code where I am most active does not show this awkward
>> formatting.
>>
>
> So one of my recent patches is failing smoke and clang-format is insisting
> [https://build.gluster.org/job/clang-format/22/console] on wrapping
> function arguments in an unsightly manner. Should I resend my patch with
> this new style of wrapping ?
>
>
I would say yes! We can improve the formatting later by changing clang-format
options once better options are available. But for now, just following what
the clang-format job suggests is good IMO.

-Amar


> Regards,
> Ravi
>
>
>
>
>> However, I was expecting to see enforcing of the
>> single-line-if-statements like this (and while/for/.. loops):
>>
>>  if (need_to_do_it) {
>>   do_it();
>>  }
>>
>> instead of
>>
>>  if (need_to_do_it)
>>   do_it();
>>
>> At least the conversion did not take care of this. But, maybe I'm wrong
>> as I can not find the discussion in https://bugzilla.redhat.com/1564149
>> about this. Does someone remember what was decided in the end?
>>
>> Thanks,
>> Niels
>>
>
>


-- 
Amar Tumballi (amarts)
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] Maintainer's meeting Agenda: 17th Sept, 2018

2018-09-16 Thread Amar Tumballi
BJ Link

   - Bridge: https://bluejeans.com/217609845
   - Watch:

Attendance

   -

Agenda

   -

   AI from previous week:
   - clang-format: Finally complete
  - Need to rebase patches
  - Any feedback?
   - Use bz#1564149
   -

   GCS Status
   - Deploy scripts mostly working fine: https://github.com/kshlm/gd2-k8s
   - What's the major difference between gluster-k8s and this?
  - Where to send the PR ?
   -

   Version 5 Status
   - Goal: Stability release
  - Branching done!
  -
   -

   Triaging:
   - 1050 Bugs (111 on 3.12, ~50 on 4.1, and 850+ on mainline)
      - Everyone having a look at their components and resolving bugs would help.
  - 261 Open Github Issues 

-

Add your points at https://hackmd.io/yTC-un5XT6KUB9V37LG6OQ?both
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Build failed in Jenkins: regression-test-with-multiplex #857

2018-09-13 Thread Amar Tumballi
This looks to be an issue with clang-format changes...

On Wed 12 Sep, 2018, 7:58 PM ,  wrote:

> See <
> https://build.gluster.org/job/regression-test-with-multiplex/857/display/redirect?page=changes
> >
>
> Changes:
>
> [Vijay Bellur] doc: add coding-standard and commit-msg link in README
>
> [Amar Tumballi] dht: Use snprintf instead of strncpy
>
> [Amar Tumballi] doc: make developer-index.md as README
>
> [Nigel Babu] clang-format: add the config file
>
> [Nigel Babu] Land clang-format changes
>
> [Nigel Babu] Land part 2 of clang-format changes
>
> [Amar Tumballi] template files: revert clang
>
> --
> [...truncated 188.30 KB...]
> in a given directory, LIBDIR, you must either use libtool, and
> specify the full pathname of the library, or use the `-LLIBDIR'
> flag during linking and do at least one of the following:
>- add LIBDIR to the `LD_LIBRARY_PATH' environment variable
>  during execution
>- add LIBDIR to the `LD_RUN_PATH' environment variable
>  during linking
>- use the `-Wl,-rpath -Wl,LIBDIR' linker flag
>- have your system administrator add LIBDIR to `/etc/ld.so.conf'
>
> See any operating system documentation about shared libraries for
> more information, such as the ld(1) and ld.so(8) manual pages.
> --
> make[5]: Nothing to be done for `install-exec-am'.
> make[5]: Nothing to be done for `install-data-am'.
> Making install in utime
> Making install in src
> /usr/bin/python2 <
> https://build.gluster.org/job/regression-test-with-multiplex/ws/xlators/features/utime/src/utime-gen-fops-h.py>
> <
> https://build.gluster.org/job/regression-test-with-multiplex/ws/xlators/features/utime/src/utime-autogen-fops-tmpl.h>
> > utime-autogen-fops.h
> make --no-print-directory install-am
>   CC   utime-helpers.lo
>   CC   utime.lo
> /usr/bin/python2 <
> https://build.gluster.org/job/regression-test-with-multiplex/ws/xlators/features/utime/src/utime-gen-fops-c.py>
> <
> https://build.gluster.org/job/regression-test-with-multiplex/ws/xlators/features/utime/src/utime-autogen-fops-tmpl.c>
> > utime-autogen-fops.c
>   CC   utime-autogen-fops.lo
>   CCLD utime.la
> make[6]: Nothing to be done for `install-exec-am'.
>  /usr/bin/mkdir -p '/build/install/lib/glusterfs/4.2dev/xlator/features'
>  /bin/sh ../../../../libtool   --mode=install /usr/bin/install -c
> utime.la '/build/install/lib/glusterfs/4.2dev/xlator/features'
> libtool: install: warning: relinking `utime.la'
> libtool: install: (cd /build/scratch/xlators/features/utime/src; /bin/sh
> /build/scratch/libtool  --silent --tag CC --mode=relink gcc -Wall -g -O2 -g
> -rdynamic -O0 -DDEBUG -Wformat -Werror=format-security
> -Werror=implicit-function-declaration -Wall -Werror -Wno-cpp -module
> -avoid-version -export-symbols <
> https://build.gluster.org/job/regression-test-with-multiplex/ws/xlators/xlator.sym>
> -Wl,--no-undefined -o utime.la -rpath
> /build/install/lib/glusterfs/4.2dev/xlator/features utime-helpers.lo
> utime.lo utime-autogen-fops.lo ../../../../libglusterfs/src/
> libglusterfs.la -lrt -ldl -lpthread -lcrypto )
> libtool: install: /usr/bin/install -c .libs/utime.soT
> /build/install/lib/glusterfs/4.2dev/xlator/features/utime.so
> libtool: install: /usr/bin/install -c .libs/utime.lai
> /build/install/lib/glusterfs/4.2dev/xlator/features/utime.la
> libtool: finish:
> PATH="/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/bin:/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/bin:/usr/local/bin:/usr/bin:/build/install/sbin:/build/install/bin:/build/install/sbin:/build/install/bin:/sbin"
> ldconfig -n /build/install/lib/glusterfs/4.2dev/xlator/features
> --
> Libraries have been installed in:
>/build/install/lib/glusterfs/4.2dev/xlator/features
>
> If you ever happen to want to link against installed libraries
> in a given directory, LIBDIR, you must either use libtool, and
> specify the full pathname of the library, or use the `-LLIBDIR'
> flag during linking and do at least one of the following:
>- add LIBDIR to the `LD_LIBRARY_PATH' environment variable
>  during execution
>- add LIBDIR to the `LD_RUN_PATH' environment variable
>  during linking
>- use the `-Wl,-rpath -Wl,LIBDIR' linker flag
>- have your system administrator add LIBDIR to `/etc/ld.so.conf'
>
> See any operating system documentation about shared libraries for
> more information, such as the ld(1) and ld.so(8) manual pages.
> --
> make[5]: 

[Gluster-Maintainers] Clang-format change postmortem (12th Sept, 2018)

2018-09-12 Thread Amar Tumballi
People Involved

   - Nigel
   - Amar

<https://hackmd.io/_zKtoPXUT7mDAyV6OxzZRQ?both#Timeline-of-events-in-IST>Timeline
of events (in IST)

1725 - Nigel merges Amar’s patch with the .clang-format file
1727 - Nigel lands the .clang-format changes to master as gluster-ant
(smoke jobs pass at this point)
1746 - Amar notices that some files are missing in the clang patch.
1752 - Nigel lands a new patch with the missing files (all .c files)
1811 - Amar notices compilation issues after landing the .c changes because
it modifies files with the pattern -tmpl.c. Amar starts working on a fix.
1839 - Nigel notices that the Jenkins job for clang-format doesn’t fail
when it’s supposed to fail and goes to fix it.
1855 - Clang-format Jenkins job is now fixed.
1906 - Amar’s fixes are merged with manual votes for Smoke and CentOS
Regression from the Infra team. At this point, the builds were passing, but
we had voting issues.
<https://hackmd.io/_zKtoPXUT7mDAyV6OxzZRQ?both#What-Went-Wrong>What Went
Wrong

   - We staged the changes on Github on Sept 10th. Given the size of the
   changes, we missed that the command used to make the changes only caught
   .h files and not .c files. The following is the command in question (a
   corrected form is sketched after this list).
   find . -path ./contrib -prune -o -name '*.c' -or -name '*.h' -print |
   xargs clang-format -i
   - With the changes that we landed, we did run into build bugs 1
   <https://review.gluster.org/#/c/21130/>, 2
   <https://review.gluster.org/#/c/21128/> and fixed them. However, we did
   not verify that all the files were in fact modified or sync up on the find
   command.
   - We had a general framework of agreement on the steps to take, but we
   looked at it as a code change rather than an infrastructure change. There
   wasn’t a well-defined go/no-go checklist.
   - In the middle of this, we had a freebsd-builder enabled that made the
   smoke job for the final fix not vote. This needed a manual vote from the
   Infra team.
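
For reference, a short sketch of why only the .h files were caught, and a
corrected form of the command; this only illustrates find's operator
precedence and is not the exact command that was eventually used:

    # Implicit -a binds tighter than -o, so the original expression groups as
    #   ( -path ./contrib -prune ) -o ( -name '*.c' ) -o ( -name '*.h' -print )
    # and -print applies only to the '*.h' branch; '*.c' files are never listed.
    # Grouping the two -name tests makes -print apply to both:
    find . -path ./contrib -prune -o \( -name '*.c' -o -name '*.h' \) -print \
        | xargs clang-format -i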

<https://hackmd.io/_zKtoPXUT7mDAyV6OxzZRQ?both#What-Went-Well>What Went Well

   - We did reasonably good planning to find potential issues and, in fact,
   caught some of them.
   - Nigel and Amar were on hand and available to fix any issues that popped
   up.
   - The changes landed at the end of a working day for India the day
   before a public holiday. While there was impact, it was much less than a
   similar change performed at working hours.

<https://hackmd.io/_zKtoPXUT7mDAyV6OxzZRQ?both#Future-recommendations>Future
recommendations

   - Template files need to be recognized by the clang-format job correctly so
   that they are not checked for formatting, or the file names need to be
   changed so that they don’t end with .c or .h (see the sketch after this
   list).
   - In the future, high impact changes need a good process which has at
   least an acceptance criteria, go/no-go checklist, and a rollback procedure.
   - This work is currently incomplete and the bug
   <https://bugzilla.redhat.com/show_bug.cgi?id=1564149#c39> tracks the
   remaining action items.
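
A possible exclusion for the template sources, purely as an illustration (the
actual job script and file layout may differ):

    # Skip *-tmpl.c / *-tmpl.h template sources while formatting everything else:
    find . -path ./contrib -prune -o \( -name '*.c' -o -name '*.h' \) \
        ! -name '*-tmpl.*' -print | xargs clang-format -i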

-

Thanks Nigel for the postmortem report.

-- 
Amar Tumballi (amarts)
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Clang-Formatter for GlusterFS.

2018-09-12 Thread Amar Tumballi
Top posting:

All is well at the tip of glusterfs master branch now.

We will post a postmortem report of events and what went wrong in this
activity, later.

With this, Shyam, you can go ahead with release-v5.0 branching.

-Amar

On Wed, Sep 12, 2018 at 6:21 PM, Amar Tumballi  wrote:

>
>
> On Wed, Sep 12, 2018 at 5:36 PM, Amar Tumballi 
> wrote:
>
>>
>>
>> On Mon, Aug 27, 2018 at 8:47 AM, Amar Tumballi 
>> wrote:
>>
>>>
>>>
>>> On Wed, Aug 22, 2018 at 12:35 PM, Amar Tumballi 
>>> wrote:
>>>
>>>> Hi All,
>>>>
>>>> Below is an update about the project’s move towards using
>>>> clang-formatter for imposing few coding-standards.
>>>>
>>>> Gluster project, since inception followed certain basic coding
>>>> standard, which was (at that time) easy to follow, and easy to review.
>>>>
>>>> Over the time, with inclusion of many more developers and working with
>>>> other communities, as the coding standards are different across projects,
>>>> we got different type of code into source. After 11+years, now is the time
>>>> we should be depending on tool for it more than ever, and hence we have
>>>> decided to depend on clang-formatter for this.
>>>>
>>>> Below are some highlights of this activity. We expect each of you to
>>>> actively help us in this move, so it is smooth for all of us.
>>>>
>>>>- We kickstarted this activity sometime around April 2018
>>>><https://bugzilla.redhat.com/show_bug.cgi?id=1564149>
>>>>- There was a repo created for trying out the options, and
>>>>validating the code. Link to Repo
>>>><https://github.com/nigelbabu/clang-format-sample>
>>>>- Now, with the latest .clang-format file, we have made the whole
>>>>GlusterFS codebase changes. The change here
>>>><https://github.com/nigelbabu/glusterfs>
>>>>- We will be running regression with the changes, multiple times,
>>>>so we don’t want to miss something getting in without our notice.
>>>>- As it is a very big change (Almost 6 lakh lines changed), we will
>>>>not put this commit through gerrit, but directly pushing to the repo.
>>>>- Once this patch gets in (ETA: 28th August), all the pending
>>>>patches needs to go through rebase.
>>>>
>>>>
>>> All, as Shyam has proposed to change the branch out date for release-5.0
>>> as Sept 10th [1], we are now targeting Sept 7th for this activity.
>>>
>>>
>> We are finally Done!
>>
>> We delayed in by another 4 days to make sure we pass the regression
>> properly with clang changes, and it doesn't break anything.
>>
>> Also note, from now, it is always better to format the changes with below
>> command before committing.
>>
>>  sh$ cd glusterfs-git-repo/
>>  sh$ clang-format -i $(list_of_files_changed)
>>  sh$ git commit # and usual steps to publish your changes.
>>
>> Also note, all the changes which were present earlier, needs to be
>> rebased with clang-format too.
>>
>> One of the quick and dirty way to get your changes rebased in the case if
>> your patch is significantly large, is by applying the patches on top of the
>> commit before the clang-changes, and copy the files over, and run
>> clang-format -i on them, and checking the diff. As no code other coding
>> style changes happened, this should work fine.
>>
>> Please post if you have any concerns.
>>
>>
> Noticed some glitches! Stand with us till we handle the situation...
>
> meantime, found that below command for git am works better for applying
> smaller patches:
>
>  $ git am --ignore-whitespace --ignore-space-change --reject 0001-patch
>
> -Amar
>
>
>> Regards,
>> Amar
>>
>>
>>
>>> [1] - https://lists.gluster.org/pipermail/gluster-devel/2018-Augus
>>> t/055308.html
>>>
>>>
>>>> What are the next steps:
>>>>
>>>>- The patch <https://review.gluster.org/#/c/glusterfs/+/20892> of
>>>>adding .clang-format file will get in first
>>>>- Nigel/Infra team will be keeping the repo
>>>><https://github.com/nigelbabu/glusterfs> with all files changed
>>>>open for review till EOD 27th August, 2018
>>>>
>>>> This changes to 05th Sept, 2018
>>>
>>>
>>>>
>>>

Re: [Gluster-Maintainers] Clang-Formatter for GlusterFS.

2018-09-12 Thread Amar Tumballi
On Wed, Sep 12, 2018 at 5:36 PM, Amar Tumballi  wrote:

>
>
> On Mon, Aug 27, 2018 at 8:47 AM, Amar Tumballi 
> wrote:
>
>>
>>
>> On Wed, Aug 22, 2018 at 12:35 PM, Amar Tumballi 
>> wrote:
>>
>>> Hi All,
>>>
>>> Below is an update about the project’s move towards using
>>> clang-formatter for imposing few coding-standards.
>>>
>>> Gluster project, since inception followed certain basic coding standard,
>>> which was (at that time) easy to follow, and easy to review.
>>>
>>> Over the time, with inclusion of many more developers and working with
>>> other communities, as the coding standards are different across projects,
>>> we got different type of code into source. After 11+years, now is the time
>>> we should be depending on tool for it more than ever, and hence we have
>>> decided to depend on clang-formatter for this.
>>>
>>> Below are some highlights of this activity. We expect each of you to
>>> actively help us in this move, so it is smooth for all of us.
>>>
>>>- We kickstarted this activity sometime around April 2018
>>><https://bugzilla.redhat.com/show_bug.cgi?id=1564149>
>>>- There was a repo created for trying out the options, and
>>>validating the code. Link to Repo
>>><https://github.com/nigelbabu/clang-format-sample>
>>>- Now, with the latest .clang-format file, we have made the whole
>>>GlusterFS codebase changes. The change here
>>><https://github.com/nigelbabu/glusterfs>
>>>- We will be running regression with the changes, multiple times, so
>>>we don’t want to miss something getting in without our notice.
>>>- As it is a very big change (Almost 6 lakh lines changed), we will
>>>not put this commit through gerrit, but directly pushing to the repo.
>>>- Once this patch gets in (ETA: 28th August), all the pending
>>>patches needs to go through rebase.
>>>
>>>
>> All, as Shyam has proposed to change the branch out date for release-5.0
>> as Sept 10th [1], we are now targeting Sept 7th for this activity.
>>
>>
> We are finally Done!
>
> We delayed in by another 4 days to make sure we pass the regression
> properly with clang changes, and it doesn't break anything.
>
> Also note, from now, it is always better to format the changes with below
> command before committing.
>
>  sh$ cd glusterfs-git-repo/
>  sh$ clang-format -i $(list_of_files_changed)
>  sh$ git commit # and usual steps to publish your changes.
>
> Also note, all the changes which were present earlier, needs to be rebased
> with clang-format too.
>
> One of the quick and dirty way to get your changes rebased in the case if
> your patch is significantly large, is by applying the patches on top of the
> commit before the clang-changes, and copy the files over, and run
> clang-format -i on them, and checking the diff. As no code other coding
> style changes happened, this should work fine.
>
> Please post if you have any concerns.
>
>
Noticed some glitches! Stand with us till we handle the situation...

In the meantime, we found that the below git am command works better for
applying smaller patches:

 $ git am --ignore-whitespace --ignore-space-change --reject 0001-patch

-Amar


> Regards,
> Amar
>
>
>
>> [1] - https://lists.gluster.org/pipermail/gluster-devel/2018-Augus
>> t/055308.html
>>
>>
>>> What are the next steps:
>>>
>>>- The patch <https://review.gluster.org/#/c/glusterfs/+/20892> of
>>>adding .clang-format file will get in first
>>>- Nigel/Infra team will be keeping the repo
>>><https://github.com/nigelbabu/glusterfs> with all files changed open
>>>for review till EOD 27th August, 2018
>>>
>>> This changes to 05th Sept, 2018
>>
>>
>>>
>>>- Upon passing regression, we will push this one change to main
>>>branch.
>>>- After that, we will have a smoke job to validate the coding
>>>standard as per the .clang-format file, which will vote -1 if it is
>>>not meeting the standard.
>>>- There will be guidelines about how to setup your own .clang-format
>>>setup, so while sending the patch, it gets posted in proper format
>>>   - This will be provided for both ./rfc.sh and git review users.
>>>- Having clang-formatter installed would be still optional, but
>>>there would be high chance the smoke would fail if not formatted right.
>>

Re: [Gluster-Maintainers] Clang-Formatter for GlusterFS.

2018-09-12 Thread Amar Tumballi
On Mon, Aug 27, 2018 at 8:47 AM, Amar Tumballi  wrote:

>
>
> On Wed, Aug 22, 2018 at 12:35 PM, Amar Tumballi 
> wrote:
>
>> Hi All,
>>
>> Below is an update about the project’s move towards using clang-formatter
>> for imposing few coding-standards.
>>
>> Gluster project, since inception followed certain basic coding standard,
>> which was (at that time) easy to follow, and easy to review.
>>
>> Over the time, with inclusion of many more developers and working with
>> other communities, as the coding standards are different across projects,
>> we got different type of code into source. After 11+years, now is the time
>> we should be depending on tool for it more than ever, and hence we have
>> decided to depend on clang-formatter for this.
>>
>> Below are some highlights of this activity. We expect each of you to
>> actively help us in this move, so it is smooth for all of us.
>>
>>- We kickstarted this activity sometime around April 2018
>><https://bugzilla.redhat.com/show_bug.cgi?id=1564149>
>>- There was a repo created for trying out the options, and validating
>>the code. Link to Repo
>><https://github.com/nigelbabu/clang-format-sample>
>>- Now, with the latest .clang-format file, we have made the whole
>>GlusterFS codebase changes. The change here
>><https://github.com/nigelbabu/glusterfs>
>>- We will be running regression with the changes, multiple times, so
>>we don’t want to miss something getting in without our notice.
>>- As it is a very big change (Almost 6 lakh lines changed), we will
>>not put this commit through gerrit, but directly pushing to the repo.
>>- Once this patch gets in (ETA: 28th August), all the pending patches
>>needs to go through rebase.
>>
>>
> All, as Shyam has proposed to change the branch out date for release-5.0
> as Sept 10th [1], we are now targeting Sept 7th for this activity.
>
>
We are finally Done!

We delayed it by another 4 days to make sure the regression passes properly
with the clang changes and that nothing breaks.

Also note that, from now on, it is always better to format the changes with
the below commands before committing.

 sh$ cd glusterfs-git-repo/
 sh$ clang-format -i $(list_of_files_changed)
 sh$ git commit # and usual steps to publish your changes.

Also note that all the changes which were already posted need to be rebased
with clang-format too.

One quick and dirty way to get your changes rebased, in case your patch is
significantly large, is to apply the patches on top of the commit just before
the clang changes, copy the files over, run clang-format -i on them, and check
the diff (a rough sketch follows). As no changes other than coding-style
changes happened, this should work fine.
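
A rough sketch of the above, assuming a single-commit patch and the same
.clang-format file as master; the commit id, branch name and patch file are
placeholders:

 sh$ git checkout -b pre-clang <commit-just-before-the-clang-changes>
 sh$ git am 0001-my-large-change.patch      # the old-style patch applies cleanly here
 sh$ git checkout master -- .clang-format   # format with the same style file as master
 sh$ files=$(git diff --name-only HEAD~1 | grep -E '\.[ch]$')
 sh$ clang-format -i $files
 sh$ git diff master -- $files              # roughly your change, in the new style

The resulting diff is what you would then re-post through the normal review
workflow.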

Please post if you have any concerns.

Regards,
Amar



> [1] - https://lists.gluster.org/pipermail/gluster-devel/2018-
> August/055308.html
>
>
>> What are the next steps:
>>
>>- The patch <https://review.gluster.org/#/c/glusterfs/+/20892> of
>>adding .clang-format file will get in first
>>- Nigel/Infra team will be keeping the repo
>><https://github.com/nigelbabu/glusterfs> with all files changed open
>>for review till EOD 27th August, 2018
>>
>> This changes to 05th Sept, 2018
>
>
>>
>>- Upon passing regression, we will push this one change to main
>>branch.
>>- After that, we will have a smoke job to validate the coding
>>standard as per the .clang-format file, which will vote -1 if it is
>>not meeting the standard.
>>- There will be guidelines about how to setup your own .clang-format
>>setup, so while sending the patch, it gets posted in proper format
>>   - This will be provided for both ./rfc.sh and git review users.
>>- Having clang-formatter installed would be still optional, but there
>>would be high chance the smoke would fail if not formatted right.
>>
>> Any future changes to coding standard, due to improvements in
>> clang-format tool itself, or due to developers believing some other option
>> is better suited, can be getting in through gerrit.
>>
>> Also note that, we will not be applying the changes to contrib/ directory,
>> as that is expected to be same as corresponding upstream coding standard of
>> particular project. We believe that helps to make sure we can quickly check
>> the diff with corresponding changes really easily.
>>
>> Happy to hear any feedback!
>>
>> Regards,
>> Amar (on behalf of many Gluster Maintainers)
>>
>>
>>
>
>
> --
> Amar Tumballi (amarts)
>



-- 
Amar Tumballi (amarts)
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Build failed in Jenkins: experimental-periodic #441

2018-09-09 Thread Amar Tumballi
>>>>> ./tests/bugs/readdir-ahead/bug-1512437.t  -  3 second
>>>>> ./tests/bugs/readdir-ahead/bug-1446516.t  -  3 second
>>>>> ./tests/bugs/posix/disallow-gfid-volumeid-removexattr.t  -  3 second
>>>>> ./tests/bugs/nfs/bug-970070.t  -  3 second
>>>>> ./tests/bugs/nfs/bug-1302948.t  -  3 second
>>>>> ./tests/bugs/logging/bug-823081.t  -  3 second
>>>>> ./tests/bugs/glusterfs-server/bug-889996.t  -  3 second
>>>>> ./tests/bugs/glusterfs-server/bug-877992.t  -  3 second
>>>>> ./tests/bugs/glusterfs/bug-892730.t  -  3 second
>>>>> ./tests/bugs/glusterfs/bug-811493.t  -  3 second
>>>>> ./tests/bugs/fuse/bug-1336818.t  -  3 second
>>>>> ./tests/bugs/distribute/bug-924265.t  -  3 second
>>>>> ./tests/bugs/distribute/bug-1204140.t  -  3 second
>>>>> ./tests/bugs/core/log-bug-1362520.t  -  3 second
>>>>> ./tests/bugs/core/bug-924075.t  -  3 second
>>>>> ./tests/bugs/core/bug-903336.t  -  3 second
>>>>> ./tests/bugs/core/bug-845213.t  -  3 second
>>>>> ./tests/bugs/core/bug-1135514-allow-setxattr-with-null-value.t  -  3
>>>>> second
>>>>> ./tests/bugs/core/bug-1117951.t  -  3 second
>>>>> ./tests/bugs/cli/bug-983317-volume-get.t  -  3 second
>>>>> ./tests/bugs/cli/bug-921215.t  -  3 second
>>>>> ./tests/bugs/cli/bug-867252.t  -  3 second
>>>>> ./tests/bugs/cli/bug-764638.t  -  3 second
>>>>> ./tests/bitrot/bug-internal-xattrs-check-1243391.t  -  3 second
>>>>> ./tests/basic/md-cache/bug-1418249.t  -  3 second
>>>>> ./tests/basic/distribute/non-root-unlink-stale-linkto.t  -  3 second
>>>>> ./tests/basic/afr/arbiter-cli.t  -  3 second
>>>>> ./tests/bugs/glusterfs/bug-860297.t  -  2 second
>>>>> ./tests/bugs/glusterfs/bug-853690.t  -  2 second
>>>>> ./tests/bugs/core/bug-557.t  -  2 second
>>>>> ./tests/bugs/cli/bug-969193.t  -  2 second
>>>>> ./tests/bugs/cli/bug-949298.t  -  2 second
>>>>> ./tests/bugs/cli/bug-1378842-volume-get-all.t  -  2 second
>>>>> ./tests/bugs/cli/bug-1047378.t  -  2 second
>>>>> ./tests/basic/peer-parsing.t  -  2 second
>>>>> ./tests/basic/gfapi/sink.t  -  2 second
>>>>> ./tests/features/glupy.t  -  1 second
>>>>> ./tests/basic/posixonly.t  -  1 second
>>>>> ./tests/basic/netgroup_parsing.t  -  1 second
>>>>> ./tests/bugs/shard/zero-flag.t  -  0 second
>>>>> ./tests/basic/first-test.t  -  0 second
>>>>> ./tests/basic/exports_parsing.t  -  0 second
>>>>>
>>>>>
>>>>> 1 test(s) failed
>>>>> ./tests/bugs/replicate/bug-1586020-mark-dirty-for-entry-
>>>>> txn-on-quorum-failure.t
>>>>>
>>>>> 0 test(s) generated core
>>>>>
>>>>>
>>>>> 1 test(s) needed retry
>>>>> ./tests/bugs/replicate/bug-1586020-mark-dirty-for-entry-
>>>>> txn-on-quorum-failure.t
>>>>>
>>>>> Result is 1
>>>>>
>>>>> tar: Removing leading `/' from member names
>>>>> ssh: connect to host http.int.rht.gluster.org port 22: Connection
>>>>> timed out
>>>>> lost connection
>>>>> kernel.core_pattern = /%e-%p.core
>>>>> Build step 'Execute shell' marked build as failure
>>>>> ___
>>>>> maintainers mailing list
>>>>> maintainers@gluster.org
>>>>> https://lists.gluster.org/mailman/listinfo/maintainers
>>>>>
>>>> --
>>>> - Atin (atinm)
>>>>
>>> ___
>>> maintainers mailing list
>>> maintainers@gluster.org
>>> https://lists.gluster.org/mailman/listinfo/maintainers
>>>
>>
>>
>> --
>> nigelb
>>
>
> ___
> maintainers mailing list
> maintainers@gluster.org
> https://lists.gluster.org/mailman/listinfo/maintainers
>
>


-- 
Amar Tumballi (amarts)
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] Fedora-Smoke job ready for getting to 'vote'.

2018-08-27 Thread Amar Tumballi
https://build.gluster.org/job/fedora-smoke/ is now passing for submissions,
and is ready for voting (right now, it skips the vote if it fails).

This is important as the gcc8 warnings are captured by the job.

Thanks Ravishankar N for focusing on it, and getting these to pass.

Regards,
Amar
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] Clang-Formatter for GlusterFS.

2018-08-22 Thread Amar Tumballi
Hi All,

Below is an update on the project’s move towards using clang-format
for enforcing a few coding standards.

The Gluster project has, since inception, followed a certain basic coding
standard, which was (at that time) easy to follow and easy to review.

Over time, with the inclusion of many more developers and work with other
communities whose coding standards differ, different styles of code got into
the source. After 11+ years, now is the time to depend on a tool for this more
than ever, and hence we have decided to depend on clang-format.

Below are some highlights of this activity. We expect each of you to
actively help us in this move, so it is smooth for all of us.

   - We kickstarted this activity sometime around April 2018
   
   - There was a repo created for trying out the options, and validating
   the code. Link to Repo 
   - Now, with the latest .clang-format file, we have made the changes across
   the whole GlusterFS codebase. The change here
   
   - We will be running regression with the changes, multiple times, so we
   don’t want to miss something getting in without our notice.
   - As it is a very big change (almost 6 lakh, i.e. ~600,000, lines changed),
   we will not put this commit through gerrit, but push it directly to the
   repo.
   - Once this patch gets in (ETA: 28th August), all the pending patches
   need to go through a rebase.

What are the next steps:

   - The patch  of adding
   .clang-format file will get in first
   - Nigel/Infra team will be keeping the repo
    with all files changed open for
   review till EOD 27th August, 2018
   - Upon passing regression, we will push this one change to main branch.
   - After that, we will have a smoke job to validate the coding standard
   as per the .clang-format file, which will vote -1 if the patch does not
   meet the standard.
   - There will be guidelines on how to set up your own .clang-format
   configuration, so that patches get posted in the proper format (a short
   sketch follows this list).
      - This will be provided for both ./rfc.sh and git review users.
   - Having clang-format installed will still be optional, but there is a
   high chance that smoke will fail if the patch is not formatted right.
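
As an illustration only (the exact file patterns and commands here are
assumptions; the guidelines above will be the authoritative reference),
formatting just the files touched by your change before posting it could look
like this:

 sh$ git diff --name-only origin/master | grep -E '\.[ch]$' | grep -v '^contrib/' \
        | xargs -r clang-format -i
 sh$ git commit -a --amend --no-edit   # fold the formatting into your local commit
 sh$ ./rfc.sh                          # or git review, as usual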

Any future changes to the coding standard, due to improvements in the
clang-format tool itself, or due to developers believing some other option is
better suited, can get in through gerrit.

Also note that we will not be applying the changes to the contrib/ directory,
as that code is expected to stay the same as the corresponding upstream
project’s coding standard. We believe that helps us quickly diff against the
corresponding upstream changes.

Happy to hear any feedback!

Regards,
Amar (on behalf of many Gluster Maintainers)
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] Maintainer meeting minutes: 20th August, 2018

2018-08-20 Thread Amar Tumballi
BJ Link

   - Bridge: https://bluejeans.com/217609845
   - Download: https://bluejeans.com/s/IVExy

Attendance

   - Nithya, Amar, Nigel, Sunny, Ravi, Kaleb, kshlm, Raghavendra M,
   Raghavendra G, Shyam (Partial)
   - Calendar Declines: ppai

Agenda

   -

   Master lockdown, and other stability initiatives
   - Where do we stand?
 - [Amar] Everything looks good at this point. Not as many random
 failures as there used to be.
  - What needs further attention?
 - There are still some random failures on brick-mux regression.
  - Any metrics?
 - [Atin]
 
https://fstat.gluster.org/summary?start_date=2018-08-13_date=2018-08-20
  - tracks the failures reported since the master lockdown was revoked.
 - [Atin] Overall, things look much more stable after a bunch of
 test fixes being worked on during this lock down.
     - c7 nightly had reported green 5 out of 7 times; 1 run failed with
       some core
       (https://lists.gluster.org/pipermail/gluster-devel/2018-August/055298.html),
       and the 18th August run seems to have hit a network failure.
     - brick mux nightly had seen a couple of issues since last week: 1. a
       crash from tests/basic/ec/ec-5-2.t, and 2. a spurious failure from
       tests/bugs/replicate/bug-1586020-mark-dirty-for-entry-txn-on-quorum-failure.t
       (fix merged today)
     - line-cov nightly had failed once in
       tests/bugs/replicate/bug-1586020-mark-dirty-for-entry-txn-on-quorum-failure.t
       (fix merged today)
  -

   Outstanding review queue is quite long, need review attention.
   - Maintainers keep track of their lanes and how many patches are pending.
   -

   v5.0 branching
   - Can we get all the features completed by today??
     - What are the features we’re targeting for this release? Github
       doesn’t give a clear picture. This is a problem.
  - Will this release be just a stability release?
 - There is nothing wrong in calling it a stability release.
 - If there are no features, and as we are still working on
 coverity, clang and other code coverage initiatives, should
we delay the
 branching?
 - If we’re calling this a stability release, perhaps we should
 look into memory leaks that have been reported on the lists already.
  - What about clang-formatter? Planned email content here
  
 - Around branching point is the time to get this done.
 - The messaging around “personal preference” for style vs “project
 preference” needs to be clear.
 - [Nigel] Still need to see the content, and give the proper set
 of tasks.
     - 2 options:
        1. Big change where we can change all files, and make currently
           posted patches conflict.
        2. Or make the user change the files he touches with clang-format.
     - Precommit hook vs. server-side change.
 - problem with git review, or git push directly compared to
 `./rfc.sh`. We need a smoke job for sure.
 - Should we force everyone to install clang? Make it optional for
 user.
 - AI on Amar for giving a date for this activity completion, by
 EOD 21st August, 2018.
  - [Kaleb] what is the status of python3? (fedora 29(?) and rhel8 are
  looming.)
     - Seemingly, we’re not 100% python3 ready. There are bugs that need
       fixing.
 - Ship with Python3 and we’ll fix the bugs as we find them.
     - Let’s change all the shebangs to python3 (a rough sketch follows this
       list).
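
A rough sketch of what that shebang switch could look like (purely
illustrative; the actual change would go through review as its own patch, and
the grep/sed flags assume GNU tools):

 sh$ git grep -lE '^#!/usr/bin/(env )?python2?$' -- '*.py' \
        | xargs -r sed -i '1s|python2\?$|python3|'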
  -

   GCS:
   - Any updates?
 - CSI driver under review -
 https://github.com/gluster/gluster-csi-driver/pull/11
- We can land the patch and fix the license later too.
- Good to go ahead and merge, and then get the follow-up
patches in; it would move things faster.
- Kaushal to review Madhu’s changes on top of the PR, and if
things look OK, then we can merge the PR.
 - GD2 + Gluster nightly container image -
 https://hub.docker.com/r/gluster/glusterd2-nightly/
-

https://github.com/gluster/glusterd2/tree/master/extras/nightly-container
 - Build pipeline - in progress. Waiting on infra to have all the
 deps installed.
 - Deployment script in-progress
  -

   Mountpoint
   - Presentations ready?
  - All set for travel?
     - Some delays with visa arrival; a few maintainers who were supposed
       to travel from India will be confirming only by the end of the week.
  -

   Round Table
   - [Amar] Can we disable bd (block-device) translator from the
  build/code? No 

[Gluster-Maintainers] Maintainer's meeting agenda: 20th August, 2018

2018-08-19 Thread Amar Tumballi
BJ Link

   - Bridge: https://bluejeans.com/217609845
   - Download:

Attendance

   - 

Agenda

   -

   Master lockdown, and other stability initiatives
   - Where do we stand?
  - What needs further attention?
  - Any metrics?
   -

   Outstanding review queue is quite long, need review attention.
   -

   v5.0 branching
   - Can we get all the features completed by today??
  - Will this release be just a stability release?
  - If there are no features, and as we are still working on coverity,
  clang and other code coverage initiatives, should we delay the branching?
  - What about clang-formatter? Planned email content here
  
   -

   GCS:
   - Any updates?
   -

   Mountpoint
   - Presentations ready?
  - All set for travel?
   -

   Round Table
   - you get to talk here

---

Add your points at https://hackmd.io/yTC-un5XT6KUB9V37LG6OQ?both

-Amar
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] Maintainer's meeting Agenda: 6th August, 2018

2018-08-04 Thread Amar Tumballi
Meeting date: 08/06/2018 (August 06th, 2018), 18:30 IST, 13:00 UTC, 09:00
EDT <https://hackmd.io/yTC-un5XT6KUB9V37LG6OQ?both#BJ-Link>BJ Link

   - Bridge: https://bluejeans.com/217609845
   - Download:

<https://hackmd.io/yTC-un5XT6KUB9V37LG6OQ?both#Attendance>Attendance

   - 

<https://hackmd.io/yTC-un5XT6KUB9V37LG6OQ?both#Agenda>Agenda

   -

   AIs from previous meeting
   - None
   -

   Master lockdown
   - Tests are failing randomly. Immediate fixes are needed.
      - No more patches without getting all the tests to GREEN
         - i.e., only fixes that get the tests to pass will go through
           merging.
      - Use the proposal for deprecating a few features to take ‘priority’
        decisions on whether a test is really needed.
   -

   Focus on ‘stability’
   - No major features coming out of Red Hat engineering, and focus is on
  stability of the project.
  - Initially, gcc8, coverity, regression and brick-mux tests, along
  with line coverage.
  - Let Infra team know all your needs for stability through bugzilla,
  so it can be tracked.
   -

   GCS
   - Email sent, repo created.
  - All discussions on GCS to happen on
  https://github.com/gluster/gcs/issues
  - More details on how everyone can track progress in another week
 - AI:
  -

   Round Table
   - 
  -

Please edit https://hackmd.io/yTC-un5XT6KUB9V37LG6OQ?both with points you
want to add.


-- 
Amar Tumballi (amarts)
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-devel] Release 5: Master branch health report (Week of 30th July)

2018-08-02 Thread Amar Tumballi
t;>> core, not sure if related to brick mux or not, so not sure if brick mux is
>>>> culprit here or not. Ref - https://build.gluster.org/job/
>>>> regression-test-with-multiplex/806/console . Seems to be a glustershd
>>>> crash. Need help from AFR folks.
>>>>
>>>> 
>>>> 
>>>> =
>>>> Fails for non-brick mux case too
>>>> 
>>>> 
>>>> =
>>>> tests/bugs/distribute/bug-1122443.t 0 Seems to be failing at my setup
>>>> very often, with out brick mux as well. Refer
>>>> https://build.gluster.org/job/regression-test-burn-in/4050/consoleText
>>>> . There's an email in gluster-devel and a BZ 1610240 for the same.
>>>>
>>>> tests/bugs/bug-1368312.t - Seems to be recent failures (
>>>> https://build.gluster.org/job/regression-test-with-multiple
>>>> x/815/console) - seems to be a new failure, however seen this for a
>>>> non-brick-mux case too - https://build.gluster.org/job/
>>>> regression-test-burn-in/4039/consoleText . Need some eyes from AFR
>>>> folks.
>>>>
>>>> tests/00-geo-rep/georep-basic-dr-tarssh.t - this isn't specific to
>>>> brick mux, have seen this failing at multiple default regression runs.
>>>> Refer https://fstat.gluster.org/failure/392?state=2_date=
>>>> 2018-06-30_date=2018-07-31=all . We need help from geo-rep
>>>> dev to root cause this earlier than later
>>>>
>>>> tests/00-geo-rep/georep-basic-dr-rsync.t - this isn't specific to
>>>> brick mux, have seen this failing at multiple default regression runs.
>>>> Refer https://fstat.gluster.org/failure/393?state=2_date=
>>>> 2018-06-30_date=2018-07-31=all . We need help from geo-rep
>>>> dev to root cause this earlier than later
>>>>
>>>> tests/bugs/glusterd/validating-server-quorum.t (
>>>> https://build.gluster.org/job/regression-test-with-multiple
>>>> x/810/console) - Fails for non-brick-mux cases too,
>>>> https://fstat.gluster.org/failure/580?state=2_date=
>>>> 2018-06-30_date=2018-07-31=all .  Atin has a patch
>>>> https://review.gluster.org/20584 which resolves it but patch is
>>>> failing regression for a different test which is unrelated.
>>>>
>>>> tests/bugs/replicate/bug-1586020-mark-dirty-for-entry-txn-on-quorum-failure.t
>>>> (Ref - https://build.gluster.org/job/regression-test-with-multiplex
>>>> /809/console) - fails for non brick mux case too -
>>>> https://build.gluster.org/job/regression-test-burn-in/4049/consoleText
>>>> - Need some eyes from AFR folks.
>>>>
>>>
>
>
> --
> Thanks and Regards,
> Kotresh H R
>
> ___
> Gluster-devel mailing list
> gluster-de...@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
>



-- 
Amar Tumballi (amarts)
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-devel] Release 5: Master branch health report (Week of 23rd July)

2018-07-27 Thread Amar Tumballi
On Fri, Jul 27, 2018 at 7:36 PM, Shyam Ranganathan 
wrote:

> On 07/26/2018 12:53 AM, Nigel Babu wrote:
> > 3) bug-1432542-mpx-restart-crash.t times out consistently:
> > https://bugzilla.redhat.com/show_bug.cgi?id=1608568
> >
> > @nigel is there a way to on-demand request lcov tests through
> gerrit? I
> > am thinking of pushing a patch that increases the timeout and check
> if
> > it solves the problem for this test as detailed in the bug.
> >
> >
> > You should have access to trigger the job from Jenkins. Does that work
> > for now?
>
> Thanks Nigel.
>
> After fixing up the Jenkins job to run against a pending commit in
> gerrit and tweaking one more timeout value, this test has passed in lcov
> runs (see [1], still running but the first test that has passed was the
> failing test).
>
> @mohit/@sanju, this is a mux test, and increasing timeouts seem to do
> the trick, but I am not quite happy with the situation, can you take a
> look and see where the (extra) time is being spent and why?
>
>
Note that lcov tests are at least 1.5x slower than normal, as the
instrumentation records most of the code paths taken. If the timeout is very
close to the actual completion time, then there is a chance of hitting it.



> The other test has also passed in the nightly regressions, post the fix
> in sdfs. So with this we should get back to GREEN on line-coverage
> nightly runs.
>
> [1] line-coverage test run: https://build.gluster.org/job/
> line-coverage/401
> ___
> Gluster-devel mailing list
> gluster-de...@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
>
>
>


-- 
Amar Tumballi (amarts)
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Maintainer's Meeting on 23rd July, 2018: Meeting minutes

2018-07-26 Thread Amar Tumballi
BJ Link

   - Bridge: https://bluejeans.com/217609845
   - Download: https://bluejeans.com/s/qGMqd

<https://hackmd.io/yTC-un5XT6KUB9V37LG6OQ?both#Attendance>Attendance

   - Amar, Kaleb, Nigel, Ravi, Vijay, Shyam, Rafi, Nithya, Kaushal, Pranith

<https://hackmd.io/yTC-un5XT6KUB9V37LG6OQ?both#Agenda>Agenda

   -

   AI from previous meetings:
   - AI-1: Python3/Python2 discussion, take it to closure:
 - We now have agreed upon how it all looks.
 - Will be running tests on Fedora 28 for python3, and CentOS for
 Python2
 - AI: Shyam to update release notes - DONE
  - AI-2: Coding Standard - Clang format
 - All set to do it this week.
 - There is more work on coding standard agreement.
 - AI: Amar to send a proposal on going live
  -

   Documentation Hackathon <https://bit.ly/gluster-doc-hack>:
   - Review help needed: http://bit.ly/gluster-doc-hack-report
  - Help more, fix your component.
  - Hold another hackathon with more advance notice
   -

   Coding Standard:
   - Need to improve the existing coding standard
     <https://github.com/gluster/glusterfs/blob/master/doc/developer-guide/coding-standard.md>,
     and point to it clearly in the developer documentation.
  - For example, one point added here: https://review.gluster.org/20540
  - More suggestions welcome
   -

   Commit message:
   - While there is a standard we ‘try’ to follow, it is not enforced. Can
  we try to get it documented?
  - Sample here @ github
  
<https://github.com/gluster/glusterfs/blob/master/doc/developer-guide/coding-standard.md>.
  Review it in Gerrit <https://review.gluster.org/20541>.
  - Can we bring back the gerrit URL to commit messages? There is a wealth of
    information and review comments captured there, and going back to see the
    discussions later on is becoming a pain point.
     - One way to get that is by using notes in a local repo:
       https://gerrit.googlesource.com/plugins/reviewnotes/+/master/src/main/resources/Documentation/refs-notes-review.md
     - Also, in the repo, do git config notes.displayRef notes/ref after
       setting up the notes remote (a sketch follows below).
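
A minimal sketch of that setup; the ref name refs/notes/review is an
assumption taken from the linked reviewnotes documentation:

 sh$ git config --add remote.origin.fetch 'refs/notes/review:refs/notes/review'
 sh$ git config notes.displayRef refs/notes/review
 sh$ git fetch origin
 sh$ git log --notes=review -1   # review URL and scores show up under Notes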
  -

   Infrastructure:
   - Now the regression failure output comes in Gerrit itself, hence please
     check the reason for the failure before re-triggering regression.
   -

   Emails on Feature classification & Sunset of few features:
   - Have a look @ Proposal for deprecation
  <https://lists.gluster.org/pipermail/gluster-devel/2018-July/054990.html>
   and Classification of Features
  <https://lists.gluster.org/pipermail/gluster-devel/2018-July/054998.html>
   emails.
   - What happens to tests when we sunset components?
 - We will tag tests that map to components that we don’t support
 anymore.
 - We will no longer run tests that are not relevant to releases
 anymore.
  - What is the difference between sunset and deprecated?
     - We seem to be using them with the opposite meanings to how other
       projects use them.
 - Sunset - We will remove it in the future.
 - Deprecated - We are going to remove it in the next release
  - Add anything more missing into the emails. Or even your thoughts.
   -

   Mountpoint.IO <https://mountpoint.io/>
   - Who all are attending?
 - kshlm (visa dependent)
 - nigelb (depends on visa)
 - amarts (depends on visa)
 - gobinda Das (depends on visa)
  -

   Release *v5.0*
   - Have you tagged the feature you are working on, for next release?
 - Feature tagging and a post on devel list about proposed features
 would be awesome!
  -

   Status update from other projects?
   - GlusterD2
 - Focus on GCS
 - Automatic volume provisoning is in alpha state
 - Ongoing work on transacation framework, snapshots etc.
  - NFS-Ganesha
 - Upcoming Bakeathon in September
     - storhaug being integrated with gd2?
  -

   Round Table:
   - Kaleb: Coverity tool updated to 2018-06, 50 more defects observed
  - Possible move back to gerrit for gd2 reviews

-Amar


On Fri, Jul 20, 2018 at 5:22 PM, Amar Tumballi  wrote:

> BJ Link
>
>- Bridge: https://bluejeans.com/217609845
>- Download:
>
> <https://hackmd.io/yTC-un5XT6KUB9V37LG6OQ?both#Attendance>Attendance
>
>- 
>- 
>
> <https://hackmd.io/yTC-un5XT6KUB9V37LG6OQ?both#Agenda>Agenda
>
>-
>
>AI from previous meetings:
>- AI-1: Python3/Python2 discussion, take it to closure:
>  - We now have agreed upon how it all looks.
>  - Will be running tests on Fedora 28 for python3, and CentOS for
>  Python2
>   - AI-2: Coding Standard - Clang format
>  - All set to do it this week.
>  - There is more 

Re: [Gluster-Maintainers] [Gluster-devel] Release 5: Release targeted for first week of October-2018

2018-07-24 Thread Amar Tumballi
On Wed, Jul 25, 2018 at 12:42 AM, Shyam Ranganathan 
wrote:

> On 07/17/2018 11:31 AM, Shyam Ranganathan wrote:
> > Hi,
> >
> > Post release 4.1 we have announced a release version and cadence change
> > to the lists [1]. Based on this the next release of Gluster would be "5"
> > and is slated during the first week of October, 2018 [2].
> >
> > With this release of Gluster (i.e 5), 3.12 will be EOLd and that is the
> > last release in the 3.x line for Gluster.
> >
> > This mail is to solicit the following,
> >
> > Features/enhancements planned for Gluster 5 needs the following from
> > contributors:
> >   - Open/Use relevant issue
> >   - Mark issue with the "Release 5" milestone [3]
> >   - Post to the devel lists issue details, requesting addition to track
> > the same for the release
>
> Still awaiting features that are to be part of this release, please mark
> them appropriately and also notify the lists.
>
>
Proposal:

* Move thin-arbiter from tech-preview to supported.
  - https://github.com/gluster/glusterfs/issues/352
* Wireshark for RPC 4.0
  - https://github.com/gluster/glusterfs/issues/157
* Provide infra to classify features
  - https://github.com/gluster/glusterfs/issues/430
* Python 3 support:
  - https://github.com/gluster/glusterfs/issues/411

* DHT as Pass through:
  - https://github.com/gluster/glusterfs/issues/405


For TechPreview / Experimental : (Ie, code may come in late too)

* Reflink support:
  - https://github.com/gluster/glusterfs/issues/349
* locking using open (makes sense if users are exporting a file as a disk
image)
  - https://github.com/gluster/glusterfs/issues/466
* Interrupt handler in fuse
 - https://github.com/gluster/glusterfs/issues/465

For now, these are the things at the top of my head. Will respond with more
soon.

-Amar


> >
> > I will follow this mail up, within a week, with a calendar of activities
> > as usual.
>
> Calendar of activities look as follows:
>
> 1) master branch health checks (weekly, till branching)
>   - Expect every Monday a status update on various tests runs
>
> 2) Branching date: (Monday) Aug-20-2018 (~40 days before GA tagging)
>
> 3) Late feature back port closure: (Friday) Aug-24-2018 (1 week from
> branching)
>
> 4) Initial release notes readiness: (Monday) Aug-27-2018
>
> 5) RC0 build: (Monday) Aug-27-2018
>
> 
>
> 6) RC1 build: (Monday) Sep-17-2018
>
> 
>
> 7) GA tagging: (Monday) Oct-01-2018
>
> 
>
> 8) ~week later release announcement
>
> Go/no-go discussions per-phase will be discussed in the maintainers list.
>
> >
> > Thanks,
> > Shyam
> >
> > [1] Announce on release cadence and version changes:
> > https://lists.gluster.org/pipermail/announce/2018-July/000103.html
> >
> > [2] Release schedule: https://www.gluster.org/release-schedule/
> >
> > [3] Release milestone: https://github.com/gluster/glusterfs/milestone/7
> > ___
> > Gluster-devel mailing list
> > gluster-de...@gluster.org
> > https://lists.gluster.org/mailman/listinfo/gluster-devel
> >
> ___
> Gluster-devel mailing list
> gluster-de...@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
>



-- 
Amar Tumballi (amarts)
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-users] Proposal to mark few features as Deprecated / SunSet from Version 5.0

2018-07-23 Thread Amar Tumballi
On Mon, Jul 23, 2018 at 8:21 PM, Gudrun Mareike Amedick <
g.amed...@uni-luebeck.de> wrote:

> Hi,
>
> we're planning a dispersed volume with at least 50 project directories.
> Each of those has its own quota ranging between 0.1TB and 200TB. Comparing
> XFS
> project quotas over several servers and bricks to make sure their total
> matches the desired value doesn't really sound practical. It would probably
> be
> possible to create and maintain 50 volumes and more, but it doesn't seem
> to be a desirable solution. The quotas aren't fixed and resizing a volume is
> not as trivial as changing the quota.
>
> Quota was in the past and still is a very comfortable way to solve this.
>
> But what is the new recommended way for such a setting when the quota is
> going to be deprecated?
>
>
Thanks for the feedback. It helps us to prioritize. We will get back on this.

-Amar



> Kind regards
>
> Gudrun
> Am Donnerstag, den 19.07.2018, 12:26 +0530 schrieb Amar Tumballi:
> > Hi all,
> >
> > Over last 12 years of Gluster, we have developed many features, and
> continue to support most of it till now. But along the way, we have figured
> out
> > better methods of doing things. Also we are not actively maintaining
> some of these features.
> >
> > We are now thinking of cleaning up some of these ‘unsupported’ features,
> and mark them as ‘SunSet’ (i.e., would be totally taken out of codebase in
> > following releases) in next upcoming release, v5.0. The release notes
> will provide options for smoothly migrating to the supported configurations.
> >
> > If you are using any of these features, do let us know, so that we can
> help you with ‘migration’.. Also, we are happy to guide new developers to
> > work on those components which are not actively being maintained by
> current set of developers.
> >
> > List of features hitting sunset:
> >
> > ‘cluster/stripe’ translator:
> >
> > This translator was developed very early in the evolution of GlusterFS,
> and addressed one of the very common question of Distributed FS, which is
> > “What happens if one of my file is bigger than the available brick. Say,
> I have 2 TB hard drive, exported in glusterfs, my file is 3 TB”. While it
> > solved the purpose, it was very hard to handle failure scenarios, and
> give a real good experience to our users with this feature. Over the time,
> > Gluster solved the problem with it’s ‘Shard’ feature, which solves the
> problem in much better way, and provides much better solution with existing
> > well supported stack. Hence the proposal for Deprecation.
> >
> > If you are using this feature, then do write to us, as it needs a proper
> migration from existing volume to a new full supported volume type before
> > you upgrade.
> >
> > ‘storage/bd’ translator:
> >
> > This feature got into the code base 5 years back with this patch[1].
> Plan was to use a block device directly as a brick, which would help to
> handle
> > disk-image storage much easily in glusterfs.
> >
> > As the feature is not getting more contribution, and we are not seeing
> any user traction on this, would like to propose for Deprecation.
> >
> > If you are using the feature, plan to move to a supported gluster volume
> configuration, and have your setup ‘supported’ before upgrading to your new
> > gluster version.
> >
> > ‘RDMA’ transport support:
> >
> > Gluster started supporting RDMA while ib-verbs was still new, and very
> high-end infra around that time were using Infiniband. Engineers did work
> > with Mellanox, and got the technology into GlusterFS for better data
> migration, data copy. While current day kernels support very good speed with
> > IPoIB module itself, and there are no more bandwidth for experts in
> these area to maintain the feature, we recommend migrating over to TCP (IP
> > based) network for your volume.
> >
> > If you are successfully using RDMA transport, do get in touch with us to
> prioritize the migration plan for your volume. Plan is to work on this
> > after the release, so by version 6.0, we will have a cleaner transport
> code, which just needs to support one type.
> >
> > ‘Tiering’ feature
> >
> > Gluster’s tiering feature which was planned to be providing an option to
> keep your ‘hot’ data in different location than your cold data, so one can
> > get better performance. While we saw some users for the feature, it
> needs much more attention to be completely bug free. At the time, we are not
> > having any active maintainers for the feature, and hence suggesting to
> take it out of the ‘supported’ tag.
> >

[Gluster-Maintainers] Meeting on 23rd July, 2018: Agenda

2018-07-20 Thread Amar Tumballi
BJ Link

   - Bridge: https://bluejeans.com/217609845
   - Download:

<https://hackmd.io/yTC-un5XT6KUB9V37LG6OQ?both#Attendance>Attendance

   - 
   - 

<https://hackmd.io/yTC-un5XT6KUB9V37LG6OQ?both#Agenda>Agenda

   -

   AI from previous meetings:
   - AI-1: Python3/Python2 discussion, take it to closure:
 - We now have agreed upon how it all looks.
 - Will be running tests on Fedora 28 for python3, and CentOS for
 Python2
  - AI-2: Coding Standard - Clang format
 - All set to do it this week.
 - There is more work on coding standard agreement.
  -

   Documentation Hackathon <https://bit.ly/gluster-doc-hack>:
   - Review help needed: http://bit.ly/gluster-doc-hack-report
  - Help more, fix your component.
   -

   Coding Standard:
   - Need to improve the existing coding standard properly
  
<https://github.com/gluster/glusterfs/blob/master/doc/developer-guide/coding-standard.md>,
  and point it clearly in developer document.
  - For example, one point added here: https://review.gluster.org/20540
  - More suggestions welcome
   -

   Commit message:
   - While there is a standard we ‘try’ to follow, it is not enforced. Can
  we try to get it documented? (A strawman checker sketch is included at
  the end of this agenda.)
  - Sample here @ github
  
<https://github.com/gluster/glusterfs/blob/master/doc/developer-guide/coding-standard.md>.
  Review it in Gerrit <https://review.gluster.org/20541>.
  - Can we bring back the gerrit URL to commit messages? There is a
  wealth of information and review comments captured there and
going back to
  see the discussions later on is becoming a pain point.
   -

   Infrastructure:
   - Now the regression failure output comes in Gerrit itself, hence please
  check the reason for the failure before re-triggering the regression.
   -

   Emails on Feature classification & Sunset of few features:
   - Have a look @ Proposal for deprecation
  <https://lists.gluster.org/pipermail/gluster-devel/2018-July/054990.html>
   and Classification of Features
  <https://lists.gluster.org/pipermail/gluster-devel/2018-July/054998.html>
   emails.
  - Add anything more missing into the emails. Or even your thoughts.
   -

   Mountpoint.IO <https://mountpoint.io/>
   - Who all are attending?
   -

   Release *v5.0*
   - Have you tagged the feature you are working on, for next release?
   -

   Status update from other projects?
   - GlusterD2
   -

   Round Table:
   - 

-
If you have anything to add, https://hackmd.io/yTC-un5XT6KUB9V37LG6OQ?both#
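
For the commit-message item above, a strawman checker to make the
enforcement discussion concrete. The specific rules below (a 50-character
subject, a blank second line, a 'Fixes: #NNN' / 'Updates: #NNN' reference)
are only placeholders for whatever we end up documenting, not an agreed
standard.

#!/usr/bin/env python3
# Strawman commit-message checker. The rules below are illustrative
# placeholders, not the agreed GlusterFS standard.
import re
import sys


def check_commit_message(text):
    """Return a list of problems found in the commit message."""
    problems = []
    lines = text.splitlines()
    if not lines or not lines[0].strip():
        return ["empty commit message"]
    if len(lines[0]) > 50:
        problems.append("subject line longer than 50 characters")
    if len(lines) > 1 and lines[1].strip():
        problems.append("second line should be blank")
    if not re.search(r"^(Fixes|Updates): +#\d+", text, re.MULTILINE):
        problems.append("no 'Fixes: #NNN' or 'Updates: #NNN' reference")
    return problems


if __name__ == "__main__":
    # Example use as a commit-msg hook: pass the message file as argv[1].
    with open(sys.argv[1]) as msg_file:
        found = check_commit_message(msg_file.read())
    for problem in found:
        print("commit-msg: " + problem)
    sys.exit(1 if found else 0)

Something like this could sit behind a smoke check once the rules are
actually written down.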


-- 
Amar Tumballi (amarts)
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] Feature Classification & Quality Expectation.

2018-07-20 Thread Amar Tumballi
We sent an email a few days back proposing to deprecate some features in
glusterfs (
https://lists.gluster.org/pipermail/gluster-devel/2018-July/054997.html).

Shyam recently sent the document as a patch upstream Gluster @
https://review.gluster.org/20538/. (Same is copied below in email here).
Please provide your valuable feedback on the same, so we can make it a
general practice.

This is being done to set clearer expectations for each component/feature:
we want to classify each feature, control the classification from the code
itself (a tiny illustrative sketch is included at the end of this mail), and
keep the list up-to-date with each release, so our users have proper
expectations set.



The purpose of the document is to define a classification for various
xlators and expectations around what each classification means from a
perspective of health and maintenance of the xlator.

The need to do this is to ensure certain classifications are kept in good
health, and to help the community and contributors focus their efforts
around the same.
<https://hackmd.io/q_aEokbxSBSN-LdoaJ6Z6A#Classifications>Classifications

   1. Experimental (E)
   2. TechPreview (TP)
   3. Maintained/Supported (M)
   4. Sunset (S)
   5. Deprecated (D)

<https://hackmd.io/q_aEokbxSBSN-LdoaJ6Z6A#Experimental-E>Experimental (E)

Developed in the experimental branch, for exploring new features. These are
NEVER released, and MAY be packaged to help with getting feedback from
interested users.
<https://hackmd.io/q_aEokbxSBSN-LdoaJ6Z6A#Quality-expectations>Quality
expectations

   - Compiles
   - Does not break nightly experimental regressions

<https://hackmd.io/q_aEokbxSBSN-LdoaJ6Z6A#TechPreview-TP>TechPreview (TP)

Features in master or release branches that are not complete for general
purpose consumption, but are mature enough to invite feedback and host user
data.

These features will receive better attention from maintainers/authors that
are working on maturing the same, than ones in
Experimental/Sunset/Deprecated states.

There is no guarantee that these features will move to the Maintained state;
they may just get Deprecated based on feedback, other project goals, or
technical alternatives.
<https://hackmd.io/q_aEokbxSBSN-LdoaJ6Z6A#Quality-expectations1>Quality
expectations

   - Same as Maintained, sans
  - Performance, Scale, other(?)

<https://hackmd.io/q_aEokbxSBSN-LdoaJ6Z6A#Maintained-M>Maintained (M)

These features are part of the core Gluster functionality and are
maintained actively. These are part of master and release branches and get
high priority attention from maintainers and other interested contributors.
<https://hackmd.io/q_aEokbxSBSN-LdoaJ6Z6A#Quality-expectations2>Quality
expectations

NOTE: A short note on what each of these means is added here; details to
follow.

   - Bug backlog: Actively address bug backlog
   - Enhancement backlog: Actively maintain outstanding enhancement backlog
   (need not be acted on, but should be visible to all)
   - Review backlog: Actively keep this below desired counts and states
   - Static code health: Actively meet near-zero issues in this regard
  - Coverity, spellcheck and other checks
   - Runtime code health: Actively meet defined coverage levels in this
   regard
  - Coverage, others?
  - Per-patch regressions
  - Glusto runs
  - Performance
  - Scalability
   - Technical specifications: Implementation details should be documented
   and updated at regular cadence (even per patch that change assumptions in
   here)
   - User documentation: User facing details should be maintained to
   current status in the documentation
   - Debuggability: Steps, tools, procedures should be documented and
   maintained each release/patch as applicable
   - Troubleshooting: Steps, tools, procedures should be documented and
   maintained each release/patch as applicable
  - Steps/guides for self service
  - Knowledge base for problems
   - Other common criteria that will apply: Required metrics/desired states
   to be define per criteria
  - Monitoring, usability, statedump, and other such xlator expectations

<https://hackmd.io/q_aEokbxSBSN-LdoaJ6Z6A#Sunset-S>Sunset (S)

Features on master or release branches that would be deprecated and/or
replaced with similar or other functionality in the next major release.
<https://hackmd.io/q_aEokbxSBSN-LdoaJ6Z6A#Quality-expectations3>Quality
expectations

   - Retain status-quo when moved to this state, till it is moved to
   deprecated

<https://hackmd.io/q_aEokbxSBSN-LdoaJ6Z6A#Deprecated-D>Deprecated (D)

Features/code still in tree, but not packaged, shipped, or supported in
any form. This is noted as a category till the code is removed from the
tree.

Tests and code-health checks for these features will no longer be
executed.
<https://hackmd.io/q_aEokbxSBSN-LdoaJ6Z6A#Quality-expectations4>Quality
expectations

   - None
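
Before signing off, one more note on the 'control them using the code
itself' point from the top of this mail: a tiny, purely illustrative sketch
of a classification table plus a load-time gate. The class names follow the
list above; the example table entries are just the proposals from the
deprecation thread, not a real mapping.

#!/usr/bin/env python3
# Illustrative sketch only: a classification table kept next to the code,
# and a gate that refuses to load xlators that are not fully supported
# unless explicitly overridden. Not the real implementation.
from enum import Enum


class Classification(Enum):
    EXPERIMENTAL = "E"
    TECH_PREVIEW = "TP"
    MAINTAINED = "M"
    SUNSET = "S"
    DEPRECATED = "D"


# Example entries only; the real per-xlator list is whatever we agree on
# and keep up-to-date with each release.
XLATOR_CLASS = {
    "cluster/afr": Classification.MAINTAINED,
    "cluster/stripe": Classification.SUNSET,
    "storage/bd": Classification.DEPRECATED,
}


def check_xlator(name, allow_unsupported=False):
    """Return the classification, refusing Sunset/Deprecated by default."""
    cls = XLATOR_CLASS.get(name, Classification.EXPERIMENTAL)
    if cls in (Classification.SUNSET, Classification.DEPRECATED):
        if not allow_unsupported:
            raise RuntimeError("%s is %s; refusing to load it without an "
                               "explicit override" % (name, cls.name))
    return cls


if __name__ == "__main__":
    print(check_xlator("cluster/afr"))       # MAINTAINED
    print(check_xlator("storage/bd", True))  # DEPRECATED, allowed explicitly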



Regards,
Amar


On Thu, Jul 

Re: [Gluster-Maintainers] [Gluster-users] Proposal to mark few features as Deprecated / SunSet from Version 5.0

2018-07-20 Thread Amar Tumballi
On Thu, Jul 19, 2018 at 6:46 PM, mabi  wrote:

> Hi Amar,
>
> Just wanted to say that I think the quota feature in GlusterFS is really
> useful. In my case I use it on one volume where I have many cloud
> installations (mostly files) for different people and all these need to
> have a different quota set on a specific directory. The GlusterFS quota
> allows me nicely to manage that which would not be possible in the
> application directly. It would really be an overhead for me to for example
> to have one volume per installation just because of setting the max size
> like that.
>
> I hope that this feature can continue to exist.
>
>
Thanks for the feedback. We will consider this use-case.


> Best regards,
> M.
>
>
>
> ‐‐‐ Original Message ‐‐‐
> On July 19, 2018 8:56 AM, Amar Tumballi  wrote:
>
> Hi all,
>
> Over last 12 years of Gluster, we have developed many features, and
> continue to support most of it till now. But along the way, we have figured
> out better methods of doing things. Also we are not actively maintaining
> some of these features.
>
> We are now thinking of cleaning up some of these ‘unsupported’ features,
> and mark them as ‘SunSet’ (i.e., would be totally taken out of codebase in
> following releases) in next upcoming release, v5.0. The release notes
> will provide options for smoothly migrating to the supported configurations.
>
> If you are using any of these features, do let us know, so that we can
> help you with ‘migration’.. Also, we are happy to guide new developers to
> work on those components which are not actively being maintained by current
> set of developers.
> *List of features hitting sunset:*
> *‘cluster/stripe’ translator:*
>
> This translator was developed very early in the evolution of GlusterFS,
> and addressed one of the very common question of Distributed FS, which is
> “What happens if one of my file is bigger than the available brick. Say, I
> have 2 TB hard drive, exported in glusterfs, my file is 3 TB”. While it
> solved the purpose, it was very hard to handle failure scenarios, and give
> a real good experience to our users with this feature. Over the time,
> Gluster solved the problem with it’s ‘Shard’ feature, which solves the
> problem in much better way, and provides much better solution with existing
> well supported stack. Hence the proposal for Deprecation.
>
> If you are using this feature, then do write to us, as it needs a proper
> migration from existing volume to a new full supported volume type before
> you upgrade.
> *‘storage/bd’ translator:*
>
> This feature got into the code base 5 years back with this *patch*
> <http://review.gluster.org/4809>[1]. Plan was to use a block device
> directly as a brick, which would help to handle disk-image storage much
> easily in glusterfs.
>
> As the feature is not getting more contribution, and we are not seeing any
> user traction on this, would like to propose for Deprecation.
>
> If you are using the feature, plan to move to a supported gluster volume
> configuration, and have your setup ‘supported’ before upgrading to your new
> gluster version.
> *‘RDMA’ transport support:*
>
> Gluster started supporting RDMA while ib-verbs was still new, and very
> high-end infra around that time were using Infiniband. Engineers did work
> with Mellanox, and got the technology into GlusterFS for better data
> migration, data copy. While current day kernels support very good speed
> with IPoIB module itself, and there are no more bandwidth for experts in
> these area to maintain the feature, we recommend migrating over to TCP (IP
> based) network for your volume.
>
> If you are successfully using RDMA transport, do get in touch with us to
> prioritize the migration plan for your volume. Plan is to work on this
> after the release, so by version 6.0, we will have a cleaner transport
> code, which just needs to support one type.
> *‘Tiering’ feature*
>
> Gluster’s tiering feature which was planned to be providing an option to
> keep your ‘hot’ data in different location than your cold data, so one can
> get better performance. While we saw some users for the feature, it needs
> much more attention to be completely bug free. At the time, we are not
> having any active maintainers for the feature, and hence suggesting to take
> it out of the ‘supported’ tag.
>
> If you are willing to take it up, and maintain it, do let us know, and we
> are happy to assist you.
>
> If you are already using tiering feature, before upgrading, make sure to
> do gluster volume tier detach all the bricks before upgrading to next
> release. Also, we recommend you to use features like dmcache on your LVM
> setup to get best performance from bricks.

Re: [Gluster-Maintainers] [Gluster-users] Proposal to mark few features as Deprecated / SunSet from Version 5.0

2018-07-19 Thread Amar Tumballi
On Thu, Jul 19, 2018 at 6:06 PM, Jim Kinney  wrote:

> Too bad the RDMA will be abandoned. It's the perfect transport for
> intranode processing and data sync.
>
>


> I currently use RDMA on a computational cluster between nodes and gluster
> storage. The older IB cards will support 10G IP and 40G IB. I've had some
> success with connectivity but am still faltering with fuse performance. As
> soon as some retired gear is reconnected I'll have a test bed for HA NFS
> over RDMA to computational cluster and 10G IP to non-cluster systems.
>
> But it looks like Gluster 6 is a ways away so maybe I'll get more hardware
> or time to pitch in some code after groking enough IB.
>
>
We are happy to continue making releases with RDMA for some more time if
there are users. The "proposal" is to give enough of a heads-up that the
experts in that area do not have cycles to make any more enhancements to
the feature.



> Thanks for the heads up and all the work to date.
>

Glad to hear back from you! It makes us realize there are things which we
haven't touched in some time, but which people are still using.

Thanks,
Amar


>
> On July 19, 2018 2:56:35 AM EDT, Amar Tumballi 
> wrote:
>>
>>
>> *Hi all,Over last 12 years of Gluster, we have developed many features,
>> and continue to support most of it till now. But along the way, we have
>> figured out better methods of doing things. Also we are not actively
>> maintaining some of these features.We are now thinking of cleaning up some
>> of these ‘unsupported’ features, and mark them as ‘SunSet’ (i.e., would be
>> totally taken out of codebase in following releases) in next upcoming
>> release, v5.0. The release notes will provide options for smoothly
>> migrating to the supported configurations.If you are using any of these
>> features, do let us know, so that we can help you with ‘migration’.. Also,
>> we are happy to guide new developers to work on those components which are
>> not actively being maintained by current set of developers.List of features
>> hitting sunset:‘cluster/stripe’ translator:This translator was developed
>> very early in the evolution of GlusterFS, and addressed one of the very
>> common question of Distributed FS, which is “What happens if one of my file
>> is bigger than the available brick. Say, I have 2 TB hard drive, exported
>> in glusterfs, my file is 3 TB”. While it solved the purpose, it was very
>> hard to handle failure scenarios, and give a real good experience to our
>> users with this feature. Over the time, Gluster solved the problem with
>> it’s ‘Shard’ feature, which solves the problem in much better way, and
>> provides much better solution with existing well supported stack. Hence the
>> proposal for Deprecation.If you are using this feature, then do write to
>> us, as it needs a proper migration from existing volume to a new full
>> supported volume type before you upgrade.‘storage/bd’ translator:This
>> feature got into the code base 5 years back with this patch
>> <http://review.gluster.org/4809>[1]. Plan was to use a block device
>> directly as a brick, which would help to handle disk-image storage much
>> easily in glusterfs.As the feature is not getting more contribution, and we
>> are not seeing any user traction on this, would like to propose for
>> Deprecation.If you are using the feature, plan to move to a supported
>> gluster volume configuration, and have your setup ‘supported’ before
>> upgrading to your new gluster version.‘RDMA’ transport support:Gluster
>> started supporting RDMA while ib-verbs was still new, and very high-end
>> infra around that time were using Infiniband. Engineers did work with
>> Mellanox, and got the technology into GlusterFS for better data migration,
>> data copy. While current day kernels support very good speed with IPoIB
>> module itself, and there are no more bandwidth for experts in these area to
>> maintain the feature, we recommend migrating over to TCP (IP based) network
>> for your volume.If you are successfully using RDMA transport, do get in
>> touch with us to prioritize the migration plan for your volume. Plan is to
>> work on this after the release, so by version 6.0, we will have a cleaner
>> transport code, which just needs to support one type.‘Tiering’
>> featureGluster’s tiering feature which was planned to be providing an
>> option to keep your ‘hot’ data in different location than your cold data,
>> so one can get better performance. While we saw some users for the feature,
>> it needs much more attention to be completely bug free. At the time, we are
>> not having any active maintainers for the feature, and hence sugg

[Gluster-Maintainers] Proposal to mark few features as Deprecated / SunSet from Version 5.0

2018-07-19 Thread Amar Tumballi
Hi all,

Over last 12 years of Gluster, we have developed many features, and
continue to support most of it till now. But along the way, we have figured
out better methods of doing things. Also we are not actively maintaining
some of these features.

We are now thinking of cleaning up some of these ‘unsupported’ features,
and mark them as ‘SunSet’ (i.e., would be totally taken out of codebase in
following releases) in next upcoming release, v5.0. The release notes will
provide options for smoothly migrating to the supported configurations.

If you are using any of these features, do let us know, so that we can help
you with ‘migration’. Also, we are happy to guide new developers to work on
those components which are not actively being maintained by current set of
developers.

*List of features hitting sunset:*

*‘cluster/stripe’ translator:*

This translator was developed very early in the evolution of GlusterFS, and
addressed one of the very common question of Distributed FS, which is “What
happens if one of my file is bigger than the available brick. Say, I have 2
TB hard drive, exported in glusterfs, my file is 3 TB”. While it solved the
purpose, it was very hard to handle failure scenarios, and give a real good
experience to our users with this feature. Over the time, Gluster solved
the problem with it’s ‘Shard’ feature, which solves the problem in much
better way, and provides much better solution with existing well supported
stack. Hence the proposal for Deprecation.

If you are using this feature, then do write to us, as it needs a proper
migration from existing volume to a new full supported volume type before
you upgrade.

*‘storage/bd’ translator:*

This feature got into the code base 5 years back with this patch [1]. Plan
was to use a block device directly as a brick, which would help to handle
disk-image storage much easily in glusterfs.

As the feature is not getting more contribution, and we are not seeing any
user traction on this, would like to propose for Deprecation.

If you are using the feature, plan to move to a supported gluster volume
configuration, and have your setup ‘supported’ before upgrading to your new
gluster version.

*‘RDMA’ transport support:*

Gluster started supporting RDMA while ib-verbs was still new, and very
high-end infra around that time were using Infiniband. Engineers did work
with Mellanox, and got the technology into GlusterFS for better data
migration, data copy. While current day kernels support very good speed
with IPoIB module itself, and there are no more bandwidth for experts in
these area to maintain the feature, we recommend migrating over to TCP (IP
based) network for your volume.

If you are successfully using RDMA transport, do get in touch with us to
prioritize the migration plan for your volume. Plan is to work on this
after the release, so by version 6.0, we will have a cleaner transport
code, which just needs to support one type.

*‘Tiering’ feature*

Gluster’s tiering feature which was planned to be providing an option to
keep your ‘hot’ data in different location than your cold data, so one can
get better performance. While we saw some users for the feature, it needs
much more attention to be completely bug free. At the time, we are not
having any active maintainers for the feature, and hence suggesting to take
it out of the ‘supported’ tag.

If you are willing to take it up, and maintain it, do let us know, and we
are happy to assist you.

If you are already using tiering feature, before upgrading, make sure to do
gluster volume tier detach all the bricks before upgrading to next release.
Also, we recommend you to use features like dmcache on your LVM setup to
get best performance from bricks.

*‘Quota’*

This is a call out for ‘Quota’ feature, to let you all know that it will be
‘no new development’ state. While this feature is ‘actively’ in use by many
people, the challenges we have in accounting mechanisms involved, has made
it hard to achieve good performance with the feature. Also, the amount of
extended attribute get/set operations while using the feature is not very
ideal. Hence we recommend our users to move towards setting quota on
backend bricks directly (ie, XFS project quota), or to use different
volumes for different directories etc.

As the feature wouldn’t be deprecated immediately, the feature doesn’t need
a migration plan when you upgrade to newer version, but if you are a new
user, we wouldn’t recommend setting quota feature. By the release dates, we
will be publishing our best alternatives guide for gluster’s current quota
feature.

Note that if you want to contribute to the feature, we have project quota
based issue open [2]. Happy to get contributions, and help in getting a
newer approach to Quota.

--

These are our set of initial features which we propose to take out of
‘fully’ supported features. While we are in the process of making the
user/developer experience of the 

Re: [Gluster-Maintainers] Meeting date: 07/09/2018 (July 09th, 2018), 18:30 IST, 13:00 UTC, 09:00 EDT

2018-07-09 Thread Amar Tumballi
Meeting date: 07/09/2018 (July 09th, 2018), 18:30 IST, 13:00 UTC, 09:00 EDT
BJ Link

   - Bridge: https://bluejeans.com/217609845
   - Download: https://bluejeans.com/s/FC2Qi

Attendance

   - Sorry Note: Ravi, NigelBabu, ndevos,
   - Nithya, Xavi, Rafi, Susant, Raghavendra Bhat, Amar, Deepshika, Csaba,
   Atin, ppai, Sachi, Hari.

Agenda

   -

   Python3 migration:
   - Some confusion existed about Py2 support in RHEL etc. Should we bother?
 - [Amar] For upstream, focusing on just 1 version is good, and
 confusion less. Downstream projects can decide for themselves
about how to
 handle the project, when they ship it.
 - [ndevos] Maybe we need to decide which distributions/versions we
 want to offer to our users? Many run on CentOS-7 and there is
no reasonable
 way to include Python3 there. I assume other commonly used stable
 distributions (Debian, Ubuntu?) are also lacking Python3.
 - [Atin] Are we not supporting python2 at all? What are the
 patches intended to?
 - [ppai] Code is mostly py2 and py3 compatible. The issue is with
 the #! line, where we have to pick a specific python2 or python3;
 Fedora mandates it to be one or the other. (A small compatibility
 sketch is included after these minutes.)
 - [Atin] My vote would be to go for both py2 & py3 compatibility
 and figure out how to handle builds.
 - [Amar] Need guidelines about how to handle existing code vs
 reviewing new code into the repository.
 - [ppai] Many companies are moving towards new projects being
 python3-only, whereas supporting py2 for existing projects.
 - [AI] Amar to respond to Nigel’s email, and plan to take to
 completion soon.
  - If we go with only python3, what is the work pending?
  - What are the automated validation tests needed? Are we good there?
   -

   Infra: Update on where are we.
   - Distributed tests
 - [Deepshika] jobs are running, figuring out issues as we run
 tests.
 - Need to increase the disk storage.
  - Coding Standard as pre-commit hook (clang)
 - In progress, need to finalize the config file.
 - AI: all gets 1 week to finalize config.
  - shellcheck?
 - [Sac] shellcheck is good enough! Does check for unused variable
 etc.
  - Anything else?
   -

   Failing regression:
   -

  tests/bugs/core/bug-1432542-mpx-restart-crash.t
  - consistently failing now. Need to address ASAP. Even if it takes
 disabling it.
 - It is an important feature, but not default in the project yet,
 hence should it be blocking all the 10-15 patches now?
 - Mohit/Xavi’s patches seem to solve the issue; is it just fine
 for all the pending patches?
 - [Atin] We should try to invest some time to figure out why the
 cleanup is taking more time.
 - [Xavi] noticed that selinux is taking more time (semanage).
 Mostly because there are many volumes.
 - [Deepshika] I saw it takes lot of memory, is that a concern?
 - [Atin] It creates 120 volumes, so expected to take more than 1GB
 memory, easily.
 - [Nithya] Do we need so many volumes for regression?
 - [Atin] Looks like we can reduce it a bit.
  -

  tests/00-geo-rep/georep-basic-dr-rsync.t
   - Test itself passes, but generates a CORE. Doesn’t always happen.
 - Not a geo-rep issue. The crash is in cleanup path, in gf_msg()
 path.
  -

  Any other tests?
  -

   Round Table
   - 
  - [Atin] - Timing works great for me.
  - [Nithya] - Anyone triaging upstream BZs?
  -
  - Most likely not happening.
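
A note on the #! discussion above (the compatibility sketch referenced in
the minutes): the pattern being discussed looks like the file below. The
body runs unchanged on both interpreters; only the first line has to be
pinned per distribution (python3 on Fedora 28, python2 on CentOS 7). The
file itself is illustrative only.

#!/usr/bin/python3
# The body below runs on both Python 2 and Python 3; only the shebang above
# needs to be pinned to a specific interpreter per distribution packaging
# policy (e.g. /usr/bin/python3 on Fedora 28, /usr/bin/python2 on CentOS 7).
from __future__ import print_function, unicode_literals

import sys


def describe_interpreter():
    return "running on python %d.%d" % (sys.version_info[0],
                                        sys.version_info[1])


if __name__ == "__main__":
    # print() behaves the same on both because of the __future__ import.
    print(describe_interpreter())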


Regards,
Amar
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] Meeting date: 07/09/2018 (July 09th, 2018), 18:30 IST, 13:00 UTC, 09:00 EDT

2018-07-08 Thread Amar Tumballi
---
BJ Link

   - Bridge: https://bluejeans.com/217609845
   - Download:

<https://hackmd.io/yTC-un5XT6KUB9V37LG6OQ?both#Attendance>Attendance

   -

<https://hackmd.io/yTC-un5XT6KUB9V37LG6OQ?both#Agenda>Agenda

   -

   Python3 migration:
   - Some confusion existed about Py2 support in RHEL etc. Should we bother?
 - [Amar] For upstream, focusing on just 1 version is good, and
 confusion less. Downstream projects can decide for themselves
about how to
 handle the project, when they ship it.
  - If we go with only python3, what is the work pending?
  - What are the automated validation tests needed? Are we good there?
   -

   Infra: Update on where are we.
   - Distributed tests
  - Coding Standard as pre-commit hook (clang)
  - shellcheck?
  - Anything else?
   -

   Failing regression:
   -

  tests/bugs/core/bug-1432542-mpx-restart-crash.t
  - consistently failing now. Need to address ASAP. Even if it takes
 disabling it.
 - It is an important feature, but not default in the project yet,
 hence should it be blocking all the 10-15 patches now?
  -

  Any other tests?
  -

   Round Table
   - 

<https://hackmd.io/yTC-un5XT6KUB9V37LG6OQ?both#Decisions>Decision(s)?

   - 

---
Add your points @ https://hackmd.io/yTC-un5XT6KUB9V37LG6OQ?both


-- 
Amar Tumballi (amarts)
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Maintainer's meeting series

2018-07-02 Thread Amar Tumballi
On Mon, Jul 2, 2018 at 4:51 PM, Nigel Babu  wrote:

> This discussion sort of died, so I'm going to propose 1300 UTC and 1500
> UTC on Mondays. If you cannot make it to *both* those times, please chime
> in.
>
>
Thanks for getting this back to life. It was on my todo list to finalize.

If there are no other voices, I will pick 1300 UTC on Monday and schedule
events starting next week.
I don't have any concerns with 1500 UTC either. Would like to hear from
others.


> On Wed, Jun 20, 2018 at 12:09 AM Vijay Bellur  wrote:
>
>>
>>
>> On Tue, Jun 19, 2018 at 3:08 AM Nigel Babu  wrote:
>>
>>> I propose that we alternate times for every other meeting so that we can
>>> accommodate people across all timezones. We're never going to find one
>>> single timezone that works for everyone. The next best compromise that I've
>>> seen projects make is to have the edge timezones take a compromise every
>>> other meeting.
>>>
>>
>> +1. Other models that we can consider:
>>
>> - Choose a time slot that works for the majority of maintainers.
>> - Have two different meetings to accommodate various TZs.
>>
>> Thanks,
>> Vijay
>>
>>
>>
>>> On Tue, Jun 19, 2018 at 2:36 PM Amar Tumballi 
>>> wrote:
>>>
>>>> Hi All,
>>>>
>>>> On the fun side, it seems that other than 2 people, not many people
>>>> have noticed the end of recurring maintainer's meetings, on Wednesdays.
>>>>
>>>> Overall, there were 20+ maintainers meeting in last 1 year, and in
>>>> those meetings, we tried to keep all the discussion open (shared agenda
>>>> before for everyone to make a point, and shared meeting minutes with even
>>>> the BJ download link). This also helped us to take certain decisions which
>>>> otherwise would have taken long time to achieve, or even help with some
>>>> release related discussions, helping us to keep the release timelines sane.
>>>>
>>>> I propose to get the biweekly maintainer's meeting back to life, and
>>>> this time, considering some requests from previous thread, would like to
>>>> keep it on Monday 9AM EST (would recommend to keep it 9AM EDT as-well). Or
>>>> else Thursday 10AM EST ? I know it wouldn't be great time for many
>>>> maintainers in India, but considering we now have presence from US West
>>>> Coast to India... I guess these times are the one we can consider.
>>>>
>>>> Avoiding Tuesday/Wednesday slots mainly because major sponsor for
>>>> project, Red Hat's members would be busy with multiple meetings during that
>>>> time.
>>>>
>>>> Happy to hear the thoughts, and comments.
>>>>
>>>> Regards,
>>>> Amar
>>>> --
>>>> Amar Tumballi (amarts)
>>>> ___
>>>> maintainers mailing list
>>>> maintainers@gluster.org
>>>> http://lists.gluster.org/mailman/listinfo/maintainers
>>>>
>>>
>>>
>>> --
>>> nigelb
>>> ___
>>> maintainers mailing list
>>> maintainers@gluster.org
>>> http://lists.gluster.org/mailman/listinfo/maintainers
>>>
>>
>
> --
> nigelb
>



-- 
Amar Tumballi (amarts)
___
maintainers mailing list
maintainers@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Proposal (rfc): Upstream release versions and cadence

2018-07-02 Thread Amar Tumballi
On Mon, Jul 2, 2018 at 4:40 PM, Shyam Ranganathan 
wrote:

> On 06/25/2018 06:33 PM, Shyam Ranganathan wrote:
> > Comments, alternatives welcome, I will keep this open for a week and
> > announce this to the users and other lists post that.
>
> This will be announced to the Gluster users list tomorrow.
>
>
Please go ahead :-)

Regards,
Amar
___
maintainers mailing list
maintainers@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Jenkins build is back to normal : experimental-periodic #361

2018-06-21 Thread Amar Tumballi
Thanks for resolving this Shyam.

RIO is back in a building state again. I noticed that some of the patches
Kotresh sent were abandoned due to timeout, but I see he was working on the
ctime feature for upstream anyway.

Let me know if I should rebase to the latest master to get the ctime patches
merged properly; in the experimental branch, only the RIO code is extra to
start with.

-Amar


On Thu, Jun 21, 2018 at 11:46 PM,  wrote:

> See <https://build.gluster.org/job/experimental-periodic/361/
> display/redirect?page=changes>
>
>


-- 
Amar Tumballi (amarts)
___
maintainers mailing list
maintainers@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Resigning from the NFS maintainers team

2018-06-11 Thread Amar Tumballi
Thanks for the update, Niels!


On Wed, Jun 6, 2018 at 7:02 AM, Niels de Vos  wrote:

> Hi,
>
> My role at Red Hat changed a little and I do not have the time to look
> into gNFS anymore (NFS is not part of my current tasks). If needed, I am
> still around to assist, but the initial contacts should be the active
> maintainers and peers. (https://review.gluster.org/20173)
>
>
I will acknowledge the patch, and merge it if there are no concerns about
it in the next 24 hours.


> We currently have Jeff and Shreyas as maintainers, with Jiffin and
> Soumya as peers. There are very few changes happening in gNFS, and the
> component is disabled by default in Gluster 4.x. With the little work
> that is needed, I dont expect the team to miss me ;-)
>
> I will try to keep contributing to the components and any other Gluster
> things, of course!
>
>
Of course! The project will not be the same without your insights!

-Amar
___
maintainers mailing list
maintainers@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] Update: Gerrit review system has one more command now

2018-05-21 Thread Amar Tumballi
Hi all,

As a push towards more flexibility for our developers, and more options to
run tests without too much effort, we are adding more and more ways to
trigger tests from Gerrit during reviews.

One such example was the 'regression-on-demand-multiplex' test, where
anyone can ask for a brick-mux regression for a particular patch.

Similarly, when a developer's change impacts more than one test, there was
no easy way to run the whole regression suite other than sending a patchset
that modified 'run-tests.sh' to not stop on failures. This was tedious, and
is also not known to many new developers. Hence a new command has been
added to Gerrit: you can trigger all the runs (even if something is
failing) by entering *'run full regression'* on a single line at the top of
your review comment.

With this, a separate job will be triggered which runs the full regression
suite with the patch. So there is no more need to make 'run-tests.sh'
changes.

More on this at http://bugzilla.redhat.com/1564119
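
(If you prefer to leave the comment from a script rather than the web UI,
the standard Gerrit 'set review' REST endpoint can post the same keyword.
A minimal sketch is below; the authentication details -- your Gerrit
username, HTTP password, and the instance accepting basic auth on /a/ --
are assumptions about your own setup, not something this change adds.)

# Minimal sketch: post a 'run full regression' review comment through the
# Gerrit REST API. Auth details are assumptions about the local setup.
import json
import urllib.request

GERRIT = "https://review.gluster.org"


def post_review_comment(change_number, message, user, http_password):
    url = "%s/a/changes/%d/revisions/current/review" % (GERRIT, change_number)
    password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
    password_mgr.add_password(None, GERRIT, user, http_password)
    opener = urllib.request.build_opener(
        urllib.request.HTTPBasicAuthHandler(password_mgr))
    request = urllib.request.Request(
        url,
        data=json.dumps({"message": message}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    return opener.open(request).status


if __name__ == "__main__":
    # Example: trigger the full regression run on change 19734.
    print(post_review_comment(19734, "run full regression",
                              "myuser", "my-http-password"))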

Regards,
Amar

-- 
Amar Tumballi (amarts)
___
maintainers mailing list
maintainers@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Meeting minutes : May 2nd, 2018 Maintainers meeting.

2018-05-02 Thread Amar Tumballi
On Wed, May 2, 2018 at 8:27 PM, Shyamsundar Ranganathan <srang...@redhat.com
> wrote:

> Meeting date: 05/02/2018 (May 02nd, 2018), 19:30 IST, 14:00 UTC, 10:00 EDTBJ
> Link
>
>- Bridge: https://bluejeans.com/205933580
>- Download: 
>
> Download: https://bluejeans.com/s/fPavr


Attendance
>
>- Raghavendra M (Raghavendra Bhat), Kaleb, Atin, Amar, Nithya, Rafi,
>Shyam
>
> Agenda
>
>-
>
>Commitment (GPLv2 Cure)
>- Email
>   
> <http://lists.gluster.org/pipermail/gluster-devel/2018-April/054751.html>
>   and Patch <https://review.gluster.org/19902>
>   - [amarts] 20+ people already have done +1. Will wait another
>   15days before any action on this.
>   - AI: Send a reminder to the lists and get the changes merged
>   around next maintainers meeting [Amar]
>-
>
>GD2 testing upstream
>- Is there a glusterd v1 Vs v2 parity check matrix?
>  - Functional parity of the CLI
>   - As the cli format is not 100% compatible, how to proceed further
>   with regression tests without much fuss?
>  - [amarts] Easier option is to handle it similar to brick-mux
>  test. Create a new directory ‘tests2/’ which is copy of current 
> tests, and
>  files changed as per glusterd2/glustercli needs. We can do bulk 
> replace etc
>  etc… start small, make incremental progress. Run it once a day.
> - Add smoke check for core glusterfs to keep things working
> with GD2
> - Add GD2 tests into the said patch, to ensure functionality
> of GD2 itself
> - Approach ack: Shyam
> - Approach nack:
>  -
>
>Coding standards
>- Did we come to conclusion? What next?
>  - Need some more votes to take it forward
>  - Settle current conflicts to the settings
>   - [amarts] Need to see what should be ‘deadline’ for this. Ideal to
>   have before 4.1, or else backporting would be serious problem.
>   - AI: Reminder mail on release to get this closed [Shyam]
>   - Conversion work should be doable in 1/2 a day
>   - Per-patch format auto-verifier/correction job readiness
>  - Possibly not ready during roll-out
>  - Not looking at it as the blocker, as we can get it within a
>  week and sanitize patches that were merged in that week [Shyam]
>   -
>
>Branching for 4.1?
>- Today would be branching date [Shyam]
>  - No time to fold slipping features, as we are 2 weeks off
>  already!
>  - Branching is on 4th May, 2018, what makes it gets in, rest is
>  pushed to 4.2
>   - Leases?
>   - Ctime?
>   - Thin-Arbiter?
>   - ?
>-
>
>Round Robin:
>- [amarts] - Is the plan for version change (or not change) ready
>   after 4.1? or do we need to extend the period for this?
>  - AI: Send proposal to devel -> users and take action on getting
>  this done before 4.2
>  - Fold in xlator maturity states into the same release
>   - [kkeithley] - new (untriaged) upstream bugs list is getting
>   longer.
>  - Triage beyond assignment
>  - Tracking fixes and closure of the same
>  - AI: Shyam to work on this and next steps
>
> Decisions and Actions
>
>- AI (GPL cure): Send a reminder to the lists and get the changes
>merged around next maintainers meeting [Amar]
>- Decision: GD2 tests to create a patch of tests and work on it using
>nightly runs to get GD2 integrated
>- AI (format conversion): Reminder mail on release to get this closed
>[Shyam]
>- AI (format conversion): Conversion done Thursday, ready for merge
>Friday [Amar/Nigel]
>- Decision (4.1 branching): 4.1 branching date set at 4th May, no
>feature slips allowed beyond that [Shyam]
>- AI (release cadence and version numbers change): Send proposal to
>devel -> users and take action on getting this done before 4.2 [Shyam]
>- AI (Bugs triage cadence): Shyam to work on this and next steps
>[Shyam]
>
>
> ___
> maintainers mailing list
> maintainers@gluster.org
> http://lists.gluster.org/mailman/listinfo/maintainers
>
>


-- 
Amar Tumballi (amarts)
___
maintainers mailing list
maintainers@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] Agenda for maintainers' meeting (May 02nd, 2018)

2018-05-02 Thread Amar Tumballi
Meeting date: 05/02/2018 (May 02nd, 2018), 19:30 IST, 14:00 UTC, 10:00 EDT
<https://hackmd.io/yTC-un5XT6KUB9V37LG6OQ?both#BJ-Link>BJ Link

   - Bridge: https://bluejeans.com/205933580
   - Download:

<https://hackmd.io/yTC-un5XT6KUB9V37LG6OQ?both#Attendance>Attendance

   -

<https://hackmd.io/yTC-un5XT6KUB9V37LG6OQ?both#Agenda>Agenda

   -

   Commitment (GPLv2 Cure)
   -

   GD2 testing upstream
   - Is there a glusterd v1 Vs v2 parity check matrix?
  - As the cli format is not 100% compatible, how to proceed further
  with regression tests without much fuss?
   -

   Coding standards
   -

   Branching for 4.1?
   - Leases?
  - Ctime?
  - Thin-Arbiter?
   -

   Round Robin:



Add comments/ pointers at https://hackmd.io/yTC-un5XT6KUB9V37LG6OQ?both

-- 
Amar Tumballi (amarts)
___
maintainers mailing list
maintainers@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] More on 'SpecApproved' and 'DocApproved'.

2018-04-20 Thread Amar Tumballi
>
> - github flag enforcement for all features (doc and spec requirement)
> [amar]
>

I tried to add these details to the 'glusterdocs' repo [1], but noticed that
we don't talk much there about how GitHub is used for RFEs. So, to unblock
developers for the 4.1 branching, I am writing this email with the details
first; I will fix the other docs soon(ish). Any help here would be
appreciated, and rewarded with *Gluster Swags*.

The intention behind making these flags mandatory is explained in my
earlier email [2]. As this is now enforced, more details are below.

If either of the 'DocApproved' or 'SpecApproved' labels is missing, the
smoke test named 'comment-on-issue' [3] will keep failing whenever your
commit message has a reference to a GitHub issue.

*What is 'DocApproved' and how to get this?*

DocApproved means the information required for a user to use the feature
(all the options, CLI commands, how to set it up, etc.) is provided, along
with a brief note of what the feature is about, so the release lead can
just pick this information up and use it in the release process. (Note, the
idea is to automate this too, so the release lead's role is to control the
cherry-picking after the branch-out.)

A blog post covering all these points can also be considered for this.

*What is 'SpecApproved' and how to get this label?*

Spec (or Specification) deals with the design of the feature (mostly
similar to the gluster-spec repo we have). It should answer who needs it
and why (detail on the use case), and explain the 'how' part of the design
for developers.


*Who / How to set this label ?*

Today, anyone who is a member of the glusterfs project and has access to
set/unset labels can do it. But the advice is to leave it to the general
architects listed in the MAINTAINERS file, mainly because there may be a
few more questions pending on the spec. Also, as a top-level guideline to
the architects: please write a summary of why you are giving the label and
what you considered, without which there may be some conflict of interest
here.

For the initial few days/weeks, Shyam and I will closely monitor this label
business. Everyone is free to ask the developers more questions if the
design is not clear.

In future, like many other projects, we want to automate this, so that the
label is given after certain 'comment commands' (for example, three people
leaving an 'I approve' type of message on the issue would automatically get
it the label).
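
For reference, the check itself is conceptually small. The sketch below is
not the actual 'comment-on-issue' job, only an illustration of the idea; it
assumes anonymous read access to the public GitHub API and takes the issue
number referenced by the commit as an argument.

#!/usr/bin/env python3
# Illustration of the label-check idea behind 'comment-on-issue'; the real
# job lives in the build infrastructure, this only shows the concept.
import json
import sys
import urllib.request

REQUIRED = {"SpecApproved", "DocApproved"}


def issue_labels(issue_number, repo="gluster/glusterfs"):
    url = "https://api.github.com/repos/%s/issues/%d/labels" % (repo,
                                                                issue_number)
    with urllib.request.urlopen(url) as response:
        return {label["name"] for label in json.load(response)}


if __name__ == "__main__":
    # Example: check_labels.py 352  (an issue number from the commit message)
    missing = REQUIRED - issue_labels(int(sys.argv[1]))
    if missing:
        print("missing labels: %s -- smoke would fail"
              % ", ".join(sorted(missing)))
        sys.exit(1)
    print("both SpecApproved and DocApproved are present")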

*What if I have more questions or improvement suggestions?*
Please ask, file a bug, or write an email; there are many options.
Improvements come only when people make suggestions and provide feedback.

Regards,
Amar

[1] - https://github.com/gluster/glusterdocs
[2] -
http://lists.gluster.org/pipermail/gluster-devel/2018-April/054696.html
[3] - https://build.gluster.org/job/comment-on-issue/
___
maintainers mailing list
maintainers@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] Meeting minutes : April 18th Maintainers meeting.

2018-04-18 Thread Amar Tumballi
Meeting date: 04/18/2018 (April 18th, 2018), 19:30 IST, 14:00 UTC, 10:00 EDT
<https://hackmd.io/yTC-un5XT6KUB9V37LG6OQ?both#BJ-Link>BJ Link

   - Bridge: https://bluejeans.com/205933580
   - Download: https://bluejeans.com/s/qjebZ

<https://hackmd.io/yTC-un5XT6KUB9V37LG6OQ?both#Attendance>Attendance

   - Nigel
   - Hari
   - Nithya
   - Kotresh
   - Raghavendra M
   - Shyam
   - Milind
   - Sorry Note: ndevos (travelling), kkeithle, amarts

<https://hackmd.io/yTC-un5XT6KUB9V37LG6OQ?both#Agenda>Agenda

   -

   Format change proposal:
   - Check the bug <https://bugzilla.redhat.com/show_bug.cgi?id=1564149>
  - Provide feedbacks
 - We want to do a big bang format change with clang-format.
 - Which format should we start with as base?
 Google/LLVM/Mozilla/Webkit/Chromium
- Samples present in repo
<https://github.com/nigelbabu/clang-format-sample>. *NOTE*:
Samples generated with indent as 4 spaces.
- Google Style Guide
<https://google.github.io/styleguide/cppguide.html>
- LLVM Style Guide <http://llvm.org/docs/CodingStandards.html>
- Mozilla Style Guide

<https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Coding_Style>
- WebKit style guide <https://webkit.org/code-style-guidelines/>
- Chromium style guide

<https://chromium.googlesource.com/chromium/src/+/master/styleguide/c++/c++.md>
 - When do we want to make this change? Before the 4.1 branching
 seems like a good time to make vast changes.
 - AGREED to do this prior to 4.1
  -

   Gluster’s Adoption of *GPL cure period enforcement*
   - What is it?
  <https://www.redhat.com/en/blog/fostering-greater-open-source-development>
  - How to go about implementing it?
 - Intended patch here
 
<https://github.com/amarts/glusterfs/commit/91918bbe7afaafd8e5bcf4a163ed98ffb39c4d21>
 - The Commitment looks like this
 <https://github.com/amarts/glusterfs/blob/commitment/COMMITMENT>
  - Next steps are to announce this to the lists and implement the same
  into the repository (possibly this week or early next week)[AI: Amar]
   -

   Automation Update:
   - Run regressions with brick multiplex directly from Gerrit now with a
  keyword rather than hacky temporary review requests. Keyword is ‘run
  brick-mux regression’. Example: https://review.gluster.org/#/c/19734/
  - More automation moving to Python so we have the ability to write
  unit tests. If you are going to write a complicated shell script
as a test
  runner, please get approval from CI component maintainers.
  - We’ve been testing Facebook’s distributed test runner and have
  managed to get it working. Time for regression drops with every
new machine
  added to the pool. Targetting a few weeks to bring it to production.
 - Test infrastructure has some minor fixes from FB, so need a sync
 up meeting on the same [AI: Nigel]
 - We can factor in test run times to achieve better overall
 results, and keep workers equally busy
  - Github Label check is now enforced:
 - Need help from others to identify the needs for going to give
 the flag.
 - As ‘ndevos’ asked, we need to highlight this in Developer Guide
 and other places in documentation.
  - Can we fix the ‘gluster spec’ format and ask people to fill the
 github issues in that format? So that it becomes easier to
give the flags.
  -

   [Handled Already]Regression failures
   - trash.t and nfs-mount-auth.t are failing frequently
  - git bisect shows https://review.gluster.org/19837 as possible
  suspect
  - Need to resolve soon as some critical patches are failing regression
   -

   Release timelines:
   - Can we extend branching out by a week or two, to compensate for github
   flag enforcement?
 - Target GA date should remain same.
 - Things we want for 4.1
- clang formatting
- Github flag enforcment for all features
 - We branch out later, but still release at the same time.
 - No concerns during meeting. Shyam to announce.
 -
  -

   Round Table:
   - kkeithle - comment-on-issue test is (still) broken. E.g. see
  https://review.gluster.org/#/c/19871 Can we please get this fixed?
 - This is working as intended. Please see automation updates
  - kkeithle - For the record, I’m opposed to any kind of bulk reformat
  of the source. In my experience this just complicates porting changes
  between branches. And isn’t it also going to break git blame history?
  - Nithya - Can we split experimental xlators into it’s own package
 - AI: Get experimental xlators in a separate package.

[Gluster-Maintainers] Fwd: [Gluster-users] performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs

2018-04-18 Thread Amar Tumballi
FYI. This is a good example of the need for the 'DocApproved' and
'SpecApproved' flags. Let's get more serious about our feature docs, IMO.

-Amar


-- Forwarded message --
From: Artem Russakovskii 
Date: Wed, Apr 18, 2018 at 12:23 PM
Subject: Re: [Gluster-users] performance.cache-size for high-RAM
clients/servers, other tweaks for performance, and improvements to Gluster
docs


OK, thank you. I'll try that.

The reason I was confused about its status is these things in the doc:

How To Test
> TBD.
> Documentation
> TBD
> Status
> Design complete. Implementation done. The only thing pending is the
> compounding of two fops in shd code.



Sincerely,
Artem

--
Founder, Android Police , APK Mirror
, Illogical Robot LLC
beerpla.net | +ArtemRussakovskii
 | @ArtemR


On Tue, Apr 17, 2018 at 11:49 PM, Ravishankar N 
wrote:

>
>
> On 04/18/2018 11:59 AM, Artem Russakovskii wrote:
>
> Btw, I've now noticed at least 5 variations in toggling binary option
> values. Are they all interchangeable, or will using the wrong value not
> work in some cases?
>
> yes/no
> true/false
> True/False
> on/off
> enable/disable
>
> It's quite a confusing/inconsistent practice, especially given that many
> options will accept any value without erroring out/validation.
>
>
> All these options are okay.
>
>
>
> Sincerely,
> Artem
>
> --
> Founder, Android Police , APK Mirror
> , Illogical Robot LLC
> beerpla.net | +ArtemRussakovskii
>  | @ArtemR
> 
>
> On Tue, Apr 17, 2018 at 11:22 PM, Artem Russakovskii 
> wrote:
>
>> Thanks for the link. Looking at the status of that doc, it isn't quite
>> ready yet, and there's no mention of the option.
>>
>
> No, this is a completed feature available since 3.8 IIRC. You can use it
> safely. There is a difference in how to enable it though. Instead of using
> 'gluster volume set ...', you need to use 'gluster volume heal 
> granular-entry-heal enable' to turn it on. If there are no pending heals,
> it will run successfully. Otherwise you need to wait until heals are over
> (i.e. heal info shows zero entries). Just follow what the CLI says and you
> should be fine.
>
> -Ravi
>
>
>> Does it mean that whatever is ready now in 4.0.1 is incomplete but can be
>> enabled via granular-entry-heal=on, and when it is complete, it'll become
>> the default and the flag will simply go away?
>>
>> Is there any risk enabling the option now in 4.0.1?
>>
>>
>> Sincerely,
>>
>
___
maintainers mailing list
maintainers@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Agenda for Maintainer's meeting tomorrow (18th April).

2018-04-17 Thread Amar Tumballi
Please note that this meeting involves two big topics related to
developers/the community; please try to attend the meeting.

1. Coding style (which can be very personal for many developers, great to
agree and move forward on this).
2. GPL cure discussions
   - The link given explains most of it. But if people need further help
understanding the benefits/impact of this, we can arrange for a legal
presence to explain things out (Best effort, as it depends on their
availability)


On Tue, Apr 17, 2018 at 3:29 PM, Amar Tumballi <atumb...@redhat.com> wrote:

> Meeting date: 04/18/2018 (April 18th, 2018), 19:30 IST, 14:00 UTC, 10:00
> EDT <https://hackmd.io/yTC-un5XT6KUB9V37LG6OQ?both#BJ-Link>BJ Link
>
>- Bridge: https://bluejeans.com/205933580
>- Download:
>
> <https://hackmd.io/yTC-un5XT6KUB9V37LG6OQ?both#Attendance>Attendance
>
>-
>
> <https://hackmd.io/yTC-un5XT6KUB9V37LG6OQ?both#Agenda>Agenda
>
>-
>
>Format change proposal:
>- Check the bug <https://bugzilla.redhat.com/show_bug.cgi?id=1564149>
>   - Provide feedbacks
>  - We want to do a big bang format change with clang-format.
>  - Which format should we start with as base?
>  Google/LLVM/Mozilla/Webkit/Chromium
> - Samples present in repo
> <https://github.com/nigelbabu/clang-format-sample>. *NOTE*:
> Samples generated with indent as 4 spaces.
> - Google Style Guide
> <https://google.github.io/styleguide/cppguide.html>
> - LLVM Style Guide <http://llvm.org/docs/CodingStandards.html>
> - Mozilla Style Guide
> 
> <https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Coding_Style>
> - WebKit style guide
> <https://webkit.org/code-style-guidelines/>
> - Chromium style guide
> 
> <https://chromium.googlesource.com/chromium/src/+/master/styleguide/c++/c++.md>
>  - When do we want to make this change? Before the 4.1 branching
>  seems like a good time to make vast changes.
>
>
>
>-
>
>Gluster’s Adoption of *GPL cure period enforcement*
>- What is it?
>   
> <https://www.redhat.com/en/blog/fostering-greater-open-source-development>
>   - How to go about implementing it?
>  - Intended patch here
>  
> <https://github.com/amarts/glusterfs/commit/91918bbe7afaafd8e5bcf4a163ed98ffb39c4d21>
>  - The Commitment looks like this
>  <https://github.com/amarts/glusterfs/blob/commitment/COMMITMENT>
>
>
>
>-
>
>Automation Update:
>- Run regressions with brick multiplex directly from Gerrit now with a
>   keyword rather than hacky temporary review requests. Keyword is ‘run
>   brick-mux regression’. Example: https://review.gluster.org/#/c/19734/
>   - More automation moving to Python so we have the ability to write
>   unit tests. If you are going to write a complicated shell script as a 
> test
>   runner, please get approval from CI component maintainers.
>   - We’ve been testing Facebook’s distributed test runner and have
>   managed to get it working. The time for a regression run drops with every
>   new machine added to the pool. Targeting a few weeks to bring it to
>   production.
>   - Github Label check is now enforced:
>  - Need help from others to identify the needs for going to give
>  the flag.
>  - As ‘ndevos’ asked, we need to highlight this in Developer
>  Guide and other places in documentation.
>   - Can we fix the ‘gluster spec’ format and ask people to fill in the
>  github issues using that format, so that it becomes easier to grant the
>  flags?
>   -
>
>Regression failures
>- trash.t and nfs-mount-auth.t are failing frequently.
>   - git bisect shows https://review.gluster.org/19837 as a possible
>   suspect.
>   - Need to resolve soon as some critical patches are failing
>   regression.
>-
>
>Release timelines:
>- Can we extend branching out by a week or two, to compensate for the
>   github flag enforcement?
>  - Target GA date should remain same.
>   -
>
>Round Table:
>- [Name] Note
>
> <https://hackmd.io/yTC-un5XT6KUB9V37LG6OQ?both#Decisions>
> -
>
> Feel free to add more topics before the meeting @
> https://hackmd.io/yTC-un5XT6KUB9V37LG6OQ?both
>
>
>


-- 
Amar Tumballi (amarts)
___
maintainers mailing list
maintainers@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] Agenda for Maintainer's meeting tomorrow (18th April).

2018-04-17 Thread Amar Tumballi
Meeting date: 04/18/2018 (April 18th, 2018), 19:30 IST, 14:00 UTC, 10:00 EDT
BJ Link

   - Bridge: https://bluejeans.com/205933580
   - Download:

Attendance

   -

Agenda

   -

   Format change proposal:
   - Check the bug 
  - Provide feedback
 - We want to do a big bang format change with clang-format.
 - Which format should we start with as base?
 Google/LLVM/Mozilla/Webkit/Chromium
- Samples present in repo. *NOTE*: Samples generated with indent as 4 spaces.
- Google Style Guide
- LLVM Style Guide
- Mozilla Style Guide
- WebKit style guide
- Chromium style guide
 - When do we want to make this change? Before the 4.1 branching
 seems like a good time to make vast changes.



   -

   Gluster’s Adoption of *GPL cure period enforcement*
   - What is it?
  
  - How to go about implementing it?
 - Intended patch here
 

 - The Commitment looks like this
 



   -

   Automation Update:
   - Run regressions with brick multiplex directly from Gerrit now with a
  keyword rather than hacky temporary review requests. Keyword is ‘run
  brick-mux regression’. Example: https://review.gluster.org/#/c/19734/
  - More automation moving to Python so we have the ability to write
  unit tests. If you are going to write a complicated shell script as a test
  runner, please get approval from CI component maintainers.
  - We’ve been testing Facebook’s distributed test runner and have
  managed to get it working. The time for a regression run drops with every
  new machine added to the pool. Targeting a few weeks to bring it to
  production.
  - Github Label check is now enforced:
 - Need help from others to identify the criteria for granting the flag.
 - As ‘ndevos’ asked, we need to highlight this in Developer Guide
 and other places in documentation.
 - Can we fix the ‘gluster spec’ format and ask people to fill in the
 github issues using that format, so that it becomes easier to grant the
 flags?
  -

   Regression failures
   - trash.t and nfs-mount-auth.t are failing frequently.
  - git bisect shows https://review.gluster.org/19837 as a possible suspect
  (a rough bisect recipe is sketched at the end of this message).
  - Need to resolve soon as some critical patches are failing
  regression.
   -

   Release timelines:
   - Can we extend branching out by a week or two, to compensate for the github
  flag enforcement?
 - Target GA date should remain same.
  -

   Round Table:
   - [Name] Note


-

Feel free to add more topics before meeting @
https://hackmd.io/yTC-un5XT6KUB9V37LG6OQ?both
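
As referenced in the regression-failures item above, a rough bisect recipe for
such a failure could look like the sketch below; the good commit and the test
path are placeholders, and it assumes run-tests.sh accepts a single test file
(in practice each step would also need a rebuild):

    git bisect start
    git bisect bad HEAD
    git bisect good <last-known-good-commit>
    # let git drive the search; the runner's exit status marks commits good/bad
    git bisect run ./run-tests.sh tests/features/trash.t
    git bisect reset
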
___
maintainers mailing list
maintainers@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Proposal to make Design Spec and Document for a feature mandatory.

2018-04-13 Thread Amar Tumballi
All,

Thanks to Nigel, this is now deployed, and any new patches referencing
github (i.e., new features) need the 'DocApproved' and 'SpecApproved' labels.

Regards,
Amar

On Mon, Apr 2, 2018 at 10:40 AM, Amar Tumballi <atumb...@redhat.com> wrote:

> Hi all,
>
> Better documentation of a feature, and information about how to use it, is
> one of the major asks of the community when they want to use glusterfs, or
> want to contribute by helping with the features, bug fixes for features, etc.
>
> Finally, we have taken some baby steps to get that ask of having better
> design and documentation resolved. We had discussed this in our automation
> goals [1], to make a design spec and documentation mandatory for a
> feature patch. Now, thanks to Shyam and Nigel, we have the patch ready to
> automate this process [2].
>
> Feel free to review the patch, and comment on this.
>
> A heads up on how it looks after this patch gets in.
>
> * A patch for a github reference won't pass smoke unless these labels are
> present on the github issue.
> * Everyone, feel free to review and comment on the issue / patch
> regarding the document. But the label is expected to be provided only by the
> project's general architects, and any industry experts we as a community
> nominate for validating the feature. Initially, to make sure we have a valid
> process and that flags are not handed out too quickly, the expectation is
> that two people comment approving the flags, and then the label can be
> provided.
> * Some may argue that the rate of development could drop if we make this flag
> mandatory, but what is the use of having a feature without design and
> documentation on how to use it?
>
> For those who want to provide the Spec and Doc approved flags, there is a
> quick link [3] to see all the patches which fail smoke. Not all smoke
> failures would be due to missing Spec and Doc flags, but this is just a quick
> start.
>
> [1] - https://docs.google.com/document/d/1AFkZmRRDXRxs21GnGauieI
> yiIiRZ-nTEW8CPi7Gbp3g/edit
> [2] - https://github.com/gluster/glusterfs-patch-acceptance-tests/pull/126
> [3] - https://review.gluster.org/#/dashboard/?foreach=status:
> open%20project:glusterfs%20branch:master%20=Github%2520Validation&&
> Awaiting%2520Reviews=(label:Smoke=-1)
>
> We would like to implement this check soon, and are happy to accommodate the
> feedback and suggestions along the way.
>
> Regards,
> Amar
>
>


-- 
Amar Tumballi (amarts)
___
maintainers mailing list
maintainers@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] Proposal to make Design Spec and Document for a feature mandatory.

2018-04-01 Thread Amar Tumballi
Hi all,

Better documentation of a feature, and information about how to use it, is one
of the major asks of the community when they want to use glusterfs, or want to
contribute by helping with the features, bug fixes for features, etc.

Finally, we have taken some baby steps to get that ask of having better
design and documentation resolved. We had discussed this in our automation
goals [1], to make a design spec and documentation mandatory for a
feature patch. Now, thanks to Shyam and Nigel, we have the patch ready to
automate this process [2].

Feel free to review the patch, and comment on this.

A heads up on how it looks after this patch gets in.

* A patch for a github reference won't pass smoke unless these labels are
present on the github issue.
* Everyone, feel free to review and comment on the issue / patch regarding
the document. But the label is expected to be provided only by the project's
general architects, and any industry experts we as a community nominate for
validating the feature. Initially, to make sure we have a valid process and
that flags are not handed out too quickly, the expectation is that two people
comment approving the flags, and then the label can be provided.
* Some may argue that the rate of development could drop if we make this flag
mandatory, but what is the use of having a feature without design and
documentation on how to use it?
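
Purely as an illustration of what this smoke gate amounts to (the real check is
the patch-acceptance-tests change linked at [2]; the issue number below is a
placeholder), such a label check could look roughly like:

    # fetch the labels of the referenced github issue and require both flags
    ISSUE=1234
    LABELS=$(curl -s "https://api.github.com/repos/gluster/glusterfs/issues/$ISSUE" \
             | grep -o '"name": *"[^"]*"')
    echo "$LABELS" | grep -q '"SpecApproved"' || { echo "SpecApproved label missing"; exit 1; }
    echo "$LABELS" | grep -q '"DocApproved"' || { echo "DocApproved label missing"; exit 1; }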

For those who want to provide the Spec and Doc approved flags, there is a
quick link [3] to see all the patches which fail smoke. Not all smoke failures
would be due to missing Spec and Doc flags, but this is just a quick start.

[1] - https://docs.google.com/document/d/1AFkZmRRDXRxs21GnGauieIyiIiRZ-
nTEW8CPi7Gbp3g/edit
[2] - https://github.com/gluster/glusterfs-patch-acceptance-tests/pull/126
[3] -
https://review.gluster.org/#/dashboard/?foreach=status:open%20project:glusterfs%20branch:master%20=Github%2520Validation&%2520Reviews=(label:Smoke=-1)

We would like to implement this check soon, and are happy to accommodate the
feedback and suggestions along the way.

Regards,
Amar
___
maintainers mailing list
maintainers@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Release 4.1: LTM release targeted for end of May

2018-03-22 Thread Amar Tumballi
On Thu, Mar 22, 2018 at 11:34 PM, Shyam Ranganathan <srang...@redhat.com>
wrote:

> On 03/21/2018 04:12 AM, Amar Tumballi wrote:
> > Current 4.1 project release lane is empty! I cleaned it up, because I
> > want to hear from all as to what content to add, than add things
> marked
> > with the 4.1 milestone by default.
> >
> >
> > I would like to see us have sane default values for most of the options,
> > or have group options for many use-cases.
>
> Amar, do we have an issue that lists the use-cases and hence the default
> groups to be provided for the same?
>
>
Considering the group-options work is happening mostly in glusterd2, the
relevant issues are https://github.com/gluster/glusterd2/issues/614 &
https://github.com/gluster/glusterd2/issues/454


> >
> > Also want to propose that,  we include a release
> > of http://github.com/gluster/gluster-health-report with 4.1, and make
> > the project more usable.
>
> In the theme of including sub-projects that we want to highlight, what
> else should we tag a release for or highlight with 4.1?
>
> @Aravinda, how do you envision releasing this with 4.1? IOW, what
> interop tests and hence sanity can be ensured with 4.1 and how can we
> tag a release that is sane against 4.1?
>
> >
> > Also, we see that some of the patches from FB branch on namespace and
> > throttling are in, so we would like to call that feature out as
> > experimental by then.
>
> I would assume we track this against
> https://github.com/gluster/glusterfs/issues/408 would that be right?
>

Yes, that is right. Sorry for not including the github issues in the first
email.

-Amar
___
maintainers mailing list
maintainers@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] Maintainer's meeting minutes (21st March, 2018)

2018-03-22 Thread Amar Tumballi
Meeting date: 03/21/2018 (March 21st, 2018), 19:30 IST, 14:00 UTC, 09:00 EST
<https://hackmd.io/yTC-un5XT6KUB9V37LG6OQ?both#BJ-Link>BJ Link

   - Bridge: https://bluejeans.com/205933580
   - Download: https://bluejeans.com/s/huECj

<https://hackmd.io/yTC-un5XT6KUB9V37LG6OQ?both#Attendance>Attendance

   - Amar, Kaleb, PPai, Nithya, Sac, Deepshika, Shyam (joined late)

<https://hackmd.io/yTC-un5XT6KUB9V37LG6OQ?both#Agenda>Agenda

   -

   AI from previous meeting:
   - Email on version numbers: [Done]
  - Email on 4.1 features: [Done]
  - Email on bugzilla automation etc: [Done]
   -

   Any more features required, wanted in 4.1?
   - Question to FB team: are you fine with features being merged? Any more
  pending features, bug fixes from FB branch?
  - Question to Red Hat: are the features fine?
  - Question to community: Are there any concerns?
  - GD2: Can we update the community about the proposal for 4.1
 - [PPai] we can send the github link to the project to the community.
  -

   https://bugzilla.redhat.com/show_bug.cgi?id=1193929
   -

   If we agree on the option change, what would be a good next version number?
   - [Sac] Need more discipline to have calendar-based releases
  - [ppai] http://semver.org is the more common practice
   -

   Round Table
   - [kaleb] fyi, I’m NOT making progress with debian packaging of gd2.
  I’ve been told that Marcus (a nfs-ganesha/ceph dev) has Debian pkging
  skillz. When he returns from the Ceph dev conf in China I’ll see if I can
  get some of his time. Also my appeals for help from debian packagers and
  from Patrick Matthaie have gone unanswered. Debian packaging is already
  voodoo black magic; golang makes it even harder. ;-)
  - [amarts] auto-tunable options are a future requirement for us. Everyone,
  please consider figuring out how this would work for your components.
  - [shyam] I see a good response for features this time. The expectation is
  to meet what we promised. See the github projects page for what has been
  agreed for 4.1




-- 
Amar Tumballi (amarts)
___
maintainers mailing list
maintainers@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-devel] Proposal to change the version numbers of Gluster project

2018-03-21 Thread Amar Tumballi
>>> >>
>>>>> >>
>>>>> >> +1 to the overall release cadence change proposal and what Kaleb
>>>>> >> mentions here.
>>>>> >>
>>>>> >> Tying op-versions to release numbers seems like an easier approach
>>>>> >> than others & one to which we are accustomed to. What are the
>>>>> benefits
>>>>> >> of breaking this model?
>>>>> >>
>>>>> > There is a bit of confusion among the user base when a release
>>>>> happens
>>>>> > but the op-version doesn't have a commensurate bump. People ask why
>>>>> they
>>>>> > can't set the op-version to match the gluster release version they
>>>>> have
>>>>> > installed. If it was completely disconnected from the release
>>>>> version,
>>>>> > that might be a great enough mental disconnect that the expectation
>>>>> > could go away which would actually cause less confusion.
>>>>>
>>>>> Above is the reason I state it as well (the breaking of the mental
>>>>> model
>>>>> around this), why tie it together when it is not totally related. I
>>>>> also
>>>>> agree that, the notion is present that it is tied together and hence
>>>>> related, but it may serve us better to break it.
>>>>>
>>>>>
>>>>
>>>> I see your perspective. Another related reason for not introducing an
>>>> op-version bump in a new release would be that there are no incompatible
>>>> features introduced (in the new release). Hence it makes sense to preserve
>>>> the older op-version.
>>>>
>>>> To make everyone's lives simpler, would it be useful to introduce a
>>>> command that provides the max op-version to release number mapping? The
>>>> output of the command could look like:
>>>>
>>>> op-version X: 3.7.0 to 3.7.11
>>>> op-version Y: 3.7.12 to x.y.z
>>>>
>>>
>>> We already have introduced an option called cluster.max-op-version where
>>> one can run a command like "gluster v get all cluster.max-op-version" to
>>> determine what highest op-version the cluster can be bumped up to. IMO,
>>> this helps users avoid having to look up in the documentation which
>>> op-version X a given x.y.z release can be bumped up to.  Isn't that
>>> sufficient for this
>>> requirement?
>>>
>>
>>
>> I think it is a more elegant solution than what I described.  Do we have
>> a single interface to determine the current & max op-versions of all
>> members in the trusted storage pool? If not, it might be an useful
>> enhancement to add at some point in time.
>>
>
> We do have a way to get those details:
>
> root@a7f4b3e96fde:/home/glusterfs# gluster v get all all | grep op-version
> cluster.op-version  40100
>
> cluster.max-op-version  40100
>
>
>> If we don't hear much complaints about op-version mismatches from users,
>> I think the CLI you described could be sufficient for understanding the
>> cluster operating version.
>>
>>
>> Thanks,
>> Vijay
>>
>
>
> ___
> maintainers mailing list
> maintainers@gluster.org
> http://lists.gluster.org/mailman/listinfo/maintainers
>
>
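
For completeness, the op-version commands discussed above, plus the bump step
that usually follows an upgrade (the number below is only an illustrative
value):

    # highest op-version the current cluster can support
    gluster volume get all cluster.max-op-version
    # op-version the cluster is currently running at
    gluster volume get all cluster.op-version
    # bump it once all peers are upgraded
    gluster volume set all cluster.op-version 40100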


-- 
Amar Tumballi (amarts)
___
maintainers mailing list
maintainers@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Release 4.1: LTM release targeted for end of May

2018-03-21 Thread Amar Tumballi
On Tue, Mar 13, 2018 at 7:07 AM, Shyam Ranganathan <srang...@redhat.com>
wrote:

> Hi,
>
> As we wind down on 4.0 activities (waiting on docs to hit the site, and
> packages to be available in CentOS repositories before announcing the
> release), it is time to start preparing for the 4.1 release.
>
> 4.1 is where we have GD2 fully functional and shipping with migration
> tools to aid Glusterd to GlusterD2 migrations.
>
> Other than the above, this is a call out for features that are in the
> works for 4.1. Please *post* the github issues to the *devel lists* that
> you would like as a part of 4.1, and also mention the current state of
> development.
>
> Further, as we hit end of March, we would make it mandatory for features
> to have required spec and doc labels, before the code is merged, so
> factor in efforts for the same if not already done.
>
> Current 4.1 project release lane is empty! I cleaned it up, because I
> want to hear from all as to what content to add, than add things marked
> with the 4.1 milestone by default.
>
>
I would like to see us have sane default values for most of the options, or
have group options for many use-cases.
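
For context, group options already exist as profile files under
/var/lib/glusterd/groups/, so the ask here is mostly about better defaults and
more use-case profiles; applying one looks like this (the volume name is a
placeholder):

    # apply the shipped 'virt' profile, a set of options tuned for VM workloads
    gluster volume set myvol group virt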

Also, I want to propose that we include a release of
http://github.com/gluster/gluster-health-report with 4.1, and make the
project more usable.

Also, we see that some of the patches from FB branch on namespace and
throttling are in, so we would like to call that feature out as
experimental by then.

Regards,
Amar


> Thanks,
> Shyam
> P.S: Also any volunteers to shadow/participate/run 4.1 as a release owner?
> ___
> maintainers mailing list
> maintainers@gluster.org
> http://lists.gluster.org/mailman/listinfo/maintainers
>



-- 
Amar Tumballi (amarts)
___
maintainers mailing list
maintainers@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers

