Re: [Gluster-Maintainers] Release 4.1: LTM release targeted for end of May

2018-03-22 Thread Amar Tumballi
On Thu, Mar 22, 2018 at 11:34 PM, Shyam Ranganathan 
wrote:

> On 03/21/2018 04:12 AM, Amar Tumballi wrote:
> > Current 4.1 project release lane is empty! I cleaned it up, because I
> > want to hear from all as to what content to add, rather than add things
> > marked with the 4.1 milestone by default.
> >
> >
> > I would like to see us have sane default values for most of the options,
> > or have group options for many use-cases.
>
> Amar, do we have an issue that lists the use-cases and hence the default
> groups to be provided for the same?
>
>
Considering that the group-options task is mostly in glusterd2, the issues are at
https://github.com/gluster/glusterd2/issues/614 and
https://github.com/gluster/glusterd2/issues/454


> >
> > Also, I want to propose that we include a release
> > of http://github.com/gluster/gluster-health-report with 4.1, and make
> > the project more usable.
>
> In the theme of including sub-projects that we want to highlight, what
> else should we tag a release for or highlight with 4.1?
>
> @Aravinda, how do you envision releasing this with 4.1? IOW, what
> interop tests and hence sanity can be ensured with 4.1 and how can we
> tag a release that is sane against 4.1?
>
> >
> > Also, we see that some of the patches from FB branch on namespace and
> > throttling are in, so we would like to call that feature out as
> > experimental by then.
>
> I would assume we track this against
> https://github.com/gluster/glusterfs/issues/408 would that be right?
>

Yes, that is right. Sorry for missing the GitHub issues in the first
email.

-Amar
___
maintainers mailing list
maintainers@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [gluster-packaging] glusterfs-4.0.1 released

2018-03-22 Thread Niels de Vos
On Thu, Mar 22, 2018 at 04:15:38PM -0400, Kaleb S. KEITHLEY wrote:
> On 03/22/2018 02:19 PM, Shyam Ranganathan wrote:
> > On 03/21/2018 02:45 PM, Kaleb S. KEITHLEY wrote:
> >> * RHEL and CentOS el7 and el6 (el6 client-side only) in CentOS Storage
> >> SIG at [4].
> >>
> >> All the LATEST and STM-4.0 symlinks have been created or updated to
> >> point to the 4.0.1 release.
> >>
> >> Please test the CentOS packages and give feedback so that packages can
> >> be tagged for release.
> > 
> > Tested, works fine, good to go.
> 
> 4.0.1 has been tagged for release

Great job guys, thanks! Remember that the CentOS team normally does not
do their signing+pushing of packages on Fridays or weekends. We can
expect the repository to receive the updates on Monday.

Niels


[Gluster-Maintainers] Build failed in Jenkins: regression-test-burn-in #3926

2018-03-22 Thread jenkins
See 


Changes:

[R.Shyamsundar] rfc.sh: provide a unified way to update bugs or github issues ID

--
[...truncated 927.86 KB...]
./tests/bugs/distribute/bug-961615.t  -  9 second
./tests/bugs/distribute/bug-1086228.t  -  9 second
./tests/bugs/changelog/bug-1208470.t  -  9 second
./tests/bugs/access-control/bug-958691.t  -  9 second
./tests/basic/stats-dump.t  -  9 second
./tests/basic/quota-nfs.t  -  9 second
./tests/basic/quota_aux_mount.t  -  9 second
./tests/basic/pgfid-feat.t  -  9 second
./tests/basic/ios-dump.t  -  9 second
./tests/basic/inode-quota-enforcing.t  -  9 second
./tests/basic/glusterd/arbiter-volume-probe.t  -  9 second
./tests/basic/gfapi/mandatory-lock-optimal.t  -  9 second
./tests/basic/fop-sampling.t  -  9 second
./tests/basic/ec/ec-anonymous-fd.t  -  9 second
./tests/basic/afr/arbiter-statfs.t  -  9 second
./tests/performance/open-behind.t  -  8 second
./tests/gfid2path/get-gfid-to-path.t  -  8 second
./tests/bugs/upcall/bug-1458127.t  -  8 second
./tests/bugs/upcall/bug-1227204.t  -  8 second
./tests/bugs/snapshot/bug-1260848.t  -  8 second
./tests/bugs/shard/shard-inode-refcount-test.t  -  8 second
./tests/bugs/shard/bug-1488546.t  -  8 second
./tests/bugs/shard/bug-1468483.t  -  8 second
./tests/bugs/replicate/bug-1250170-fsync.t  -  8 second
./tests/bugs/quota/bug-1250582-volume-reset-should-not-remove-quota-quota-deem-statfs.t  -  8 second
./tests/bugs/quota/bug-1243798.t  -  8 second
./tests/bugs/glusterfs/bug-902610.t  -  8 second
./tests/bugs/glusterfs/bug-861015-log.t  -  8 second
./tests/bugs/glusterd/bug-949930.t  -  8 second
./tests/bugs/glusterd/bug-1242875-do-not-pass-volinfo-quota.t  -  8 second
./tests/bugs/fuse/bug-985074.t  -  8 second
./tests/bugs/distribute/bug-884597.t  -  8 second
./tests/bugs/core/bug-908146.t  -  8 second
./tests/bugs/bitrot/1209818-vol-info-show-scrub-process-properly.t  -  8 second
./tests/basic/volume-status.t  -  8 second
./tests/basic/posix/shared-statfs.t  -  8 second
./tests/basic/ec/ec-read-policy.t  -  8 second
./tests/basic/afr/heal-info.t  -  8 second
./tests/gfid2path/block-mount-access.t  -  7 second
./tests/features/ssl-authz.t  -  7 second
./tests/bugs/tier/bug-1205545-CTR-and-trash-integration.t  -  7 second
./tests/bugs/replicate/bug-1448804-check-quorum-type-values.t  -  7 second
./tests/bugs/replicate/bug-1365455.t  -  7 second
./tests/bugs/quota/bug-1104692.t  -  7 second
./tests/bugs/posix/bug-1360679.t  -  7 second
./tests/bugs/md-cache/bug-1211863.t  -  7 second
./tests/bugs/io-cache/bug-read-hang.t  -  7 second
./tests/bugs/io-cache/bug-858242.t  -  7 second
./tests/bugs/glusterd/bug-948729/bug-948729-force.t  -  7 second
./tests/bugs/gfapi/bug-1447266/1460514.t  -  7 second
./tests/bugs/fuse/bug-963678.t  -  7 second
./tests/bugs/ec/bug-1227869.t  -  7 second
./tests/bugs/distribute/bug-882278.t  -  7 second
./tests/bugs/distribute/bug-1368012.t  -  7 second
./tests/bugs/core/bug-986429.t  -  7 second
./tests/bugs/core/bug-949242.t  -  7 second
./tests/bugs/cli/bug-1087487.t  -  7 second
./tests/bugs/bug-1371806_2.t  -  7 second
./tests/bugs/bug-1258069.t  -  7 second
./tests/bugs/bitrot/1207029-bitrot-daemon-should-start-on-valid-node.t  -  7 second
./tests/bitrot/br-stub.t  -  7 second
./tests/basic/tier/ctr-rename-overwrite.t  -  7 second
./tests/basic/gfapi/glfs_xreaddirplus_r.t  -  7 second
./tests/basic/afr/gfid-heal.t  -  7 second
./tests/basic/afr/arbiter-remove-brick.t  -  7 second
./tests/features/lock-migration/lkmigration-set-option.t  -  6 second
./tests/bugs/upcall/bug-upcall-stat.t  -  6 second
./tests/bugs/upcall/bug-1369430.t  -  6 second
./tests/bugs/snapshot/bug-1178079.t  -  6 second
./tests/bugs/snapshot/bug-1064768.t  -  6 second
./tests/bugs/shard/bug-1258334.t  -  6 second
./tests/bugs/shard/bug-1256580.t  -  6 second
./tests/bugs/replicate/bug-767585-gfid.t  -  6 second
./tests/bugs/replicate/bug-1101647.t  -  6 second
./tests/bugs/quota/bug-1287996.t  -  6 second
./tests/bugs/nfs/subdir-trailing-slash.t  -  6 second
./tests/bugs/nfs/bug-915280.t  -  6 second
./tests/bugs/nfs/bug-1143880-fix-gNFSd-auth-crash.t  -  6 second
./tests/bugs/nfs/bug-1116503.t  -  6 second
./tests/bugs/md-cache/bug-1211863_unlink.t  -  6 second
./tests/bugs/glusterfs/bug-893378.t  -  6 second
./tests/bugs/glusterd/bug-948729/bug-948729.t  -  6 second
./tests/bugs/glusterd/bug-1482906-peer-file-blank-line.t  -  6 second
./tests/bugs/distribute/bug-912564.t  -  6 second
./tests/bugs/distribute/bug-1088231.t  -  6 second
./tests/bugs/cli/bug-1022905.t  -  6 second
./tests/bugs/bitrot/bug-1229134-bitd-not-support-vol-set.t  -  6 second
./tests/bitrot/bug-1221914.t  -  6 second
./tests/basic/gfapi/glfd-lkowner.t  -  6 second
./tests/basic/gfapi/gfapi-dup.t  -  6 second
./tests/basic/gfapi/bug-1241104.t  -  6 second
./tests/basic/gfapi/anonymous_fd.t  -  6 second
./tests/basic/ec/nfs.t  -  6 second
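
The per-test timing listing above can be summarized mechanically, for example to spot the slowest tests in a run. A minimal sketch in Python, assuming only the `<test>.t  -  <N> second` line format shown in the log (the helper name is illustrative, not part of any Gluster tooling):

```python
import re

def slowest_tests(log_text, top=3):
    """Parse lines of the form './tests/foo.t  -  N second' and
    return the top-N slowest tests as (name, seconds) pairs."""
    pattern = re.compile(r"^(\./tests/\S+\.t)\s+-\s+(\d+) second", re.M)
    results = [(name, int(sec)) for name, sec in pattern.findall(log_text)]
    return sorted(results, key=lambda r: -r[1])[:top]

# A few lines copied from the log above, as sample input.
sample = """
./tests/basic/quota-nfs.t  -  9 second
./tests/performance/open-behind.t  -  8 second
./tests/basic/ec/nfs.t  -  6 second
"""
print(slowest_tests(sample, top=2))
# -> [('./tests/basic/quota-nfs.t', 9), ('./tests/performance/open-behind.t', 8)]
```

The same pipeline works on a full truncated log, since lines that do not match the pattern are simply ignored.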

Re: [Gluster-Maintainers] [gluster-packaging] glusterfs-4.0.1 released

2018-03-22 Thread Kaleb S. KEITHLEY
On 03/22/2018 02:19 PM, Shyam Ranganathan wrote:
> On 03/21/2018 02:45 PM, Kaleb S. KEITHLEY wrote:
>> * RHEL and CentOS el7 and el6 (el6 client-side only) in CentOS Storage
>> SIG at [4].
>>
>> All the LATEST and STM-4.0 symlinks have been created or updated to
>> point to the 4.0.1 release.
>>
>> Please test the CentOS packages and give feedback so that packages can
>> be tagged for release.
> 
> Tested, works fine, good to go.

4.0.1 has been tagged for release

--

Kaleb



[Gluster-Maintainers] Release 4.0.2: Planned for the 20th of Apr, 2018

2018-03-22 Thread Shyam Ranganathan
Hi,

As release 4.0.1 is (to be) announced, here are the needed details
for 4.0.2

Release date: 20th Apr, 2018
Tracker bug for blockers:
https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-4.0.2

Shyam


Re: [Gluster-Maintainers] [gluster-packaging] glusterfs-4.0.1 released

2018-03-22 Thread Shyam Ranganathan
On 03/21/2018 02:45 PM, Kaleb S. KEITHLEY wrote:
> * RHEL and CentOS el7 and el6 (el6 client-side only) in CentOS Storage
> SIG at [4].
> 
> All the LATEST and STM-4.0 symlinks have been created or updated to
> point to the 4.0.1 release.
> 
> Please test the CentOS packages and give feedback so that packages can
> be tagged for release.

Tested, works fine, good to go.


Re: [Gluster-Maintainers] Release 4.1: LTM release targeted for end of May

2018-03-22 Thread Shyam Ranganathan
On 03/21/2018 04:12 AM, Amar Tumballi wrote:
> Current 4.1 project release lane is empty! I cleaned it up, because I
> want to hear from all as to what content to add, rather than add things marked
> with the 4.1 milestone by default.
> 
> 
> I would like to see us have sane default values for most of the options,
> or have group options for many use-cases.

Amar, do we have an issue that lists the use-cases and hence the default
groups to be provided for the same?

> 
> Also, I want to propose that we include a release
> of http://github.com/gluster/gluster-health-report with 4.1, and make
> the project more usable.

In the theme of including sub-projects that we want to highlight, what
else should we tag a release for or highlight with 4.1?

@Aravinda, how do you envision releasing this with 4.1? IOW, what
interop tests and hence sanity can be ensured with 4.1 and how can we
tag a release that is sane against 4.1?

> 
> Also, we see that some of the patches from FB branch on namespace and
> throttling are in, so we would like to call that feature out as
> experimental by then.

I would assume we track this against
https://github.com/gluster/glusterfs/issues/408 would that be right?


[Gluster-Maintainers] Build failed in Jenkins: experimental-periodic #268

2018-03-22 Thread jenkins
See 

--
[...truncated 982.41 KB...]
./tests/bugs/snapshot/bug-1260848.t  -  8 second
./tests/bugs/shard/bug-1488546.t  -  8 second
./tests/bugs/replicate/bug-1365455.t  -  8 second
./tests/bugs/quota/bug-1287996.t  -  8 second
./tests/bugs/quota/bug-1104692.t  -  8 second
./tests/bugs/md-cache/bug-1211863.t  -  8 second
./tests/bugs/io-cache/bug-858242.t  -  8 second
./tests/bugs/glusterd/bug-1499509-disconnect-in-brick-mux.t  -  8 second
./tests/bugs/glusterd/bug-1323287-real_path-handshake-test.t  -  8 second
./tests/bugs/glusterd/bug-1242875-do-not-pass-volinfo-quota.t  -  8 second
./tests/bugs/glusterd/bug-1223213-peerid-fix.t  -  8 second
./tests/bugs/glusterd/bug-1104642.t  -  8 second
./tests/bugs/glusterd/bug-1046308.t  -  8 second
./tests/bugs/fuse/bug-985074.t  -  8 second
./tests/bugs/fuse/bug-963678.t  -  8 second
./tests/bugs/ec/bug-1179050.t  -  8 second
./tests/bugs/distribute/bug-884597.t  -  8 second
./tests/bugs/distribute/bug-882278.t  -  8 second
./tests/bugs/distribute/bug-1368012.t  -  8 second
./tests/bugs/core/bug-949242.t  -  8 second
./tests/bugs/cli/bug-1087487.t  -  8 second
./tests/bugs/cli/bug-1022905.t  -  8 second
./tests/bugs/bug-1258069.t  -  8 second
./tests/bugs/bitrot/1209818-vol-info-show-scrub-process-properly.t  -  8 second
./tests/bitrot/br-stub.t  -  8 second
./tests/basic/pgfid-feat.t  -  8 second
./tests/basic/ec/ec-anonymous-fd.t  -  8 second
./tests/basic/afr/gfid-mismatch.t  -  8 second
./tests/basic/afr/gfid-heal.t  -  8 second
./tests/basic/afr/arbiter-statfs.t  -  8 second
./tests/gfid2path/get-gfid-to-path.t  -  7 second
./tests/gfid2path/block-mount-access.t  -  7 second
./tests/bugs/replicate/bug-767585-gfid.t  -  7 second
./tests/bugs/replicate/bug-1448804-check-quorum-type-values.t  -  7 second
./tests/bugs/replicate/bug-1101647.t  -  7 second
./tests/bugs/posix/bug-1360679.t  -  7 second
./tests/bugs/nfs/bug-1143880-fix-gNFSd-auth-crash.t  -  7 second
./tests/bugs/nfs/bug-1116503.t  -  7 second
./tests/bugs/glusterfs/bug-856455.t  -  7 second
./tests/bugs/glusterfs/bug-848251.t  -  7 second
./tests/bugs/glusterd/bug-948729/bug-948729-mode-script.t  -  7 second
./tests/bugs/glusterd/bug-948729/bug-948729-force.t  -  7 second
./tests/bugs/glusterd/bug-1482906-peer-file-blank-line.t  -  7 second
./tests/bugs/glusterd/bug-1446172-brick-mux-reset-brick.t  -  7 second
./tests/bugs/glusterd/bug-1094119-remove-replace-brick-support-from-glusterd.t  -  7 second
./tests/bugs/gfapi/bug-1447266/1460514.t  -  7 second
./tests/bugs/ec/bug-1227869.t  -  7 second
./tests/bugs/distribute/bug-1088231.t  -  7 second
./tests/bugs/core/bug-986429.t  -  7 second
./tests/bugs/core/bug-908146.t  -  7 second
./tests/bugs/core/bug-834465.t  -  7 second
./tests/bugs/bitrot/bug-1210684-scrub-pause-resume-error-handling.t  -  7 second
./tests/basic/tier/ctr-rename-overwrite.t  -  7 second
./tests/basic/gfapi/upcall-cache-invalidate.t  -  7 second
./tests/basic/ec/ec-fallocate.t  -  7 second
./tests/basic/afr/compounded-write-txns.t  -  7 second
./tests/basic/afr/arbiter-remove-brick.t  -  7 second
./tests/features/readdir-ahead.t  -  6 second
./tests/features/lock-migration/lkmigration-set-option.t  -  6 second
./tests/bugs/upcall/bug-1369430.t  -  6 second
./tests/bugs/transport/bug-873367.t  -  6 second
./tests/bugs/shard/bug-1258334.t  -  6 second
./tests/bugs/replicate/bug-966018.t  -  6 second
./tests/bugs/read-only/bug-1134822-read-only-default-in-graph.t  -  6 second
./tests/bugs/posix/bug-1122028.t  -  6 second
./tests/bugs/nfs/subdir-trailing-slash.t  -  6 second
./tests/bugs/nfs/bug-915280.t  -  6 second
./tests/bugs/nfs/bug-847622.t  -  6 second
./tests/bugs/nfs/bug-1210338.t  -  6 second
./tests/bugs/nfs/bug-1166862.t  -  6 second
./tests/bugs/md-cache/afr-stale-read.t  -  6 second
./tests/bugs/io-cache/bug-read-hang.t  -  6 second
./tests/bugs/glusterfs-server/bug-873549.t  -  6 second
./tests/bugs/glusterfs-server/bug-864222.t  -  6 second
./tests/bugs/glusterfs/bug-893378.t  -  6 second
./tests/bugs/glusterfs/bug-893338.t  -  6 second
./tests/bugs/glusterd/bug-948729/bug-948729.t  -  6 second
./tests/bugs/glusterd/bug-1179175-uss-option-validation.t  -  6 second
./tests/bugs/glusterd/bug-1102656.t  -  6 second
./tests/bugs/glusterd/bug-1022055.t  -  6 second
./tests/bugs/bug-1371806_1.t  -  6 second
./tests/bugs/bitrot/bug-1229134-bitd-not-support-vol-set.t  -  6 second
./tests/bitrot/bug-1221914.t  -  6 second
./tests/basic/nl-cache.t  -  6 second
./tests/basic/md-cache/bug-1317785.t  -  6 second
./tests/basic/hardlink-limit.t  -  6 second
./tests/basic/gfapi/glfs_xreaddirplus_r.t  -  6 second
./tests/basic/gfapi/glfd-lkowner.t  -  6 second
./tests/basic/gfapi/bug-1241104.t  -  6 second
./tests/basic/gfapi/anonymous_fd.t  -  6 second
./tests/basic/ec/nfs.t  -  6 second
./tests/basic/ec/ec-internal-xattrs.t  -  6 second
./tests/basic/ec/dht-rename.t  

[Gluster-Maintainers] Jenkins build is back to normal : regression-test-burn-in #3925

2018-03-22 Thread jenkins
See 




Re: [Gluster-Maintainers] glusterfs-3.12.7 released

2018-03-22 Thread Jiffin Tony Thottan



On Thursday 22 March 2018 01:38 PM, Jiffin Tony Thottan wrote:




On Thursday 22 March 2018 01:07 PM, Atin Mukherjee wrote:



On Thu, Mar 22, 2018 at 12:38 PM, Jiffin Tony Thottan
<jthot...@redhat.com> wrote:




On Thursday 22 March 2018 12:29 PM, Jiffin Tony Thottan wrote:




On Wednesday 21 March 2018 09:06 AM, Atin Mukherjee wrote:



On Wed, Mar 21, 2018 at 12:18 AM, Shyam Ranganathan
<srang...@redhat.com> wrote:

On 03/20/2018 01:10 PM, Jiffin Thottan wrote:
> Hi Shyam,
>
> Actually I planned to do the release on March 8th(posted
the release note on that day). But it didn't happen.
> I didn't merge any patches post sending the release
note(blocker bug had some merge conflict issue on that so I
skipped AFAIR).
> I performed 3.12.7 tagging yesterday and ran the build
job today.
>
> Can u please provide a suggestion here ? Do I need to
perform a 3.12.7-1 for the blocker bug ?

I see that the bug is marked against the tracker, but is not a
regression or an issue that is serious enough that it
cannot wait for
the next minor release.

Copied Atin to the mail, who opened that issue for his
comments. If he
agrees, let's get this moving and get the fix into the next
minor release.


Even though it's not a regression and is a day-1 bug with brick
multiplexing, the issue is severe enough to consider fixing it *asap*.
In this scenario, if you're running a multi-node cluster with brick
multiplexing enabled, one node goes down, some volume operations are
performed, and then the node comes back, the brick processes fail to
come up.


Is the issue's impact only on glusterd, or does any other component
need this fix?


Sorry I meant brick multiplexing not glusterd
--
Jiffin


If the issue was not reported by an upstream user/community member, I
prefer to take it in the next release.



IMO, assessment of an issue should be done based on its merit, not
based on where it originates from. It might be a fair question to ask
"do we have users who have brick multiplexing enabled" and, based on
that, take a call to fix it immediately or as part of the next update.
But at the same time, you're still exposing a known problem without
flagging a warning not to use brick multiplexing until this bug is
fixed.


I have not yet sent the announcement mail for the release, nor sent
release notes to https://docs.gluster.org/en. I can mention it
there.

--
Jiffin




Can you please tell me whether that works for you?
--
Jiffin





Regards,
Jiffin



>
> --
> Regards,
> Jiffin
>
>
>
>
> - Original Message -
> From: "Shyam Ranganathan" <srang...@redhat.com>
> To: jenk...@build.gluster.org, packag...@gluster.org, maintainers@gluster.org
> Sent: Tuesday, March 20, 2018 9:06:57 PM
> Subject: Re: [Gluster-Maintainers] glusterfs-3.12.7 released
>
> On 03/20/2018 11:19 AM, jenk...@build.gluster.org wrote:
>> SRC:
>> https://build.gluster.org/job/release-new/47/artifact/glusterfs-3.12.7.tar.gz
>> HASH:
>> https://build.gluster.org/job/release-new/47/artifact/glusterfs-3.12.7.sha512sum
>>
>> This release is made off jenkins-release-47
>
> Jiffin, there are about 6 patches ready in the 3.12 queue that are
> not merged for this release, why?
>

> https://review.gluster.org/#/projects/glusterfs,dashboards/dashboard:3-12-dashboard
>
> The tracker bug for 3.12.7 calls out
> https://bugzilla.redhat.com/show_bug.cgi?id=1543708 as a blocker, and
> has a patch, which is not merged.
>
> Was this some test packaging job?
>
>
>
>
>>
>>
>>

[Gluster-Maintainers] Maintainer's meeting minutes (21st March, 2018)

2018-03-22 Thread Amar Tumballi
Meeting date: 03/07/2018 (March 3rd, 2018. 19:30IST, 14:00UTC, 09:00EST)
BJ Link

   - Bridge: https://bluejeans.com/205933580
   - Download: https://bluejeans.com/s/huECj

Attendance

   - Amar, Kaleb, PPai, Nithya, Sac, Deepshika, Shyam (joined late)

Agenda

   - AI from previous meeting:
      - Email on version numbers: [Done]
      - Email on 4.1 features: [Done]
      - Email on bugzilla automation etc.: [Done]
   - Any more features required or wanted in 4.1?
      - Question to FB team: are you fine with the features being merged?
        Are any more features or bug fixes pending from the FB branch?
      - Question to Red Hat: are the features fine?
      - Question to the community: are there any concerns?
      - GD2: can we update the community about the proposal for 4.1?
         - [PPai] We can send the GitHub link of the project to the community.
   - https://bugzilla.redhat.com/show_bug.cgi?id=1193929
   - If the option change is agreed, what would be a good next version number?
      - [Sac] We need more discipline to have calendar-based releases.
      - [PPai] http://semver.org is more widely practised.
   - Round Table
      - [Kaleb] FYI, I’m NOT making progress with Debian packaging of GD2.
        I’ve been told that Marcus (a nfs-ganesha/ceph dev) has Debian
        packaging skillz. When he returns from the Ceph dev conf in China,
        I’ll see if I can get some of his time. Also, my appeals for help
        from Debian packagers and from Patrick Matthaie have gone
        unanswered. Debian packaging is already voodoo black magic; golang
        makes it even harder. ;-)
      - [amarts] Auto-tunable options are a future requirement for us.
        Everyone, please consider figuring out how they would work for your
        components.
      - [Shyam] I see a good response for features this time. The
        expectation is to meet what we promised. See the GitHub projects
        page for what is agreed for 4.1.
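
The semver.org convention mentioned in the minutes orders releases by numeric MAJOR.MINOR.PATCH components rather than by string comparison. A minimal Python sketch of the idea, using release numbers from this list (the helper name is illustrative, not part of any Gluster tooling):

```python
def parse_semver(version):
    """Split a 'MAJOR.MINOR.PATCH' string into an integer tuple,
    so versions compare numerically rather than lexically."""
    major, minor, patch = version.split(".")
    return (int(major), int(minor), int(patch))

# Tuples compare element-wise, so this orders releases correctly,
# unlike plain string comparison (where "4.10.0" < "4.2.0").
releases = ["4.0.1", "3.12.7", "4.1.0", "4.0.2"]
print(sorted(releases, key=parse_semver))
# -> ['3.12.7', '4.0.1', '4.0.2', '4.1.0']
```

Full semver also defines pre-release and build-metadata precedence rules, which this sketch deliberately omits.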




-- 
Amar Tumballi (amarts)


Re: [Gluster-Maintainers] glusterfs-3.12.7 released

2018-03-22 Thread Jiffin Tony Thottan



On Thursday 22 March 2018 01:07 PM, Atin Mukherjee wrote:



On Thu, Mar 22, 2018 at 12:38 PM, Jiffin Tony Thottan
<jthot...@redhat.com> wrote:




On Thursday 22 March 2018 12:29 PM, Jiffin Tony Thottan wrote:




On Wednesday 21 March 2018 09:06 AM, Atin Mukherjee wrote:



On Wed, Mar 21, 2018 at 12:18 AM, Shyam Ranganathan
<srang...@redhat.com> wrote:

On 03/20/2018 01:10 PM, Jiffin Thottan wrote:
> Hi Shyam,
>
> Actually I planned to do the release on March 8th(posted
the release note on that day). But it didn't happen.
> I didn't merge any patches post sending the release
note(blocker bug had some merge conflict issue on that so I
skipped AFAIR).
> I performed 3.12.7 tagging yesterday and ran the build job
today.
>
> Can u please provide a suggestion here ? Do I need to
perform a 3.12.7-1 for the blocker bug ?

I see that the bug is marked against the tracker, but is not a
regression or an issue that is serious enough that it cannot
wait for
the next minor release.

Copied Atin to the mail, who opened that issue for his
comments. If he
agrees, let's get this moving and get the fix into the next
minor release.


Even though it's not a regression and is a day-1 bug with brick
multiplexing, the issue is severe enough to consider fixing it *asap*.
In this scenario, if you're running a multi-node cluster with brick
multiplexing enabled, one node goes down, some volume operations are
performed, and then the node comes back, the brick processes fail to
come up.


Is the issue's impact only on glusterd, or does any other component
need this fix?


Sorry I meant brick multiplexing not glusterd
--
Jiffin


If the issue was not reported by an upstream user/community member, I
prefer to take it in the next release.



IMO, assessment of an issue should be done based on its merit, not
based on where it originates from. It might be a fair question to ask
"do we have users who have brick multiplexing enabled" and, based on
that, take a call to fix it immediately or as part of the next update.
But at the same time, you're still exposing a known problem without
flagging a warning not to use brick multiplexing until this bug is
fixed.


I have not yet sent the announcement mail for the release, nor sent
release notes to https://docs.gluster.org/en. I can mention it
there.

--
Jiffin






Regards,
Jiffin



>
> --
> Regards,
> Jiffin
>
>
>
>
> - Original Message -
> From: "Shyam Ranganathan" <srang...@redhat.com>
> To: jenk...@build.gluster.org, packag...@gluster.org, maintainers@gluster.org
> Sent: Tuesday, March 20, 2018 9:06:57 PM
> Subject: Re: [Gluster-Maintainers] glusterfs-3.12.7 released
>
> On 03/20/2018 11:19 AM, jenk...@build.gluster.org wrote:
>> SRC:
>> https://build.gluster.org/job/release-new/47/artifact/glusterfs-3.12.7.tar.gz
>> HASH:
>> https://build.gluster.org/job/release-new/47/artifact/glusterfs-3.12.7.sha512sum
>>
>> This release is made off jenkins-release-47
>
> Jiffin, there are about 6 patches ready in the 3.12 queue that are
> not merged for this release, why?
>

> https://review.gluster.org/#/projects/glusterfs,dashboards/dashboard:3-12-dashboard
>
> The tracker bug for 3.12.7 calls out
> https://bugzilla.redhat.com/show_bug.cgi?id=1543708 as a blocker, and
> has a patch, which is not merged.
>
> Was this some test packaging job?
>
>
>
>
>>
>>
>>


Re: [Gluster-Maintainers] [Gluster-devel] Release 4.1: LTM release targeted for end of May

2018-03-22 Thread Mohammed Rafi K C
Hi Shyam,

Sunny and I are working on the snapshot integration effort for GD2, so
we wanted to propose the feature for 4.1.

issue : https://github.com/gluster/glusterd2/issues/461

status : Two patch sets were targeted: one is merged and one is under
review. Another is under development.


Regards

Rafi KC



>
> On Wed, Mar 21, 2018 at 2:05 PM, Ravishankar N wrote:
>
>
>
> On 03/20/2018 07:07 PM, Shyam Ranganathan wrote:
>
> On 03/12/2018 09:37 PM, Shyam Ranganathan wrote:
>
> Hi,
>
> As we wind down on 4.0 activities (waiting on docs to hit
> the site, and
> packages to be available in CentOS repositories before
> announcing the
> release), it is time to start preparing for the 4.1 release.
>
> 4.1 is where we have GD2 fully functional and shipping
> with migration
> tools to aid Glusterd to GlusterD2 migrations.
>
> Other than the above, this is a call out for features that
> are in the
> works for 4.1. Please *post* the github issues to the
> *devel lists* that
> you would like as a part of 4.1, and also mention the
> current state of
> development.
>
> Thanks for those who responded. The github lane and milestones
> for the
> said features are updated, request those who mentioned issues
> being
> tracked for 4.1 check that these are reflected in the project
> lane [1].
>
> I have few requests as follows that if picked up would be a
> good thing
> to achieve by 4.1, volunteers welcome!
>
> - Issue #224: Improve SOS report plugin maintenance
>- https://github.com/gluster/glusterfs/issues/224
> 
>
> - Issue #259: Compilation warnings with gcc 7.x
>- https://github.com/gluster/glusterfs/issues/259
> 
>
> - Issue #411: Ensure python3 compatibility across code base
>- https://github.com/gluster/glusterfs/issues/411
> 
>
> - NFS Ganesha HA (storhaug)
>- Does this need an issue for Gluster releases to track?
> (maybe packaging)
>
> I will close the call for features by Monday 26th Mar, 2018.
> Post this,
> I would request that features that need to make it into 4.1 be
> raised as
> exceptions to the devel and maintainers list for evaluation.
>
>
> Hi Shyam,
>
> I want to add https://github.com/gluster/glusterfs/issues/363
>  also for 4.1. It
> is not a new feature but rather an enhancement to a volume option
> in AFR. I don't think it can qualify as a bug fix, so mentioning
> it here just in case it needs to be tracked too. The (only) patch
> is undergoing review cycles.
>
> Regards,
> Ravi
>
>
> Further, as we hit end of March, we would make it
> mandatory for features
> to have required spec and doc labels, before the code is
> merged, so
> factor in efforts for the same if not already done.
>
> Current 4.1 project release lane is empty! I cleaned it
> up, because I
> want to hear from all as to what content to add, than add
> things marked
> with the 4.1 milestone by default.
>
> [1] 4.1 Release lane:
> https://github.com/gluster/glusterfs/projects/1#column-1075416
> 
>
> Thanks,
> Shyam
> P.S: Also any volunteers to shadow/participate/run 4.1 as
> a release owner?
>
> Calling this out again!
>
> ___
> Gluster-devel mailing list
> gluster-de...@gluster.org 
> http://lists.gluster.org/mailman/listinfo/gluster-devel
> 
>
>
>
>
>
> -- 
> Thanks and Regards,
> Kotresh H R
>
>

Re: [Gluster-Maintainers] glusterfs-3.12.7 released

2018-03-22 Thread Atin Mukherjee
On Thu, Mar 22, 2018 at 12:38 PM, Jiffin Tony Thottan wrote:

>
>
> On Thursday 22 March 2018 12:29 PM, Jiffin Tony Thottan wrote:
>
>
>
> On Wednesday 21 March 2018 09:06 AM, Atin Mukherjee wrote:
>
>
>
On Wed, Mar 21, 2018 at 12:18 AM, Shyam Ranganathan wrote:
>
>> On 03/20/2018 01:10 PM, Jiffin Thottan wrote:
>> > Hi Shyam,
>> >
>> > Actually I planned to do the release on March 8th(posted the release
>> note on that day). But it didn't happen.
>> > I didn't merge any patches post sending the release note(blocker bug
>> had some merge conflict issue on that so I skipped AFAIR).
>> > I performed 3.12.7 tagging yesterday and ran the build job today.
>> >
>> > Can u please provide a suggestion here ? Do I need to perform a
>> 3.12.7-1 for the blocker bug ?
>>
>> I see that the bug is marked against the tracker, but is not a
>> regression or an issue that is serious enough that it cannot wait for
>> the next minor release.
>>
>> Copied Atin to the mail, who opened that issue for his comments. If he
>> agrees, let's get this moving and get the fix into the next minor
>> release.
>>
>>
> Even though it's not a regression and is a day-1 bug with brick
> multiplexing, the issue is severe enough to consider fixing it *asap*. In
> this scenario, if you're running a multi-node cluster with brick
> multiplexing enabled, one node goes down, some volume operations are
> performed, and then the node comes back, the brick processes fail to come
> up.
>
>
> Is the issue's impact only on glusterd, or does any other component need
> this fix?
>
>
> Sorry I meant brick multiplexing not glusterd
> --
> Jiffin
>
> If the issue was not reported by an upstream user/community member, I
> prefer to take it in the next release.
>
>
IMO, assessment of an issue should be done based on its merit, not based on
where it originates from. It might be a fair question to ask "do we have
users who have brick multiplexing enabled" and, based on that, take a call
to fix it immediately or as part of the next update. But at the same time,
you're still exposing a known problem without flagging a warning not to use
brick multiplexing until this bug is fixed.


>
> Regards,
> Jiffin
>
>
> >
>> > --
>> > Regards,
>> > Jiffin
>> >
>> >
>> >
>> >
>> > - Original Message -
>> > From: "Shyam Ranganathan" 
>> > To: jenk...@build.gluster.org, packag...@gluster.org,
>> maintainers@gluster.org
>> > Sent: Tuesday, March 20, 2018 9:06:57 PM
>> > Subject: Re: [Gluster-Maintainers] glusterfs-3.12.7 released
>> >
>> > On 03/20/2018 11:19 AM, jenk...@build.gluster.org wrote:
>> >> SRC: https://build.gluster.org/job/release-new/47/artifact/glusterfs-3.12.7.tar.gz
>> >> HASH: https://build.gluster.org/job/release-new/47/artifact/glusterfs-3.12.7.sha512sum
>> >>
>> >> This release is made off jenkins-release-47
>> >
>> > Jiffin, there are about 6 patches ready in the 3.12 queue that are not
>> > merged for this release. Why?
>> > https://review.gluster.org/#/projects/glusterfs,dashboards/dashboard:3-12-dashboard
>> >
>> > The tracker bug for 3.12.7 calls out
>> > https://bugzilla.redhat.com/show_bug.cgi?id=1543708 as a blocker, and
>> > has a patch, which is not merged.
>> >
>> > Was this some test packaging job?
>> >
>> >
>> >
>> >
>> >>
>> >>
>> >>
>> >> ___
>> >> maintainers mailing list
>> >> maintainers@gluster.org
>> >> http://lists.gluster.org/mailman/listinfo/maintainers
>> >>
>> >
>>
>
>
>
>
> ___
> maintainers mailing 
> listmaintainers@gluster.orghttp://lists.gluster.org/mailman/listinfo/maintainers
>
>
>
>
>


Re: [Gluster-Maintainers] glusterfs-3.12.7 released

2018-03-22 Thread Jiffin Tony Thottan



On Thursday 22 March 2018 12:29 PM, Jiffin Tony Thottan wrote:




On Wednesday 21 March 2018 09:06 AM, Atin Mukherjee wrote:



On Wed, Mar 21, 2018 at 12:18 AM, Shyam Ranganathan
<srang...@redhat.com> wrote:


On 03/20/2018 01:10 PM, Jiffin Thottan wrote:
> Hi Shyam,
>
> Actually I planned to do the release on March 8th (I posted the
> release note that day), but it didn't happen.
> I didn't merge any patches after sending the release note (the
> blocker bug had a merge-conflict issue, so I skipped it, AFAIR).
> I performed the 3.12.7 tagging yesterday and ran the build job today.
>
> Can you please provide a suggestion here? Do I need to perform a
> 3.12.7-1 release for the blocker bug?

I see that the bug is marked against the tracker, but is not a
regression or an issue that is serious enough that it cannot wait for
the next minor release.

Copied Atin, who opened that issue, to the mail for his comments.
If he
agrees, let's get this moving and get the fix into the next minor
release.


Even though it's not a regression and is a day-1 bug with brick 
multiplexing, the issue is severe enough to consider fixing it 
*asap*. In this scenario, if you're running a multi-node cluster 
with brick multiplexing enabled, one node is down, and some volume 
operations are performed, then when the node comes back, the brick 
processes fail to come up.


Is the issue's impact limited to glusterd, or does any other component 
need this fix?


Sorry, I meant brick multiplexing, not glusterd.
--
Jiffin
If the issue is not reported by an upstream user or the community, I 
prefer to take it in the next release.


Regards,
Jiffin



>
> --
> Regards,
> Jiffin
>
>
>
>
> - Original Message -
> From: "Shyam Ranganathan" <srang...@redhat.com>
> To: jenk...@build.gluster.org, packag...@gluster.org, maintainers@gluster.org
> Sent: Tuesday, March 20, 2018 9:06:57 PM
> Subject: Re: [Gluster-Maintainers] glusterfs-3.12.7 released
>
> On 03/20/2018 11:19 AM, jenk...@build.gluster.org wrote:
>> SRC: https://build.gluster.org/job/release-new/47/artifact/glusterfs-3.12.7.tar.gz
>> HASH: https://build.gluster.org/job/release-new/47/artifact/glusterfs-3.12.7.sha512sum
>>
>> This release is made off jenkins-release-47
>
> Jiffin, there are about 6 patches ready in the 3.12 queue that are not
> merged for this release. Why?
> https://review.gluster.org/#/projects/glusterfs,dashboards/dashboard:3-12-dashboard
>
> The tracker bug for 3.12.7 calls out
> https://bugzilla.redhat.com/show_bug.cgi?id=1543708 as a blocker, and
> has a patch, which is not merged.
>
> Was this some test packaging job?
>
>
>
>
>>
>>
>>

>>

>







