Re: [Gluster-Maintainers] [Gluster-devel] Proposal to change the version numbers of Gluster project

2018-03-15 Thread Vijay Bellur
On Wed, Mar 14, 2018 at 9:48 PM, Atin Mukherjee  wrote:

>
>
> On Thu, Mar 15, 2018 at 9:45 AM, Vijay Bellur  wrote:
>
>>
>>
>> On Wed, Mar 14, 2018 at 5:40 PM, Shyam Ranganathan 
>> wrote:
>>
>>> On 03/14/2018 07:04 PM, Joe Julian wrote:
>>> >
>>> >
>>> > On 03/14/2018 02:25 PM, Vijay Bellur wrote:
>>> >>
>>> >>
>>> >> On Tue, Mar 13, 2018 at 4:25 AM, Kaleb S. KEITHLEY
>>> >> > wrote:
>>> >>
>>> >> On 03/12/2018 02:32 PM, Shyam Ranganathan wrote:
>>> >> > On 03/12/2018 10:34 AM, Atin Mukherjee wrote:
>>> >> >>   *
>>> >> >>
>>> >> >> After 4.1, we want to move to either continuous
>>> >> numbering (like
>>> >> >> Fedora), or time based (like ubuntu etc) release
>>> >> numbers. Which
>>> >> >> is the model we pick is not yet finalized. Happy to
>>> >> hear opinions.
>>> >> >>
>>> >> >>
>>> >> >> Not sure how the time-based release numbers would make more
>>> >> sense than
>>> >> >> the one which Fedora follows. But before I comment further on
>>> >> this I
>>> >> >> need to first get clarity on how the op-versions will be
>>> >> managed. I'm
>>> >> >> assuming once we're at GlusterFS 4.1, post that the releases
>>> >> will be
>>> >> >> numbered as GlusterFS5, GlusterFS6 ... So from that
>>> >> perspective, are we
>>> >> >> going to stick to our current numbering scheme of op-version
>>> >> where for
>>> >> >> GlusterFS5 the op-version will be 5?
>>> >> >
>>> >> > Say, yes.
>>> >> >
>>> >> > The question is why tie the op-version to the release number?
>>> That
>>> >> > mental model needs to break IMO.
>>> >> >
>>> >> > With current options like,
>>> >> > https://docs.gluster.org/en/latest/Upgrade-Guide/op_version/
>>> >> > it is
>>> >> > easier to determine the op-version of the cluster and what it
>>> >> should be,
>>> >> > and hence this need not be tied to the gluster release version.
>>> >> >
>>> >> > Thoughts?
>>> >>
>>> >> I'm okay with that, but——
>>> >>
>>> >> Just to play the Devil's Advocate, having an op-version that bears
>>> >> some
>>> >> resemblance to the _version_ number may make it easy/easier to
>>> >> determine
>>> >> what the op-version ought to be.
>>> >>
>>> >> We aren't going to run out of numbers, so there's no reason to be
>>> >> "efficient" here. Let's try to make it easy. (Easy to not make a
>>> >> mistake.)
>>> >>
>>> >> My 2¢
>>> >>
>>> >>
>>> >> +1 to the overall release cadence change proposal and what Kaleb
>>> >> mentions here.
>>> >>
>>> >> Tying op-versions to release numbers seems like an easier approach
>>> >> than others & one to which we are accustomed. What are the benefits
>>> >> of breaking this model?
>>> >>
>>> > There is a bit of confusion among the user base when a release happens
>>> > but the op-version doesn't have a commensurate bump. People ask why
>>> they
>>> > can't set the op-version to match the gluster release version they have
>>> > installed. If it was completely disconnected from the release version,
>>> > that might be a great enough mental disconnect that the expectation
>>> > could go away which would actually cause less confusion.
>>>
>>> Above is the reason I state it as well (the breaking of the mental model
>>> around this): why tie them together when they are not totally related? I
>>> also agree the notion is present that they are tied together and hence
>>> related, but it may serve us better to break that link.
>>>
>>>
>>
>> I see your perspective. Another related reason for not introducing an
>> op-version bump in a new release would be that there are no incompatible
>> features introduced (in the new release). Hence it makes sense to preserve
>> the older op-version.
>>
>> To make everyone's lives simpler, would it be useful to introduce a
>> command that provides the max op-version to release number mapping? The
>> output of the command could look like:
>>
>> op-version X: 3.7.0 to 3.7.11
>> op-version Y: 3.7.12 to x.y.z
>>
>
> We have already introduced an option called cluster.max-op-version; one
> can run "gluster v get all cluster.max-op-version" to determine the
> highest op-version the cluster can be bumped up to. IMO, this saves
> users from having to consult the documentation to find which op-version
> a given x.y.z release should be bumped to. Isn't that sufficient for
> this requirement?
>
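The "op-version X: release range" report Vijay sketched above could be generated from a simple table. The sketch below is illustrative only: the numeric op-version values follow gluster's historical XYYZZ encoding of release numbers (e.g. 30712 for 3.7.12), but the table contents and the helper function are hypothetical, not an existing gluster interface.

```python
# Illustrative sketch of the proposed "op-version -> release range" report.
# The values below follow the historical XYYZZ encoding of release numbers,
# but this table and function are hypothetical, not actual gluster code.
OP_VERSION_RANGES = [
    (30700, "3.7.0", "3.7.11"),
    (30712, "3.7.12", "3.7.x"),
]

def format_op_version_report(ranges):
    """Render the mapping in the style proposed in the mail above."""
    return "\n".join(
        "op-version %d: %s to %s" % (op, first, last)
        for op, first, last in ranges
    )
```

Printing the report for the sample table would emit one line per op-version, e.g. "op-version 30700: 3.7.0 to 3.7.11".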


I think it is a more elegant solution than what I described.  Do we have a
single interface to determine the current & max op-versions of all members
in the trusted storage pool? If not, it might be a useful enhancement to
add at some point in time.
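A minimal sketch of what such a pool-wide interface could compute, assuming each peer reports a (current, max) op-version pair. This is a hypothetical helper, not glusterd code: the cluster-wide current op-version must agree across peers, and the highest op-version the pool can be bumped to is bounded by the least capable peer.

```python
def pool_op_version_summary(peers):
    """Given {peer: (current, max)} op-versions for every member of the
    trusted storage pool, return (cluster_current, cluster_max).

    Hypothetical helper: the current op-version must be common to all
    peers; the pool's effective maximum is the minimum of the peers'
    max-op-versions (the least capable member sets the ceiling).
    """
    currents = {cur for cur, _ in peers.values()}
    if len(currents) != 1:
        raise ValueError("op-version mismatch across peers: %s" % currents)
    cluster_max = min(mx for _, mx in peers.values())
    return currents.pop(), cluster_max
```

For example, a pool where one peer only supports up to 31305 cannot be bumped past 31305 even if every other peer supports 40000.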

If we don't hear many complaints about op-version mismatches from users, I

Re: [Gluster-Maintainers] [gluster-packaging] glusterfs-4.0.0 released (v4.0.0-2 respin)

2018-03-15 Thread Shyam Ranganathan
On 03/12/2018 03:40 PM, Shyam Ranganathan wrote:
>> Please test the CentOS packages and give feedback so that packages can
>> be tagged for release.
> Tested and the test passes! We are good to publish.

Humble, are the container images done and published?

Further, I am thinking we provide a readme of sorts on the download
server, on how to get to the container image, here [1]. Thoughts?

Shyam

[1] https://download.gluster.org/pub/gluster/glusterfs/4.0/4.0.0-2/
___
maintainers mailing list
maintainers@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] Build failed in Jenkins: experimental-periodic #261

2018-03-15 Thread jenkins
See 

--
[...truncated 989.56 KB...]
reg = 0x1fa4ac0
sleepts = {tv_sec = 1, tv_nsec = 0}
event = 0x7fb4fc000b20
tmp = 0x7fb4fc000da0
old_THIS = 0x7fb535a90280 
#2  0x7fb5345e2e25 in start_thread () from /lib64/libpthread.so.0
No symbol table info available.
#3  0x7fb533eaf34d in clone () from /lib64/libc.so.6
No symbol table info available.

Thread 10 (Thread 0x7fb51cff9700 (LWP 25391)):
#0  0x7fb5345e6945 in pthread_cond_wait@@GLIBC_2.3.2 () from 
/lib64/libpthread.so.0
No symbol table info available.
#1  0x7fb5271e2efe in pick_event_ordered (ev=0x7fb5c008, 
event=0x7fb51cff8ea8) at 
:226
No locals.
#2  0x7fb5271e302f in gf_changelog_callback_invoker (arg=0x7fb5c008) at 
:258
this = 0x7fb520020310
entry = 0x7fb5aef0
vec = 0x0
event = 0x0
ev = 0x7fb5c008
#3  0x7fb5345e2e25 in start_thread () from /lib64/libpthread.so.0
No symbol table info available.
#4  0x7fb533eaf34d in clone () from /lib64/libc.so.6
No symbol table info available.

Thread 9 (Thread 0x7fb51f7fe700 (LWP 25249)):
#0  0x7fb5345e6945 in pthread_cond_wait@@GLIBC_2.3.2 () from 
/lib64/libpthread.so.0
No symbol table info available.
#1  0x7fb5273ed1cf in __br_pick_object (priv=0x7fb52001bf30) at 
:637
object = 0x0
#2  0x7fb5273ed278 in br_process_object (arg=0x7fb52000db30) at 
:666
this = 0x7fb52000db30
object = 0x0
priv = 0x7fb52001bf30
ret = -1
__FUNCTION__ = "br_process_object"
#3  0x7fb5345e2e25 in start_thread () from /lib64/libpthread.so.0
No symbol table info available.
#4  0x7fb533eaf34d in clone () from /lib64/libc.so.6
No symbol table info available.

Thread 8 (Thread 0x7fb524e9d700 (LWP 25247)):
#0  0x7fb5345e6945 in pthread_cond_wait@@GLIBC_2.3.2 () from 
/lib64/libpthread.so.0
No symbol table info available.
#1  0x7fb5273ed1cf in __br_pick_object (priv=0x7fb52001bf30) at 
:637
object = 0x0
#2  0x7fb5273ed278 in br_process_object (arg=0x7fb52000db30) at 
:666
this = 0x7fb52000db30
object = 0x0
priv = 0x7fb52001bf30
ret = -1
__FUNCTION__ = "br_process_object"
#3  0x7fb5345e2e25 in start_thread () from /lib64/libpthread.so.0
No symbol table info available.
#4  0x7fb533eaf34d in clone () from /lib64/libc.so.6
No symbol table info available.

Thread 7 (Thread 0x7fb525e9f700 (LWP 25245)):
#0  0x7fb5345e6945 in pthread_cond_wait@@GLIBC_2.3.2 () from 
/lib64/libpthread.so.0
No symbol table info available.
#1  0x7fb5271e2744 in gf_changelog_connection_janitor (arg=0x7fb520020310) 
at 
:47
ret = 0
this = 0x7fb520020310
priv = 0x7fb520012120
entry = 0x0
event = 0x0
ev = 0x0
drained = 0
__FUNCTION__ = "gf_changelog_connection_janitor"
#2  0x7fb5345e2e25 in start_thread () from /lib64/libpthread.so.0
No symbol table info available.
#3  0x7fb533eaf34d in clone () from /lib64/libc.so.6
No symbol table info available.

Thread 6 (Thread 0x7fb5266a0700 (LWP 25244)):
#0  0x7fb533e761ad in nanosleep () from /lib64/libc.so.6
No symbol table info available.
#1  0x7fb533ea6ec4 in usleep () from /lib64/libc.so.6
No symbol table info available.
#2  0x7fb5358285c4 in tbf_tokengenerator (arg=0x7fb520012060) at 
:102
tokenrate = 131072
maxtokens = 524288
token_gen_interval = 60
bucket = 0x7fb520012060
#3  0x7fb5345e2e25 in start_thread () from /lib64/libpthread.so.0
No symbol table info available.
#4  0x7fb533eaf34d in clone () from /lib64/libc.so.6
No symbol table info available.

Thread 5 (Thread 0x7fb52a588700 (LWP 25231)):
#0  0x7fb533ea67a3 in select () from /lib64/libc.so.6
No symbol table info available.
#1  0x7fb5358225f2 in runner (arg=0x1fa9690) at 
:179
tv = {tv_sec = 0, tv_usec = 949117}
base = 0x1fa9690
#2  

[Gluster-Maintainers] Build failed in Jenkins: regression-test-burn-in #3918

2018-03-15 Thread jenkins
See 


--
[...truncated 2.18 MB...]
this = 0x7f98e01ee4d0
c_clnt = 0x7f98e024dff8
crpc = 0x0
__FUNCTION__ = "changelog_ev_connector"
#2  0x7f990b4efe25 in start_thread () from /lib64/libpthread.so.0
No symbol table info available.
#3  0x7f990adbc34d in clone () from /lib64/libc.so.6
No symbol table info available.

Thread 6 (Thread 0x7f96b2568700 (LWP 18299)):
#0  0x7f990b4f3945 in pthread_cond_wait@@GLIBC_2.3.2 () from 
/lib64/libpthread.so.0
No symbol table info available.
#1  0x7f98fd9e7295 in br_stub_worker (data=0x7f98e01efef0) at 
:327
priv = 0x7f98e0247010
this = 0x7f98e01efef0
stub = 0x0
#2  0x7f990b4efe25 in start_thread () from /lib64/libpthread.so.0
No symbol table info available.
#3  0x7f990adbc34d in clone () from /lib64/libc.so.6
No symbol table info available.

Thread 5 (Thread 0x7f96b2d69700 (LWP 18298)):
#0  0x7f990b4f3945 in pthread_cond_wait@@GLIBC_2.3.2 () from 
/lib64/libpthread.so.0
No symbol table info available.
#1  0x7f98fd9eb500 in br_stub_signth (arg=0x7f98e01efef0) at 
:868
this = 0x7f98e01efef0
priv = 0x7f98e0247010
sigstub = 0x0
#2  0x7f990b4efe25 in start_thread () from /lib64/libpthread.so.0
No symbol table info available.
#3  0x7f990adbc34d in clone () from /lib64/libc.so.6
No symbol table info available.

Thread 4 (Thread 0x7f96b2daa700 (LWP 18297)):
#0  0x7f990b4f3cf2 in pthread_cond_timedwait@@GLIBC_2.3.2 () from 
/lib64/libpthread.so.0
No symbol table info available.
#1  0x7f98fcb15727 in iot_worker (data=0x7f98e022ae90) at 
:195
conf = 0x7f98e022ae90
this = 0x7f98e01f9970
stub = 0x0
sleep_till = {tv_sec = 1521129092, tv_nsec = 191787234}
ret = 0
pri = -1
bye = false
__FUNCTION__ = "iot_worker"
#2  0x7f990b4efe25 in start_thread () from /lib64/libpthread.so.0
No symbol table info available.
#3  0x7f990adbc34d in clone () from /lib64/libc.so.6
No symbol table info available.

Thread 3 (Thread 0x7f96b2eab700 (LWP 18296)):
#0  0x7f990b4f3945 in pthread_cond_wait@@GLIBC_2.3.2 () from 
/lib64/libpthread.so.0
No symbol table info available.
#1  0x7f98fc2b8329 in index_worker (data=0x7f98e01ffb80) at 
:218
priv = 0x7f98e021a660
this = 0x7f98e01ffb80
stub = 0x0
bye = false
#2  0x7f990b4efe25 in start_thread () from /lib64/libpthread.so.0
No symbol table info available.
#3  0x7f990adbc34d in clone () from /lib64/libc.so.6
No symbol table info available.

Thread 2 (Thread 0x7f96b3fac700 (LWP 18295)):
#0  0x7f990b4f3945 in pthread_cond_wait@@GLIBC_2.3.2 () from 
/lib64/libpthread.so.0
No symbol table info available.
#1  0x7f990c45ee28 in rpcsvc_request_handler (arg=0x7f98f8046670) at 
:1976
program = 0x7f98f8046670
req = 0x0
actor = 0x0
done = false
ret = 0
__FUNCTION__ = "rpcsvc_request_handler"
#2  0x7f990b4efe25 in start_thread () from /lib64/libpthread.so.0
No symbol table info available.
#3  0x7f990adbc34d in clone () from /lib64/libc.so.6
No symbol table info available.

Thread 1 (Thread 0x7f96b47ad700 (LWP 18294)):
#0  0x7f98fc08682d in quota_lookup (frame=0x7f98c689c1c8, 
this=0x7f98bc01acf0, loc=0x7f96b47ac8d0, xattr_req=0x0) at 
:1641
priv = 0x0
ret = -1
local = 0x0
__FUNCTION__ = "quota_lookup"
#1  0x7f98f7de427d in io_stats_lookup (frame=0x7f98c68aef08, 
this=0x7f98bc01c4e0, loc=0x7f96b47ac8d0, xdata=0x0) at 
:2784
_new = 0x7f98c689c1c8
old_THIS = 0x7f98bc01c4e0
next_xl_fn = 0x7f98fc0867d8 
tmp_cbk = 0x7f98f7dd8142 
__FUNCTION__ = "io_stats_lookup"
#2  0x7f990c772dfb in default_lookup (frame=0x7f98c68aef08, 
this=0x7f98bc01e040, loc=0x7f96b47ac8d0, xdata=0x0) at defaults.c:2714
old_THIS = 0x7f98bc01e040
next_xl = 0x7f98bc01c4e0
next_xl_fn = 0x7f98f7de3e62 
opn = 27
__FUNCTION__ = "default_lookup"
#3  0x7f990c6ef700 in syncop_lookup (subvol=0x7f98bc01e040, 
loc=0x7f96b47ac8d0, iatt=0x7f96b47ac830, parent=0x0, xdata_in=0x0, 
xdata_out=0x0) at 

[Gluster-Maintainers] Build failed in Jenkins: regression-test-burn-in #3917

2018-03-15 Thread jenkins
See 


--
[...truncated 920.15 KB...]
./tests/bugs/glusterfs-server/bug-904300.t  -  9 second
./tests/bugs/glusterfs-server/bug-887145.t  -  9 second
./tests/bugs/geo-replication/bug-877293.t  -  9 second
./tests/bugs/ec/bug-1179050.t  -  9 second
./tests/bugs/cli/bug-1030580.t  -  9 second
./tests/bugs/bitrot/1209752-volume-status-should-show-bitrot-scrub-info.t  -  9 second
./tests/bugs/access-control/bug-958691.t  -  9 second
./tests/basic/stats-dump.t  -  9 second
./tests/basic/quota-nfs.t  -  9 second
./tests/basic/quota_aux_mount.t  -  9 second
./tests/basic/inode-quota-enforcing.t  -  9 second
./tests/performance/open-behind.t  -  8 second
./tests/bugs/upcall/bug-1458127.t  -  8 second
./tests/bugs/upcall/bug-1227204.t  -  8 second
./tests/bugs/tier/bug-1205545-CTR-and-trash-integration.t  -  8 second
./tests/bugs/snapshot/bug-1260848.t  -  8 second
./tests/bugs/shard/bug-1488546.t  -  8 second
./tests/bugs/quota/bug-1250582-volume-reset-should-not-remove-quota-quota-deem-statfs.t  -  8 second
./tests/bugs/quota/bug-1243798.t  -  8 second
./tests/bugs/glusterfs/bug-902610.t  -  8 second
./tests/bugs/glusterfs/bug-861015-log.t  -  8 second
./tests/bugs/glusterd/bug-949930.t  -  8 second
./tests/bugs/fuse/bug-985074.t  -  8 second
./tests/bugs/distribute/bug-1086228.t  -  8 second
./tests/bugs/core/bug-986429.t  -  8 second
./tests/bugs/core/bug-908146.t  -  8 second
./tests/bugs/cli/bug-1087487.t  -  8 second
./tests/bugs/changelog/bug-1208470.t  -  8 second
./tests/bugs/bitrot/1209818-vol-info-show-scrub-process-properly.t  -  8 second
./tests/bitrot/br-stub.t  -  8 second
./tests/basic/volume-status.t  -  8 second
./tests/basic/pgfid-feat.t  -  8 second
./tests/basic/glusterd/arbiter-volume-probe.t  -  8 second
./tests/basic/gfapi/mandatory-lock-optimal.t  -  8 second
./tests/basic/fop-sampling.t  -  8 second
./tests/basic/ec/ec-read-policy.t  -  8 second
./tests/basic/ec/ec-anonymous-fd.t  -  8 second
./tests/basic/afr/arbiter-statfs.t  -  8 second
./tests/gfid2path/get-gfid-to-path.t  -  7 second
./tests/gfid2path/block-mount-access.t  -  7 second
./tests/bugs/replicate/bug-1448804-check-quorum-type-values.t  -  7 second
./tests/bugs/replicate/bug-1365455.t  -  7 second
./tests/bugs/replicate/bug-1250170-fsync.t  -  7 second
./tests/bugs/posix/bug-1360679.t  -  7 second
./tests/bugs/md-cache/bug-1211863.t  -  7 second
./tests/bugs/glusterd/bug-948729/bug-948729-force.t  -  7 second
./tests/bugs/glusterd/bug-1242875-do-not-pass-volinfo-quota.t  -  7 second
./tests/bugs/gfapi/bug-1447266/1460514.t  -  7 second
./tests/bugs/fuse/bug-963678.t  -  7 second
./tests/bugs/ec/bug-1227869.t  -  7 second
./tests/bugs/distribute/bug-884597.t  -  7 second
./tests/bugs/distribute/bug-882278.t  -  7 second
./tests/bugs/distribute/bug-1368012.t  -  7 second
./tests/bugs/distribute/bug-1088231.t  -  7 second
./tests/bugs/core/bug-949242.t  -  7 second
./tests/bugs/bug-1371806_2.t  -  7 second
./tests/bugs/bug-1258069.t  -  7 second
./tests/bugs/bitrot/1207029-bitrot-daemon-should-start-on-valid-node.t  -  7 second
./tests/basic/tier/ctr-rename-overwrite.t  -  7 second
./tests/basic/posix/shared-statfs.t  -  7 second
./tests/basic/ec/nfs.t  -  7 second
./tests/basic/distribute/throttle-rebal.t  -  7 second
./tests/basic/afr/heal-info.t  -  7 second
./tests/basic/afr/gfid-mismatch.t  -  7 second
./tests/basic/afr/gfid-heal.t  -  7 second
./tests/basic/afr/arbiter-remove-brick.t  -  7 second
./tests/features/lock-migration/lkmigration-set-option.t  -  6 second
./tests/bugs/upcall/bug-upcall-stat.t  -  6 second
./tests/bugs/upcall/bug-1369430.t  -  6 second
./tests/bugs/transport/bug-873367.t  -  6 second
./tests/bugs/snapshot/bug-1064768.t  -  6 second
./tests/bugs/shard/bug-1259651.t  -  6 second
./tests/bugs/shard/bug-1258334.t  -  6 second
./tests/bugs/replicate/bug-767585-gfid.t  -  6 second
./tests/bugs/replicate/bug-1101647.t  -  6 second
./tests/bugs/quota/bug-1287996.t  -  6 second
./tests/bugs/quota/bug-1104692.t  -  6 second
./tests/bugs/posix/bug-1122028.t  -  6 second
./tests/bugs/nfs/bug-915280.t  -  6 second
./tests/bugs/nfs/bug-1143880-fix-gNFSd-auth-crash.t  -  6 second
./tests/bugs/nfs/bug-1116503.t  -  6 second
./tests/bugs/md-cache/bug-1211863_unlink.t  -  6 second
./tests/bugs/io-cache/bug-read-hang.t  -  6 second
./tests/bugs/io-cache/bug-858242.t  -  6 second
./tests/bugs/glusterfs-server/bug-873549.t  -  6 second
./tests/bugs/glusterfs/bug-856455.t  -  6 second
./tests/bugs/glusterfs/bug-848251.t  -  6 second
./tests/bugs/glusterd/bug-948729/bug-948729.t  -  6 second
./tests/bugs/glusterd/bug-1482906-peer-file-blank-line.t  -  6 second
./tests/bugs/glusterd/bug-1091935-brick-order-check-from-cli-to-glusterd.t  -  6 second
./tests/bugs/cli/bug-1022905.t  -  6 second
./tests/bugs/bitrot/bug-1210684-scrub-pause-resume-error-handling.t  -  6 second
./tests/bitrot/bug-1221914.t  

Re: [Gluster-Maintainers] [Gluster-devel] Release 4.1: LTM release targeted for end of May

2018-03-15 Thread Susant Palai
Hi,
Would like to propose the Cloudsync xlator (for the archival use-case) for 4.1
(github issue-id #387).
The initial patch (under review) is posted here:
https://review.gluster.org/#/c/18532/.
Spec file: https://review.gluster.org/#/c/18854/

Thanks,
Susant


On Thu, Mar 15, 2018 at 4:05 PM, Ravishankar N 
wrote:

>
>
> On 03/13/2018 07:07 AM, Shyam Ranganathan wrote:
>
>> Hi,
>>
>> As we wind down on 4.0 activities (waiting on docs to hit the site, and
>> packages to be available in CentOS repositories before announcing the
>> release), it is time to start preparing for the 4.1 release.
>>
>> 4.1 is where we have GD2 fully functional and shipping with migration
>> tools to aid Glusterd to GlusterD2 migrations.
>>
>> Other than the above, this is a call out for features that are in the
>> works for 4.1. Please *post* the github issues to the *devel lists* that
>> you would like as a part of 4.1, and also mention the current state of
>> development.
>>
> Hi,
>
> We are targeting the 'thin-arbiter' feature for 4.1 :
> https://github.com/gluster/glusterfs/issues/352
> Status: High level design is there in the github issue.
> Thin arbiter xlator patch https://review.gluster.org/#/c/19545/ is
> undergoing reviews.
> Implementation details on AFR and glusterd(2) related changes are being
> discussed.  Will make sure all patches are posted against issue 352.
>
> Thanks,
> Ravi
>
>
>
>> Further, as we hit end of March, we would make it mandatory for features
>> to have required spec and doc labels, before the code is merged, so
>> factor in efforts for the same if not already done.
>>
>> Current 4.1 project release lane is empty! I cleaned it up, because I
>> want to hear from everyone as to what content to add, rather than adding
>> things marked with the 4.1 milestone by default.
>>
>> Thanks,
>> Shyam
>> P.S: Also any volunteers to shadow/participate/run 4.1 as a release owner?
>> ___
>> Gluster-devel mailing list
>> gluster-de...@gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-devel
>>
>
>


Re: [Gluster-Maintainers] [Gluster-devel] Proposal to change the version numbers of Gluster project

2018-03-15 Thread Shyam Ranganathan
On 03/15/2018 12:48 AM, Atin Mukherjee wrote:
> [...]
> 
> I see your perspective. Another related reason for not introducing
> an op-version bump in a new release would be that there are no
> incompatible features introduced (in the new release). Hence it
> makes sense to preserve the older op-version.
> 
> To make 

Re: [Gluster-Maintainers] [Gluster-devel] Release 4.1: LTM release targeted for end of May

2018-03-15 Thread Ravishankar N



On 03/13/2018 07:07 AM, Shyam Ranganathan wrote:

Hi,

As we wind down on 4.0 activities (waiting on docs to hit the site, and
packages to be available in CentOS repositories before announcing the
release), it is time to start preparing for the 4.1 release.

4.1 is where we have GD2 fully functional and shipping with migration
tools to aid Glusterd to GlusterD2 migrations.

Other than the above, this is a call out for features that are in the
works for 4.1. Please *post* the github issues to the *devel lists* that
you would like as a part of 4.1, and also mention the current state of
development.

Hi,

We are targeting the 'thin-arbiter' feature for 4.1:
https://github.com/gluster/glusterfs/issues/352

Status: High level design is there in the github issue.
Thin arbiter xlator patch https://review.gluster.org/#/c/19545/ is 
undergoing reviews.
Implementation details on AFR and glusterd(2) related changes are being 
discussed.  Will make sure all patches are posted against issue 352.


Thanks,
Ravi



Further, as we hit end of March, we would make it mandatory for features
to have required spec and doc labels, before the code is merged, so
factor in efforts for the same if not already done.

Current 4.1 project release lane is empty! I cleaned it up, because I
want to hear from everyone as to what content to add, rather than adding
things marked with the 4.1 milestone by default.

Thanks,
Shyam
P.S: Also any volunteers to shadow/participate/run 4.1 as a release owner?

