Re: [Gluster-devel] [Gluster-Maintainers] Release 4.0: Schedule and scope clarity (responses needed)

2017-11-20 Thread Amar Tumballi
Thanks for this compilation. Responses inline.

On Tue, Nov 21, 2017 at 2:34 AM, Shyam Ranganathan wrote:

> Hi,
>
> As this is a longish mail, there are a few asks below that I request
> folks to focus on and answer.
>
> 4.0 is an STM (Short Term Maintenance) release; further, 4.1 is also slated
> to be an STM release (although the web pages read differently and will be
> corrected shortly). Finally, 4.2 would be the first LTM (Long Term
> Maintenance) release in the 4.x release line for Gluster.
>
> * Schedule *
> The above also assumes that 4.0 will release 2 months after 3.13, which
> puts 4.0 branching (also read as the feature freeze deadline) around
> mid-December (4 weeks from now).
>
> The 4.0/4.1/4.2 release calendar hence looks as follows:
>
> - Release 4.0: (STM)
>   - Feature freeze/branching: mid-December
>   - Release date: Jan 31st, 2018
> - Release 4.1: (STM)
>   - Feature freeze/branching: mid-March
>   - Release date: Apr 30th, 2018
> - Release 4.2: (LTM, release 3.10 EOL'd)
>   - Feature freeze/branching: mid-June
>   - Release date: Jul 31st, 2018
>
> * Scope *
>
> The main focus in 4.0 is landing GlusterD2, and all efforts towards this
> take priority.
>
>
The main thing pending on the filesystem side for maintainers is getting the
xlator options sorted out (with default options, etc.). I have sent a few
non-conflicting patches from the experimental branch to master, and many of
them fail regression tests right now.

The GD2 team is working on getting the milestones set up, with their own
issues marked against them. Once that is complete, I will post the links here
for others to follow and support.


> Further big features in 4.0 are around GFProxy, protocol layer changes,
> monitoring and usability changes, FUSE catchup, +1 scaling.
>
>
+1 scaling is more of an enhancement with GD2.


> Also, some code cleanup/debt areas are in focus.
>
> Now, glusterfs/github [1] lists ~50 issues as being targeted at 4.0, and
> among these about 2-4 are marked closed (or done).
>
> Ask 1: Request each of you to go through the issue list and coordinate with
> a maintainer, to mark each issue's milestone correctly (i.e., retain it in
> 4.0 or move it out) and also leave a comment on the issue about its
> readiness.
>
> Ask 2: If there are issues that you are working on and are not marked
> against the 4.0 milestone, please do the needful for the same.
>
> Ask 3: Please mail the devel list about features that are making it to 4.0,
> so that the project board can be correctly populated with the issues.
>
> Ask 4: If the 4.0 branching date was extended by another 4 weeks, would
> that enable you to finish additional features that are already marked for
> 4.0? This helps us move the needle on branching to help land the right set
> of features.
>
>
The main concern I have right now with the Dec 15th timeline is the rate at
which regression tests are failing. Also, holiday time towards the end of the
year may cause some delay. I am not proposing a change to the timeline yet,
but am flagging the possible delay early.

Regards,
Amar
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Unplanned Jenkins restart

2017-11-20 Thread Michael Scherer
On Monday, 20 November 2017 at 12:28 +0530, Nigel Babu wrote:
> I noticed that Jenkins wasn't loading up this morning. Further debugging
> showed a Java heap size problem. I tried to debug it, but eventually just
> restarted Jenkins. This means any running job or any job triggered was
> stopped. Please re-trigger your jobs.

So we had the same problem again (since 19:00 UTC, but I was out for the
night and came back at midnight UTC).

So I restarted Jenkins; please reschedule your jobs. I also added 2G of swap
space, as it can't really hurt.
-- 
Michael Scherer
Sysadmin, Community Infrastructure and Platform, OSAS



___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Release 4.0: Schedule and scope clarity (responses needed)

2017-11-20 Thread Shyam Ranganathan

Hi,

As this is a longish mail, there are a few asks below that I request
folks to focus on and answer.


4.0 is an STM (Short Term Maintenance) release; further, 4.1 is also
slated to be an STM release (although the web pages read differently and
will be corrected shortly). Finally, 4.2 would be the first LTM (Long
Term Maintenance) release in the 4.x release line for Gluster.


* Schedule *
The above also assumes that 4.0 will release 2 months after 3.13, which
puts 4.0 branching (also read as the feature freeze deadline) around
mid-December (4 weeks from now).


The 4.0/4.1/4.2 release calendar hence looks as follows:

- Release 4.0: (STM)
  - Feature freeze/branching: mid-December
  - Release date: Jan 31st, 2018
- Release 4.1: (STM)
  - Feature freeze/branching: mid-March
  - Release date: Apr 30th, 2018
- Release 4.2: (LTM, release 3.10 EOL'd)
  - Feature freeze/branching: mid-June
  - Release date: Jul 31st, 2018

* Scope *

The main focus in 4.0 is landing GlusterD2, and all efforts towards this 
take priority.


Further big features in 4.0 are around GFProxy, protocol layer changes, 
monitoring and usability changes, FUSE catchup, +1 scaling.


Also, some code cleanup/debt areas are in focus.

Now, glusterfs/github [1] lists ~50 issues as being targeted at 4.0, and
among these about 2-4 are marked closed (or done).


Ask 1: Request each of you to go through the issue list and coordinate
with a maintainer, to mark each issue's milestone correctly (i.e., retain
it in 4.0 or move it out) and also leave a comment on the issue about its
readiness.


Ask 2: If there are issues that you are working on and are not marked 
against the 4.0 milestone, please do the needful for the same.


Ask 3: Please mail the devel list about features that are making it to
4.0, so that the project board can be correctly populated with the issues.


Ask 4: If the 4.0 branching date was extended by another 4 weeks, would 
that enable you to finish additional features that are already marked 
for 4.0? This helps us move the needle on branching to help land the 
right set of features.


Thanks,
Shyam
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Release 3.13: Release notes (Please read and contribute)

2017-11-20 Thread Shyam Ranganathan

Hi,

3.13 RC0 is around the corner (possibly tomorrow). Towards this and the
final 3.13.0 release, I have been compiling the features that are a part of
3.13 and have also attempted to write out the release notes for the same [1].


Some features have data and others do not (either in the commit message
or in the GitHub issue), and it is increasingly difficult to write the
release notes by myself.


So here I am calling out the folks who have committed the following features,
to provide release notes as a patch to [1] and help close this activity out.


Please refer to older release notes for what data goes into the respective
sections [2]. Also, please provide CLI examples and/or command outputs where
required.


1) Addition of summary option to the heal info CLI (@karthik-us)
2) Support for max-port range in glusterd.vol (@atin)
3) Prevention of other processes accessing the mounted brick snapshots 
(@sunnykumar)

4) Ability to reserve backend storage space (@amarts)
5) List all the connected clients for a brick and also exported
bricks/snapshots from each brick process (@harigowtham)
6) Improved write performance with the Disperse xlator, by introducing
parallel writes to a file (@pranith/@xavi)

7) Disperse xlator now supports discard operations (@sunil)
8) Included details about memory pools in statedumps (@nixpanic)
9) Gluster APIs added to register callback functions for upcalls (@soumya)
10) Gluster API added with a glfs_mem_header for exported memory (@nixpanic)
11) Provided a new xlator to delay fops, to aid slow brick response 
simulation and debugging (@pranith)


Thanks,
Shyam

[1] gerrit link to release-notes: https://review.gluster.org/#/c/18815/

[2] Release 3.12.0 notes for reference: 
https://github.com/gluster/glusterfs/blob/release-3.12/doc/release-notes/3.12.0.md

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] IPC test is failing due to Segmentation Fault

2017-11-20 Thread Shyam Ranganathan

On 11/15/2017 10:47 AM, Shyam Ranganathan wrote:

On 11/14/2017 01:57 AM, Vaibhav Vaingankar wrote:

Hi

I am building GlusterFS from source on s390x. While testing, it is seen
that *features/ipc.t* is failing because of a segmentation fault. Here is
the bt log:
Please let me know if the issue is caused by a bug in the source or a bug
in one of the dependent programs.


This is a problem in master as well (same stack and all), when I tested 
it on a Fedora x86-64 machine.


But the IPC test has been deprecated, as the IPC FOP itself has been
deprecated (see [1]); hence this test is not run anymore.


There is still a crash when accessing gfapi via Python as stated below
(without the IPC code in place), so there is some bug either in the Python
code or elsewhere (that I am currently ill-equipped to debug).


This mail is informational, just closing the loop for myself.

I did some more digging (with John), as the Python code still crashed even
when I removed the call to glfs_ipc from it.


The original crash was due to a truncated fs pointer value being passed to
glfs_set_volfile_server from the Python program (the pointer being the return
value from glfs_new).
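
For context, and as my own reading of the mechanism rather than something
verified against the test: ctypes defaults the return type of a foreign
function to a C int, so unless glfs_new.restype is declared as c_void_p, a
64-bit glfs_t pointer loses its upper 32 bits on the way back into Python.
Whether the truncated value still happens to point at mapped memory would then
depend on where the allocation lands, which could explain why one Python build
crashes and another does not. A tiny sketch of the difference, assuming
libgfapi.so.0 is installed and "testvol" is a placeholder volume name:

import ctypes

api = ctypes.CDLL("libgfapi.so.0")

# Default restype is c_int: the glfs_t* from glfs_new() comes back truncated.
fs_truncated = api.glfs_new(b"testvol")

# With an explicit restype, the full 64-bit pointer value is preserved.
api.glfs_new.restype = ctypes.c_void_p
fs_full = api.glfs_new(b"testvol")

# (The two handles are intentionally leaked; this only illustrates the values.)
print(hex(fs_truncated & 0xffffffff), hex(fs_full))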


The issue seems to be reproducible with Python version 2.7.14, but not with
2.7.5 (which is what is present in CentOS 7, and possibly CentOS 6 as well,
which the regression test machines run).


The problem can be averted if we declare the argtypes (and restype) for the
various C calls that we make from the Python program; leaving them undeclared,
for whatever reason, works fine in 2.7.5 but not in 2.7.14.


IOW, adding the following lines moved the segmentation fault from
glfs_set_volfile_server to glfs_init:


api.glfs_ipc.restype = ctypes.c_int
api.glfs_new.restype = ctypes.c_void_p
api.glfs_set_volfile_server.argtypes = [ctypes.c_void_p, ctypes.c_char_p,
                                        ctypes.c_char_p, ctypes.c_int]


Applying this to all the gfapi calls we make would solve the problem when we
bring back IPC support and enable this test case.
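
For reference, a minimal sketch of what applying this across the calls could
look like (the library name libgfapi.so.0, the volume name, and the host are
assumptions for illustration, not values taken from the actual test):

import ctypes

api = ctypes.CDLL("libgfapi.so.0")

# Return types: declare pointer returns as c_void_p so they are not
# truncated to a C int on 64-bit hosts.
api.glfs_new.restype = ctypes.c_void_p
api.glfs_set_volfile_server.restype = ctypes.c_int
api.glfs_init.restype = ctypes.c_int
api.glfs_fini.restype = ctypes.c_int

# Argument types: the glfs_t handle must be marshalled as a pointer.
api.glfs_new.argtypes = [ctypes.c_char_p]
api.glfs_set_volfile_server.argtypes = [ctypes.c_void_p, ctypes.c_char_p,
                                        ctypes.c_char_p, ctypes.c_int]
api.glfs_init.argtypes = [ctypes.c_void_p]
api.glfs_fini.argtypes = [ctypes.c_void_p]

fs = api.glfs_new(b"testvol")
api.glfs_set_volfile_server(fs, b"tcp", b"localhost", 24007)
ret = api.glfs_init(fs)   # non-zero if the volfile server is unreachable
api.glfs_fini(fs)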




Request that you raise a bug for the same, and we can look at getting the
right eyes on it.



Waiting for a positive response.

(gdb) bt
#0 0x03fffd8a1904 in pub_glfs_set_volfile_server (fs=0x33efc0, 
transport=0x3fffdc20dc4 "tcp", host=0x3fffdc10524 "73da538ef028", 
port=) at glfs.c:422


Shyam

[1] GitHub issue dealing with the IPC FOP being disabled in gfapi:
https://github.com/gluster/glusterfs/issues/269

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel



[Gluster-devel] Coverity covscan for 2017-11-20-dbd94d5b (master branch)

2017-11-20 Thread staticanalysis
GlusterFS Coverity covscan results are available from
http://download.gluster.org/pub/gluster/glusterfs/static-analysis/master/glusterfs-coverity/2017-11-20-dbd94d5b
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Changes in handling logs from (centos) regressions and smoke

2017-11-20 Thread Nigel Babu
Hello folks,

We're making some changes in how we handle logs from CentOS regression and
smoke tests. Instead of having them available via HTTP access to the node
itself, they will be available via the Jenkins job as artifacts.

For example:
Smoke job: https://build.gluster.org/job/smoke/38523/console
Logs: https://build.gluster.org/job/smoke/38523/artifact/ (link available
from the main page)

We clear out regression logs every 30 days, so if you can see a regression
on build.gluster.org, the logs for it should be available. This reduces the
need for space or HTTP access on our nodes, and for a separate deletion
process.

We also archive builds and cores. These are still available the old-fashioned
way; however, I intend to change that in the next few weeks to centralize them
on a file server.

-- 
nigelb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel