Re: [Gluster-Maintainers] Build failed in Jenkins: regression-test-with-multiplex #1611

2020-02-17 Thread Ravishankar N



On 17/02/20 6:26 pm, sankarshan wrote:

Looking at https://fstat.gluster.org/summary I see quite a few tests
still failing - are these failures being looked into?

./tests/basic/fencing/afr-lock-heal-basic.t
./tests/basic/fencing/afr-lock-heal-advanced.t


I will take a look at these two.

-Ravi


./tests/bugs/glusterd/quorum-validation.t

[Multiple bits in-line]

On Mon, 17 Feb 2020 at 14:38, Deepshikha Khandelwal  wrote:


04:08:01 Skipping script  : #!/bin/bash
04:08:01
04:08:01 ARCHIVE_BASE="/archives"
04:08:01 ARCHIVED_LOGS="logs"
04:08:01 UNIQUE_ID="${JOB_NAME}-${BUILD_ID}"
04:08:01 SERVER=$(hostname)
04:08:01
04:08:01 filename="${ARCHIVED_LOGS}/glusterfs-logs-${UNIQUE_ID}.tgz"
04:08:01 sudo -E tar -czf "${ARCHIVE_BASE}/${filename}" /var/log/glusterfs /var/log/messages*;
04:08:01 echo "Logs archived in http://${SERVER}/${filename}"
04:08:01 sudo reboot


FYI, here we are skipping the post-build script! `echo "Logs archived in
http://${SERVER}/${filename}"` is a part of the script.


You can find the logs in Build artifacts 
https://build.gluster.org/job/regression-test-with-multiplex/1611/




Sunil, were you able to access the logs - please acknowledge on this list.


On Thu, Jan 16, 2020 at 2:24 PM Sunil Kumar Heggodu Gopala Acharya 
 wrote:

Hi,

Please have the log location fixed so that others can take a look into the
failures.

https://build.gluster.org/job/regression-test-with-multiplex/1611/consoleFull

21:10:48 echo "Logs archived in http://${SERVER}/${filename}"



On Thu, Jan 16, 2020 at 2:18 PM Sanju Rakonde  wrote:

The below glusterd test cases are constantly failing in brick-mux regression:
./tests/bugs/glusterd/bug-857330/normal.t
./tests/bugs/glusterd/bug-857330/xml.t
./tests/bugs/glusterd/quorum-validation.t

./tests/bugs/glusterd/bug-857330/normal.t and ./tests/bugs/glusterd/bug-857330/xml.t are
timing out after 200 seconds. I don't find any abnormality in the logs. Do we need to
increase the timeout(?) I'm unable to run these tests in my local setup as they always
fail with "ModuleNotFoundError: No module named 'xattr'". Is the same
happening in CI as well?


I am not a big fan of arbitrarily increasing the time. Do we know from
the logs as to why it might need more than 200s (that's 3+ mins -
quite a bit of time)?


Also, we don't print the output of "prove -vf " when the test gets
timed out. It would be great if we printed the output; it would help us debug and check which
step took more time.
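
As a rough illustration of what that could look like (a sketch, not the
actual CI change; the test path and the 200-second figure are simply taken
from this thread), one could wrap the run in coreutils' timeout and tee the
prove output so it survives a hang:

    # Capture the prove output even when the test exceeds its time budget.
    t=./tests/bugs/glusterd/bug-857330/normal.t
    timeout 200 prove -vf "$t" 2>&1 | tee "/var/log/prove-$(basename "$t").log"
    echo "prove exit status: ${PIPESTATUS[0]}"   # 124 means timeout killed it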


Was there a patch merged to enable this?


./tests/bugs/glusterd/quorum-validation.t is failing because of the regression caused by 
https://review.gluster.org/#/c/glusterfs/+/21651/. Rafi is looking into this issue. To 
explain the issue in brief, "after a reboot, glusterd is spawning multiple brick 
processes for a single brick instance and volume status shows the brick as offline".
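
A quick way to check for that symptom on an affected node (a sketch; the
volume name 'patchy' and the brick path are placeholders for whatever the
.t actually creates):

    # More than one glusterfsd per brick path indicates the duplicate spawn.
    gluster volume status patchy
    pgrep -af glusterfsd | grep -c '/d/backends/patchy1'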


Was this concept not obvious during the review of the patch?





___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-devel] Modifying gluster's logging mechanism

2019-11-22 Thread Ravishankar N


On 22/11/19 3:13 pm, Barak Sason Rofman wrote:
This is actually one of the main reasons I wanted to bring this up for 
discussion - will it be fine with the community to run a dedicated 
tool to reorder the logs offline?


I think it is a bad idea to log without ordering and to rely later on an
external tool to sort it. This is definitely not something I would want
to do while doing test and development or debugging field issues.
Structured logging is definitely useful for gathering statistics and
post-processing to make reports and charts and what not, but from a
debugging point of view, maintaining causality of messages and working
with command-line text editors and tools on log files is important IMO.


I had similar concerns when the brick multiplexing feature was developed,
where a single log file was used for logging all multiplexed bricks'
logs. It is so much extra work to weed out the messages of 99 processes to read
the log of the 1 process you are interested in.
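
For what it is worth, the weeding out I end up doing looks roughly like this
(a sketch; it assumes the multiplexed brick log sits under
/var/log/glusterfs/bricks/ and that each line carries the per-volume
translator name such as '0-myvol-posix', which is how I pick out the one
brick I care about):

    # Keep only the messages belonging to volume 'myvol' from a log file
    # that is shared by all the multiplexed bricks.
    grep -E ' [0-9]+-myvol-' /var/log/glusterfs/bricks/*.log > /tmp/myvol-brick.log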


Regards,
Ravi

___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-devel] Release 5: Branched and further dates

2018-10-08 Thread Ravishankar N




On 10/05/2018 08:29 PM, Shyam Ranganathan wrote:

On 10/04/2018 11:33 AM, Shyam Ranganathan wrote:

On 09/13/2018 11:10 AM, Shyam Ranganathan wrote:

RC1 would be around 24th of Sep. with final release tagging around 1st
of Oct.

RC1 now stands to be tagged tomorrow, and patches that are being
targeted for a back port include,

We still are awaiting release notes (other than the bugs section) to be
closed.

There is one new bug that needs attention from the replicate team.
https://bugzilla.redhat.com/show_bug.cgi?id=1636502

The above looks important to me to be fixed before the release, @ravi or
@pranith can you take a look?

I've attempted a fix @ https://review.gluster.org/#/c/glusterfs/+/21366/
-Ravi



1) https://review.gluster.org/c/glusterfs/+/21314 (snapshot volfile in
mux cases)

@RaBhat working on this.

Done


2) Py3 corrections in master

@Kotresh are all changes made to master backported to release-5 (may not
be merged, but looking at if they are backported and ready for merge)?

Done, release notes amend pending


3) Release notes review and updates with GD2 content pending

@Kaushal/GD2 team can we get the updates as required?
https://review.gluster.org/c/glusterfs/+/21303

Still awaiting this.


4) This bug [2] was filed when we released 4.0.

The issue has not bitten us in 4.0 or in 4.1 (yet!) (i.e. the options
missing and hence post-upgrade clients failing the mount). This is
possibly the last chance to fix it.

Glusterd and protocol maintainers, can you chime in, if this bug needs
to be and can be fixed? (thanks to @anoopcs for pointing it out to me)

Release notes to be corrected to call this out.


The tracker bug [1] does not have any other blockers against it, hence
assuming we are not tracking/waiting on anything other than the set above.

Thanks,
Shyam

[1] Tracker: https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-5.0
[2] Potential upgrade bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1540659
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers



___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-devel] Clang-Formatter for GlusterFS.

2018-09-18 Thread Ravishankar N




On 09/18/2018 02:02 PM, Hari Gowtham wrote:

I see that the procedure mentioned in the coding standard document is buggy.

git show --pretty="format:" --name-only | grep -v "contrib/" | egrep
"*\.[ch]$" | xargs clang-format -i

The above command edited the whole file, which is not supposed to happen.

It works fine on Fedora 28 (clang version 6.0.1). I had the same problem
you faced on Fedora 26 though, presumably because of the older clang
version.

-Ravi



+1 for the readability of the code having been affected.
On Mon, Sep 17, 2018 at 10:45 AM Amar Tumballi  wrote:



On Mon, Sep 17, 2018 at 10:00 AM, Ravishankar N  wrote:


On 09/13/2018 03:34 PM, Niels de Vos wrote:

On Thu, Sep 13, 2018 at 02:25:22PM +0530, Ravishankar N wrote:
...

What rules does clang impose on function/argument wrapping and alignment? I
somehow found the new code wrapping to be random and highly unreadable. An
example of 'before and after' the clang format patches went in:
https://paste.fedoraproject.org/paste/dC~aRCzYgliqucGYIzxPrQ Wondering if
this is just me or is it some problem of spurious clang fixes.

I agree that this example looks pretty ugly. Looking at random changes
to the code where I am most active does not show this awkward
formatting.


So one of my recent patches is failing smoke and clang-format is insisting 
[https://build.gluster.org/job/clang-format/22/console] on wrapping function 
arguments in an unsightly manner. Should I resend my patch with this new style 
of wrapping?


I would say yes! We can get better by changing the clang-format options once
we find better ones. But for now, just following the style suggested
by the clang-format job is good IMO.
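
For reference, a quick way to see which wrapping-related knobs the
checked-in .clang-format actually resolves to (a sketch; the option names
are standard clang-format keys, but the exact set reported depends on the
clang-format version installed):

    cd glusterfs-git-repo/
    clang-format -style=file -dump-config | \
        grep -E 'AlignAfterOpenBracket|BinPack(Arguments|Parameters)|AllowShort'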

-Amar


Regards,
Ravi




However, I was expecting to see enforcing of the
single-line-if-statements like this (and while/for/.. loops):

  if (need_to_do_it) {
   do_it();
  }

instead of

  if (need_to_do_it)
   do_it();

At least the conversion did not take care of this. But, maybe I'm wrong
as I can not find the discussion in https://bugzilla.redhat.com/1564149
about this. Does someone remember what was decided in the end?

Thanks,
Niels





--
Amar Tumballi (amarts)
___
Gluster-devel mailing list
gluster-de...@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel





___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-devel] Clang-Formatter for GlusterFS.

2018-09-16 Thread Ravishankar N



On 09/13/2018 03:34 PM, Niels de Vos wrote:

On Thu, Sep 13, 2018 at 02:25:22PM +0530, Ravishankar N wrote:
...

What rules does clang impose on function/argument wrapping and alignment? I
somehow found the new code wrapping to be random and highly unreadable. An
example of 'before and after' the clang format patches went in:
https://paste.fedoraproject.org/paste/dC~aRCzYgliqucGYIzxPrQ Wondering if
this is just me or is it some problem of spurious clang fixes.

I agree that this example looks pretty ugly. Looking at random changes
to the code where I am most active does not show this awkward
formatting.


So one of my recent patches is failing smoke and clang-format is 
insisting [https://build.gluster.org/job/clang-format/22/console] on 
wrapping function arguments in an unsightly manner. Should I resend my 
patch with this new style of wrapping?


Regards,
Ravi




However, I was expecting to see enforcing of the
single-line-if-statements like this (and while/for/.. loops):

 if (need_to_do_it) {
  do_it();
 }

instead of

 if (need_to_do_it)
  do_it();

At least the conversion did not take care of this. But, maybe I'm wrong
as I can not find the discussion in https://bugzilla.redhat.com/1564149
about this. Does someone remember what was decided in the end?

Thanks,
Niels


___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-devel] Clang-Formatter for GlusterFS.

2018-09-13 Thread Ravishankar N


On 09/12/2018 07:31 PM, Amar Tumballi wrote:

Top posting:

All is well at the tip of glusterfs master branch now.

We will post a postmortem report of events and what went wrong in this 
activity, later.


With this, Shyam, you can go ahead with release-v5.0 branching.

-Amar

On Wed, Sep 12, 2018 at 6:21 PM, Amar Tumballi wrote:




On Wed, Sep 12, 2018 at 5:36 PM, Amar Tumballi <atumb...@redhat.com> wrote:



On Mon, Aug 27, 2018 at 8:47 AM, Amar Tumballi <atumb...@redhat.com> wrote:



On Wed, Aug 22, 2018 at 12:35 PM, Amar Tumballi <atumb...@redhat.com> wrote:

Hi All,

Below is an update about the project’s move towards
using clang-format for imposing a few coding standards.

The Gluster project has, since inception, followed a certain
basic coding standard, which was (at that time) easy
to follow and easy to review.

Over time, with the inclusion of many more developers
and work with other communities whose coding standards
differ across projects, we got different styles of code
into the source. After 11+ years, now is the time to
depend on a tool for this more than ever, and hence we
have decided to depend on clang-format.

Below are some highlights of this activity. We expect
each of you to actively help us in this move, so it is
smooth for all of us.

  * We kickstarted this activity sometime around April 2018.

  * A repo was created for trying out the options and
    validating the code (Link to Repo).

  * Now, with the latest .clang-format file, we have made
    the changes across the whole GlusterFS codebase (The
    change here).
  * We will be running regressions with the changes,
    multiple times, so we don’t miss something getting in
    without our notice.
  * As it is a very big change (almost 6 lakh lines
    changed), we will not put this commit through
    gerrit, but push it directly to the repo.
  * Once this patch gets in (ETA: 28th August), all
    the pending patches need to go through a rebase.


All, as Shyam has proposed to change the branch-out date
for release-5.0 to Sept 10th [1], we are now targeting
Sept 7th for this activity.


We are finally Done!

We delayed it by another 4 days to make sure we pass the
regression properly with the clang changes, and that it
doesn't break anything.

Also note, from now on, it is always better to format the changes
with the below command before committing.

 sh$ cd glusterfs-git-repo/
 sh$ clang-format -i $(list_of_files_changed)
 sh$ git commit # and usual steps to publish your changes.

Also note, all the changes which were posted earlier need
to be rebased with clang-format too.

A quick and dirty way to get your changes rebased, in
case your patch is significantly large, is to apply
the patches on top of the commit before the clang changes,
copy the files over, run clang-format -i on them, and
check the diff (a rough sketch follows below). As no changes
other than coding-style changes happened, this should work fine.
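
Something along these lines, where 'my-feature', the patch file name and
<clang-commit> are all placeholders for your own branch, patch and the
clang-format mega-commit (a rough sketch only):

    # 1. Apply your patch on top of the commit just before the clang change.
    git checkout -b my-feature-old <clang-commit>^
    git am 0001-my-feature.patch

    # 2. Copy the files you touched over onto the clang-formatted tree.
    files=$(git diff --name-only HEAD^ -- '*.c' '*.h')
    git checkout -b my-feature-new <clang-commit>
    git checkout my-feature-old -- $files

    # 3. Re-run clang-format on them and inspect the resulting diff.
    clang-format -i $files
    git diff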

Please post if you have any concerns.




What rules does clang impose on function/argument wrapping and 
alignment? I somehow found the new code wrapping to be random and highly 
unreadable. An example of 'before and after' the clang format patches 
went in: https://paste.fedoraproject.org/paste/dC~aRCzYgliqucGYIzxPrQ 
Wondering if this is just me or is it some problem of spurious clang fixes.


Regards,
Ravi





Noticed some glitches! Stand with us till we handle the situation...

In the meantime, I found that the below command for git am works better for
applying smaller patches:

 $ git am --ignore-whitespace --ignore-space-change --reject 0001-patch
-Amar

Regards,
Amar

[1] -

https://lists.gluster.org/pipermail/gluster-devel/2018-August/055308.html



What are the next steps:

  * The patch of

Re: [Gluster-Maintainers] [Gluster-devel] Master branch lock down: RCA for tests (remove-brick-testcases.t)

2018-08-13 Thread Ravishankar N



On 08/13/2018 06:12 AM, Shyam Ranganathan wrote:

As a means of keeping the focus going and squashing the remaining tests
that were failing sporadically, request each test/component owner to,

- respond to this mail changing the subject (testname.t) to the test
name that they are responding to (adding more than one in case they have
the same RCA)
- with the current RCA and status of the same

List of tests and current owners as per the spreadsheet that we were
tracking are:

TBD

./tests/bugs/glusterd/remove-brick-testcases.t  TBD
In this case, the .t passed but self-heal-daemon (which btw does not 
have any role in this test because there is no I/O or heals in this .t) 
has crashed with the following bt:


Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x7ff8c6bc0b4f in _IO_cleanup () from ./lib64/libc.so.6
[Current thread is 1 (LWP 17530)]
(gdb)
(gdb) bt
#0  0x7ff8c6bc0b4f in _IO_cleanup () from ./lib64/libc.so.6
#1  0x7ff8c6b7cb8b in __run_exit_handlers () from ./lib64/libc.so.6
#2  0x7ff8c6b7cc27 in exit () from ./lib64/libc.so.6
#3  0x0040b14d in cleanup_and_exit (signum=15) at glusterfsd.c:1570
#4  0x0040de71 in glusterfs_sigwaiter (arg=0x7ffd5f270d20) at 
glusterfsd.c:2332

#5  0x7ff8c757ce25 in start_thread () from ./lib64/libpthread.so.0
#6  0x7ff8c6c41bad in clone () from ./lib64/libc.so.6

I am not able to find out the reason for the crash. Any pointers are
appreciated. The regression run/core can be found at
https://build.gluster.org/job/line-coverage/432/consoleFull .
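
In case it helps whoever picks this up, this is roughly how I pull the
full picture out of such a core (a sketch; the binary and core paths are
placeholders for whatever the regression job archived):

    # Dump backtraces of every thread from the core, non-interactively.
    gdb /build/install/sbin/glusterfs /path/to/core.17530 \
        -batch -ex 'set pagination off' -ex 'thread apply all bt full'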


Thanks,
Ravi

___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-devel] Master branch lock down status (Fri, August 9th)

2018-08-10 Thread Ravishankar N




On 08/11/2018 07:29 AM, Shyam Ranganathan wrote:

./tests/bugs/replicate/bug-1408712.t (one retry)
I'll take a look at this. But it looks like archiving the artifacts 
(logs) for this run 
(https://build.gluster.org/job/regression-on-demand-full-run/44/consoleFull) 
was a failure.

Thanks,
Ravi
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-devel] Master branch lock down status

2018-08-08 Thread Ravishankar N



On 08/08/2018 05:07 AM, Shyam Ranganathan wrote:

5) Current test failures
We still have the following tests failing and some without any RCA or
attention, (If something is incorrect, write back).

./tests/basic/afr/add-brick-self-heal.t (needs attention)
From the runs captured at https://review.gluster.org/#/c/20637/ , I saw 
that the latest runs where this particular .t failed were at 
https://build.gluster.org/job/line-coverage/415 and 
https://build.gluster.org/job/line-coverage/421/.
In both of these runs, there are no gluster 'regression' logs available 
at https://build.gluster.org/job/line-coverage//artifact. 
I have raised BZ 1613721 for it.


Also, Shyam was saying that in case of retries, the old (failure) logs 
get overwritten by the retries which are successful. Can we disable 
re-trying the .ts when they fail just for this lock down period alone so 
that we do have the logs?


Regards,
Ravi

___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-devel] Release 4.1: LTM release targeted for end of May

2018-03-21 Thread Ravishankar N



On 03/20/2018 07:07 PM, Shyam Ranganathan wrote:

On 03/12/2018 09:37 PM, Shyam Ranganathan wrote:

Hi,

As we wind down on 4.0 activities (waiting on docs to hit the site, and
packages to be available in CentOS repositories before announcing the
release), it is time to start preparing for the 4.1 release.

4.1 is where we have GD2 fully functional and shipping with migration
tools to aid Glusterd to GlusterD2 migrations.

Other than the above, this is a call out for features that are in the
works for 4.1. Please *post* the github issues to the *devel lists* that
you would like as a part of 4.1, and also mention the current state of
development.

Thanks for those who responded. The github lane and milestones for the
said features are updated, request those who mentioned issues being
tracked for 4.1 check that these are reflected in the project lane [1].

I have few requests as follows that if picked up would be a good thing
to achieve by 4.1, volunteers welcome!

- Issue #224: Improve SOS report plugin maintenance
   - https://github.com/gluster/glusterfs/issues/224

- Issue #259: Compilation warnings with gcc 7.x
   - https://github.com/gluster/glusterfs/issues/259

- Issue #411: Ensure python3 compatibility across code base
   - https://github.com/gluster/glusterfs/issues/411

- NFS Ganesha HA (storhaug)
   - Does this need an issue for Gluster releases to track? (maybe packaging)

I will close the call for features by Monday 26th Mar, 2018. Post this,
I would request that features that need to make it into 4.1 be raised as
exceptions to the devel and maintainers list for evaluation.


Hi Shyam,

I want to add https://github.com/gluster/glusterfs/issues/363 also for 
4.1. It is not a new feature but rather an enhancement to a volume 
option in AFR. I don't think it can qualify as a bug fix, so mentioning 
it here just in case it needs to be tracked too. The (only) patch is 
undergoing review cycles.


Regards,
Ravi



Further, as we hit end of March, we would make it mandatory for features
to have required spec and doc labels, before the code is merged, so
factor in efforts for the same if not already done.

Current 4.1 project release lane is empty! I cleaned it up, because I
want to hear from all as to what content to add, than add things marked
with the 4.1 milestone by default.

[1] 4.1 Release lane:
https://github.com/gluster/glusterfs/projects/1#column-1075416


Thanks,
Shyam
P.S: Also any volunteers to shadow/participate/run 4.1 as a release owner?

Calling this out again!


___
Gluster-devel mailing list
gluster-de...@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
gluster-de...@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


___
maintainers mailing list
maintainers@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-devel] Release 4.1: LTM release targeted for end of May

2018-03-15 Thread Ravishankar N



On 03/13/2018 07:07 AM, Shyam Ranganathan wrote:

Hi,

As we wind down on 4.0 activities (waiting on docs to hit the site, and
packages to be available in CentOS repositories before announcing the
release), it is time to start preparing for the 4.1 release.

4.1 is where we have GD2 fully functional and shipping with migration
tools to aid Glusterd to GlusterD2 migrations.

Other than the above, this is a call out for features that are in the
works for 4.1. Please *post* the github issues to the *devel lists* that
you would like as a part of 4.1, and also mention the current state of
development.

Hi,

We are targeting the 'thin-arbiter' feature for 4.1:
https://github.com/gluster/glusterfs/issues/352

Status: The high-level design is there in the github issue.
The thin-arbiter xlator patch https://review.gluster.org/#/c/19545/ is
undergoing reviews.
Implementation details of the AFR and glusterd(2) related changes are being
discussed. Will make sure all patches are posted against issue 352.


Thanks,
Ravi



Further, as we hit end of March, we would make it mandatory for features
to have required spec and doc labels, before the code is merged, so
factor in efforts for the same if not already done.

Current 4.1 project release lane is empty! I cleaned it up, because I
want to hear from all as to what content to add, than add things marked
with the 4.1 milestone by default.

Thanks,
Shyam
P.S: Also any volunteers to shadow/participate/run 4.1 as a release owner?
___
Gluster-devel mailing list
gluster-de...@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


___
maintainers mailing list
maintainers@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Release 4.0: Unable to complete rolling upgrade tests

2018-03-01 Thread Ravishankar N



On 03/02/2018 11:04 AM, Anoop C S wrote:

On Fri, 2018-03-02 at 10:11 +0530, Ravishankar N wrote:

+ Anoop.

It looks like clients on the old (3.12) nodes are not able to talk to
the upgraded (4.0) node. I see messages like these on the old clients:

   [2018-03-02 03:49:13.483458] W [MSGID: 114007]
[client-handshake.c:1197:client_setvolume_cbk] 0-testvol-client-2:
failed to find key 'clnt-lk-version' in the options

Seems like we need to set clnt-lk-version from the server side too, similar to what
we did for the client via
https://review.gluster.org/#/c/19560/. Can you try with the attached patch?

Thanks, self-heal works with this. You might want to get it merged in
4.0 ASAP.


I still got the mkdir error on a plain distribute volume that I referred
to in the other email in this thread. For anyone interested in trying
it out, the steps are (a minimal command sketch follows the list):

- Create a 2-node 2x1 plain distribute volume on 3.13 and fuse-mount it on node-1
- Upgrade the 2nd node to 4.0 and, once it is up and running,
- Perform mkdir from the mount on node-1 --> this returns EIO
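
Roughly, assuming two nodes named node-1 and node-2, brick paths
/bricks/b1 and /bricks/b2, and a volume called testvol (all placeholders):

    # On node-1, while both nodes are still on 3.13:
    gluster peer probe node-2
    gluster volume create testvol node-1:/bricks/b1 node-2:/bricks/b2 force
    gluster volume start testvol
    mount -t glusterfs node-1:/testvol /mnt/testvol

    # After upgrading node-2 to 4.0 and its brick is back online:
    mkdir /mnt/testvol/newdir    # this is the mkdir that returns EIO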

Thanks
Ravi
PS: Feeling a bit under the weather, so I might not be online today again.




Is there something more to be done on BZ 1544366?

-Ravi
On 03/02/2018 08:44 AM, Ravishankar N wrote:

On 03/02/2018 07:26 AM, Shyam Ranganathan wrote:

Hi Pranith/Ravi,

So, to keep a long story short, post upgrading 1 node in a 3 node 3.13
cluster, self-heal is not able to catch the heal backlog and this is a
very simple synthetic test anyway, but the end result is that upgrade
testing is failing.

Let me try this now and get back. I had done some thing similar when
testing the FIPS patch and the rolling upgrade had worked.
Thanks,
Ravi

Here are the details,

- Using
https://hackmd.io/GYIwTADCDsDMCGBaArAUxAY0QFhBAbIgJwCMySIwJmAJvGMBvNEA#
I setup 3 server containers to install 3.13 first as follows (within the
containers)

(inside the 3 server containers)
yum -y update; yum -y install centos-release-gluster313; yum install
glusterfs-server; glusterd

(inside centos-glfs-server1)
gluster peer probe centos-glfs-server2
gluster peer probe centos-glfs-server3
gluster peer status
gluster v create patchy replica 3 centos-glfs-server1:/d/brick1
centos-glfs-server2:/d/brick2 centos-glfs-server3:/d/brick3
centos-glfs-server1:/d/brick4 centos-glfs-server2:/d/brick5
centos-glfs-server3:/d/brick6 force
gluster v start patchy
gluster v status

Create a client container as per the document above, and mount the above
volume and create 1 file, 1 directory and a file within that directory.

Now we start the upgrade process (as laid out for 3.13 here
http://docs.gluster.org/en/latest/Upgrade-Guide/upgrade_to_3.13/ ):
- killall glusterfs glusterfsd glusterd
- yum install
http://cbs.centos.org/kojifiles/work/tasks/1548/311548/centos-release-gluster40-0.9-1.el7.centos.x86_64.rpm

- yum upgrade --enablerepo=centos-gluster40-test glusterfs-server

< Go back to the client and edit the contents of one of the files and
change the permissions of a directory, so that there are things to heal
when we bring up the newly upgraded server>

- gluster --version
- glusterd
- gluster v status
- gluster v heal patchy

The above starts failing as follows,
[root@centos-glfs-server1 /]# gluster v heal patchy
Launching heal operation to perform index self heal on volume patchy has
been unsuccessful:
Commit failed on centos-glfs-server2.glfstest20. Please check log file
for details.
Commit failed on centos-glfs-server3. Please check log file for details.

  From here, if further files or directories are created from the client,
they just get added to the heal backlog, and heal does not catchup.

As is obvious, I cannot proceed, as the upgrade procedure is broken. The
issue itself may not be selfheal deamon, but something around
connections, but as the process fails here, looking to you guys to
unblock this as soon as possible, as we are already running a day's slip
in the release.

Thanks,
Shyam


___
maintainers mailing list
maintainers@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Release 4.0: Unable to complete rolling upgrade tests

2018-03-01 Thread Ravishankar N


On 03/02/2018 10:11 AM, Ravishankar N wrote:

+ Anoop.

It looks like clients on the old (3.12) nodes are not able to talk to 
the upgraded (4.0) node. I see messages like these on the old clients:


 [2018-03-02 03:49:13.483458] W [MSGID: 114007] 
[client-handshake.c:1197:client_setvolume_cbk] 0-testvol-client-2: 
failed to find key 'clnt-lk-version' in the options


I see this in a 2x1 plain distribute also. I see ENOTCONN for the 
upgraded brick on the old client:


[2018-03-02 04:58:54.559446] E [MSGID: 114058] 
[client-handshake.c:1571:client_query_portmap_cbk] 0-testvol-client-1: 
failed to get the port number for remote subvolume. Please run 'gluster 
volume status' on server to see if brick process is running.
[2018-03-02 04:58:54.559618] I [MSGID: 114018] 
[client.c:2285:client_rpc_notify] 0-testvol-client-1: disconnected from 
testvol-client-1. Client process will keep trying to connect to glusterd 
until brick's port is available
[2018-03-02 04:58:56.973199] I [rpc-clnt.c:1994:rpc_clnt_reconfig] 
0-testvol-client-1: changing port to 49152 (from 0)
[2018-03-02 04:58:56.975844] I [MSGID: 114057] 
[client-handshake.c:1484:select_server_supported_programs] 
0-testvol-client-1: Using Program GlusterFS 3.3, Num (1298437), Version 
(330)
[2018-03-02 04:58:56.978114] W [MSGID: 114007] 
[client-handshake.c:1197:client_setvolume_cbk] 0-testvol-client-1: 
failed to find key 'clnt-lk-version' in the options
[2018-03-02 04:58:46.618036] E [MSGID: 114031] 
[client-rpc-fops.c:2768:client3_3_opendir_cbk] 0-testvol-client-1: 
remote operation failed. Path: / (----0001) 
[Transport endpoint is not connected]
The message "W [MSGID: 114031] 
[client-rpc-fops.c:2577:client3_3_readdirp_cbk] 0-testvol-client-1: 
remote operation failed [Transport endpoint is not connected]" repeated 
3 times between [2018-03-02 04:58:46.609529] and [2018-03-02 
04:58:46.618683]


Also, mkdir fails on the old mount with EIO, though physically 
succeeding on both bricks. Can the rpc folks offer a helping hand?


-Ravi

Is there something more to be done on BZ 1544366?

-Ravi
On 03/02/2018 08:44 AM, Ravishankar N wrote:


On 03/02/2018 07:26 AM, Shyam Ranganathan wrote:

Hi Pranith/Ravi,

So, to keep a long story short, post upgrading 1 node in a 3 node 3.13
cluster, self-heal is not able to catch the heal backlog and this is a
very simple synthetic test anyway, but the end result is that upgrade
testing is failing.


Let me try this now and get back. I had done some thing similar when 
testing the FIPS patch and the rolling upgrade had worked.

Thanks,
Ravi


Here are the details,

- Using
https://hackmd.io/GYIwTADCDsDMCGBaArAUxAY0QFhBAbIgJwCMySIwJmAJvGMBvNEA#
I setup 3 server containers to install 3.13 first as follows (within the
containers)

(inside the 3 server containers)
yum -y update; yum -y install centos-release-gluster313; yum install
glusterfs-server; glusterd

(inside centos-glfs-server1)
gluster peer probe centos-glfs-server2
gluster peer probe centos-glfs-server3
gluster peer status
gluster v create patchy replica 3 centos-glfs-server1:/d/brick1
centos-glfs-server2:/d/brick2 centos-glfs-server3:/d/brick3
centos-glfs-server1:/d/brick4 centos-glfs-server2:/d/brick5
centos-glfs-server3:/d/brick6 force
gluster v start patchy
gluster v status

Create a client container as per the document above, and mount the above
volume and create 1 file, 1 directory and a file within that directory.

Now we start the upgrade process (as laid out for 3.13 here
http://docs.gluster.org/en/latest/Upgrade-Guide/upgrade_to_3.13/ ):
- killall glusterfs glusterfsd glusterd
- yum install
http://cbs.centos.org/kojifiles/work/tasks/1548/311548/centos-release-gluster40-0.9-1.el7.centos.x86_64.rpm 


- yum upgrade --enablerepo=centos-gluster40-test glusterfs-server

< Go back to the client and edit the contents of one of the files and
change the permissions of a directory, so that there are things to heal
when we bring up the newly upgraded server>

- gluster --version
- glusterd
- gluster v status
- gluster v heal patchy

The above starts failing as follows,
[root@centos-glfs-server1 /]# gluster v heal patchy
Launching heal operation to perform index self heal on volume patchy has
been unsuccessful:
Commit failed on centos-glfs-server2.glfstest20. Please check log file
for details.
Commit failed on centos-glfs-server3. Please check log file for 
details.


 From here, if further files or directories are created from the client,
they just get added to the heal backlog, and heal does not catchup.

As is obvious, I cannot proceed, as the upgrade procedure is broken. The
issue itself may not be selfheal deamon, but something around
connections, but as the process fails here, looking to you guys to
unblock this as soon as possible, as we are already running a day's slip
in the release.

Thanks,
Shyam






___
maintainers mailing list
maintainers@

Re: [Gluster-Maintainers] Release 4.0: Unable to complete rolling upgrade tests

2018-03-01 Thread Ravishankar N

+ Anoop.

It looks like clients on the old (3.12) nodes are not able to talk to 
the upgraded (4.0) node. I see messages like these on the old clients:


 [2018-03-02 03:49:13.483458] W [MSGID: 114007] 
[client-handshake.c:1197:client_setvolume_cbk] 0-testvol-client-2: 
failed to find key 'clnt-lk-version' in the options


Is there something more to be done on BZ 1544366?

-Ravi
On 03/02/2018 08:44 AM, Ravishankar N wrote:


On 03/02/2018 07:26 AM, Shyam Ranganathan wrote:

Hi Pranith/Ravi,

So, to keep a long story short, post upgrading 1 node in a 3 node 3.13
cluster, self-heal is not able to catch the heal backlog and this is a
very simple synthetic test anyway, but the end result is that upgrade
testing is failing.


Let me try this now and get back. I had done some thing similar when 
testing the FIPS patch and the rolling upgrade had worked.

Thanks,
Ravi


Here are the details,

- Using
https://hackmd.io/GYIwTADCDsDMCGBaArAUxAY0QFhBAbIgJwCMySIwJmAJvGMBvNEA#
I setup 3 server containers to install 3.13 first as follows (within the
containers)

(inside the 3 server containers)
yum -y update; yum -y install centos-release-gluster313; yum install
glusterfs-server; glusterd

(inside centos-glfs-server1)
gluster peer probe centos-glfs-server2
gluster peer probe centos-glfs-server3
gluster peer status
gluster v create patchy replica 3 centos-glfs-server1:/d/brick1
centos-glfs-server2:/d/brick2 centos-glfs-server3:/d/brick3
centos-glfs-server1:/d/brick4 centos-glfs-server2:/d/brick5
centos-glfs-server3:/d/brick6 force
gluster v start patchy
gluster v status

Create a client container as per the document above, and mount the above
volume and create 1 file, 1 directory and a file within that directory.

Now we start the upgrade process (as laid out for 3.13 here
http://docs.gluster.org/en/latest/Upgrade-Guide/upgrade_to_3.13/ ):
- killall glusterfs glusterfsd glusterd
- yum install
http://cbs.centos.org/kojifiles/work/tasks/1548/311548/centos-release-gluster40-0.9-1.el7.centos.x86_64.rpm 


- yum upgrade --enablerepo=centos-gluster40-test glusterfs-server

< Go back to the client and edit the contents of one of the files and
change the permissions of a directory, so that there are things to heal
when we bring up the newly upgraded server>

- gluster --version
- glusterd
- gluster v status
- gluster v heal patchy

The above starts failing as follows,
[root@centos-glfs-server1 /]# gluster v heal patchy
Launching heal operation to perform index self heal on volume patchy has
been unsuccessful:
Commit failed on centos-glfs-server2.glfstest20. Please check log file
for details.
Commit failed on centos-glfs-server3. Please check log file for details.

 From here, if further files or directories are created from the client,
they just get added to the heal backlog, and heal does not catchup.

As is obvious, I cannot proceed, as the upgrade procedure is broken. The
issue itself may not be selfheal deamon, but something around
connections, but as the process fails here, looking to you guys to
unblock this as soon as possible, as we are already running a day's slip
in the release.

Thanks,
Shyam




___
maintainers mailing list
maintainers@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Release 4.0: Unable to complete rolling upgrade tests

2018-03-01 Thread Ravishankar N


On 03/02/2018 07:26 AM, Shyam Ranganathan wrote:

Hi Pranith/Ravi,

So, to keep a long story short, post upgrading 1 node in a 3 node 3.13
cluster, self-heal is not able to catch the heal backlog and this is a
very simple synthetic test anyway, but the end result is that upgrade
testing is failing.


Let me try this now and get back. I had done some thing similar when 
testing the FIPS patch and the rolling upgrade had worked.

Thanks,
Ravi


Here are the details,

- Using
https://hackmd.io/GYIwTADCDsDMCGBaArAUxAY0QFhBAbIgJwCMySIwJmAJvGMBvNEA#
I setup 3 server containers to install 3.13 first as follows (within the
containers)

(inside the 3 server containers)
yum -y update; yum -y install centos-release-gluster313; yum install
glusterfs-server; glusterd

(inside centos-glfs-server1)
gluster peer probe centos-glfs-server2
gluster peer probe centos-glfs-server3
gluster peer status
gluster v create patchy replica 3 centos-glfs-server1:/d/brick1
centos-glfs-server2:/d/brick2 centos-glfs-server3:/d/brick3
centos-glfs-server1:/d/brick4 centos-glfs-server2:/d/brick5
centos-glfs-server3:/d/brick6 force
gluster v start patchy
gluster v status

Create a client container as per the document above, and mount the above
volume and create 1 file, 1 directory and a file within that directory.

Now we start the upgrade process (as laid out for 3.13 here
http://docs.gluster.org/en/latest/Upgrade-Guide/upgrade_to_3.13/ ):
- killall glusterfs glusterfsd glusterd
- yum install
http://cbs.centos.org/kojifiles/work/tasks/1548/311548/centos-release-gluster40-0.9-1.el7.centos.x86_64.rpm
- yum upgrade --enablerepo=centos-gluster40-test glusterfs-server

< Go back to the client and edit the contents of one of the files and
change the permissions of a directory, so that there are things to heal
when we bring up the newly upgraded server>

- gluster --version
- glusterd
- gluster v status
- gluster v heal patchy

The above starts failing as follows,
[root@centos-glfs-server1 /]# gluster v heal patchy
Launching heal operation to perform index self heal on volume patchy has
been unsuccessful:
Commit failed on centos-glfs-server2.glfstest20. Please check log file
for details.
Commit failed on centos-glfs-server3. Please check log file for details.

 From here, if further files or directories are created from the client,
they just get added to the heal backlog, and heal does not catchup.

As is obvious, I cannot proceed, as the upgrade procedure is broken. The
issue itself may not be selfheal deamon, but something around
connections, but as the process fails here, looking to you guys to
unblock this as soon as possible, as we are already running a day's slip
in the release.

Thanks,
Shyam


___
maintainers mailing list
maintainers@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-devel] Release 4.0: Release notes (please read and contribute)

2018-02-02 Thread Ravishankar N



On 02/01/2018 11:02 PM, Shyam Ranganathan wrote:

On 01/29/2018 05:10 PM, Shyam Ranganathan wrote:

Hi,

I have posted an initial draft version of the release notes here [1].

I would like to *suggest* the following contributors to help improve and
finish the release notes by 06th Feb, 2017. As you read this mail, if
you feel you cannot contribute, do let us know, so that we can find the
appropriate contributors for the same.

Reminder (1)

Request a response if you would be able to provide the release notes.
Release notes itself can come in later.

Helps plan for contingency in case you are unable to generate the
required notes.

Thanks!


NOTE: Please use the release tracker to post patches that modify the
release notes, the bug ID is *1539842* (see [2]).

1) Aravinda/Kotresh: Geo-replication section in the release notes

2) Kaushal/Aravinda/ppai: GD2 section in the release notes

3) Du/Poornima/Pranith: Performance section in the release notes

4) Amar: monitoring section in the release notes

Following are individual call outs for certain features:

1) "Ability to force permissions while creating files/directories on a
volume" - Niels

2) "Replace MD5 usage to enable FIPS support" - Ravi, Amar


+ Kotresh, who has done most (all, to be precise) of the patches listed in
https://github.com/gluster/glusterfs/issues/230, in case he would like to
add anything.


There is pending work for this w.r.t. rolling upgrade support. I hope
to work on this next week, but I cannot commit to anything looking at other
things in my queue :(.
To add more clarity: for a fresh setup (clients + servers) on 4.0,
enabling FIPS works fine. But we need to handle the case of old servers and
new clients and vice versa. If this can be considered a bug fix, then
here is my attempt at the release notes for this fix:


"Previously, if gluster was run on a FIPS enabled system, it used to 
crash because gluster used MD5 checksum in various places like self-heal 
and geo-rep. This has been fixed by replacing MD5 with SHA256 which is 
FIPS compliant."
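
(As an aside, a small sketch for sanity-checking a box before trying this;
the paths and commands are the generic kernel and OpenSSL FIPS indicators,
nothing gluster-specific:)

    # 1 means the kernel is running in FIPS mode.
    cat /proc/sys/crypto/fips_enabled

    # Under FIPS, MD5 is typically disabled in the crypto library while
    # SHA-256 keeps working, which is the substitution the fix makes.
    echo test | openssl dgst -md5      # expected to fail on a FIPS box
    echo test | openssl dgst -sha256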


I'm happy to update the above text in doc/release-notes/4.0.0.md and 
send it on gerrit for review.



Regards,
Ravi






3) "Dentry fop serializer xlator on brick stack" - Du

4) "Add option to disable nftw() based deletes when purging the landfill
directory" - Amar

5) "Enhancements for directory listing in readdirp" - Nithya

6) "xlators should not provide init(), fini() and others directly, but
have class_methods" - Amar

7) "New on-wire protocol (XDR) needed to support iattx and cleaner
dictionary structure" - Amar

8) "The protocol xlators should prevent sending binary values in a dict
over the networks" - Amar

9) "Translator to handle 'global' options" - Amar

Thanks,
Shyam

[1] github link to draft release notes:
https://github.com/gluster/glusterfs/blob/release-4.0/doc/release-notes/4.0.0.md

[2] Initial gerrit patch for the release notes:
https://review.gluster.org/#/c/19370/
___
Gluster-devel mailing list
gluster-de...@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
gluster-de...@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


___
maintainers mailing list
maintainers@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-devel] Release 3.13.2: Planned for 19th of Jan, 2018

2018-01-18 Thread Ravishankar N



On 01/19/2018 06:19 AM, Shyam Ranganathan wrote:

On 01/18/2018 07:34 PM, Ravishankar N wrote:


On 01/18/2018 11:53 PM, Shyam Ranganathan wrote:

On 01/02/2018 11:08 AM, Shyam Ranganathan wrote:

Hi,

As release 3.13.1 is announced, here are the needed details for
3.13.2

Release date: 19th Jan, 2018 (20th is a Saturday)

Heads up, this is tomorrow.


Tracker bug for blockers:
https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.13.2

The one blocker bug has had its patch merged, so I am assuming there are
no more that should block this release.

As usual, shout out in case something needs attention.

Hi Shyam,

1. There is one patch, https://review.gluster.org/#/c/19218/, which
introduces full locks for afr writevs. We're introducing this as a
GD_OP_VERSION_3_13_2 option. Please wait for it to be merged on the 3.13
branch today. Karthik, please backport the patch.

Do we need this behind an option, if existing behavior causes split
brains?
Yes, this is for split-brain prevention. Arbiter volumes already take
full locks, but normal replica volumes do not. This change is for normal replica
volumes. See Pranith's comment in
https://review.gluster.org/#/c/19218/1/xlators/mgmt/glusterd/src/glusterd-volume-set.c@1557

Or is the option being added for workloads that do not have
multiple clients or clients writing to non-overlapping regions (and thus
need not suffer a penalty in performance maybe? But they should not
anyway as a single client and AFR eager locks should ensure this is done
only once for the lifetime of the file being accesses, right?)
Yes, single writers take eager lock which is always a full lock 
regardless of this change.
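
(For completeness, a sketch of how the new behaviour would be toggled once
the backport lands; the option name cluster.full-lock and its on/off values
are my reading of the patch under review, so treat them as an assumption
rather than final CLI syntax:)

    # Assumed CLI, valid only once the cluster op-version is >= 3.13.2:
    gluster volume set <volname> cluster.full-lock on
    gluster volume get <volname> cluster.full-lock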

Regards
Ravi

Basically I would like to keep options out of backports if possible, as
that changes the gluster op-version and involves other upgrade steps to
be sure users can use this option etc., which means more reading and
execution of upgrade steps for our users. Hence the concern!


2. I'm also backporting https://review.gluster.org/#/c/18571/. Please
consider merging it too today if it is ready.

This should be fine.


We will attach the relevant BZs to the tracker bug.

Thanks
Ravi

Shyam
___
Gluster-devel mailing list
gluster-de...@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
gluster-de...@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


___
maintainers mailing list
maintainers@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-devel] Release 3.13.2: Planned for 19th of Jan, 2018

2018-01-18 Thread Ravishankar N



On 01/18/2018 11:53 PM, Shyam Ranganathan wrote:

On 01/02/2018 11:08 AM, Shyam Ranganathan wrote:

Hi,

As release 3.13.1 is announced, here are the needed details for 3.13.2

Release date: 19th Jan, 2018 (20th is a Saturday)

Heads up, this is tomorrow.


Tracker bug for blockers:
https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.13.2

The one blocker bug has had its patch merged, so I am assuming there are
no more that should block this release.

As usual, shout out in case something needs attention.


Hi Shyam,

1. There is one patch, https://review.gluster.org/#/c/19218/, which
introduces full locks for afr writevs. We're introducing this as a
GD_OP_VERSION_3_13_2 option. Please wait for it to be merged on the 3.13
branch today. Karthik, please backport the patch.


2. I'm also backporting https://review.gluster.org/#/c/18571/. Please 
consider merging it too today if it is ready.


We will attach the relevant BZs to the tracker bug.

Thanks
Ravi



Shyam
___
Gluster-devel mailing list
gluster-de...@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
gluster-de...@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


___
maintainers mailing list
maintainers@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Tests failing on master

2017-11-16 Thread Ravishankar N



On 11/17/2017 11:25 AM, Nithya Balachandran wrote:



On 16 November 2017 at 18:41, Ravishankar N <ravishan...@redhat.com> wrote:


Hi,

In yesterday's maintainers' meeting, I said there are many tests
that are failing on the master branch on my laptop/VMs. It turns
out many of them were due to setup issues: some due to obvious
errors on my part like not having dbench installed, not enabling
gnfs etc., and some due to 'extras' not being installed when I do
a source compile, which I'm not sure why.

So there are only 2 tests now which fail on master, and they
don't seem to be due to setup issues and are not marked as bad
tests either:

1.tests/bugs/cli/bug-1169302.t
 Failed test:  14
2.tests/bugs/distribute/bug-1247563.t (Wstat: 0 Tests: 12 Failed: 2)
  Failed tests:  11-12


Please file a bug for this one. And many thanks for the system.

Thanks for checking, Nithya. Filed BZ 1514329 on DHT and BZ 1514331 on infra.
Regards
Ravi


Regards,
Nithya


Request the respective maintainers/peers to take a look. If they
are indeed failing because of problems in the test itself, then I
will probably file a bug on infra to investigate why they are
passing on the jenkins slaves.

Thanks,
Ravi
___
maintainers mailing list
maintainers@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers




___
maintainers mailing list
maintainers@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] Tests failing on master

2017-11-16 Thread Ravishankar N

Hi,

In yesterday's maintainers' meeting, I said there are many tests that are
failing on the master branch on my laptop/VMs. It turns out many of
them were due to setup issues: some due to obvious errors on my part
like not having dbench installed, not enabling gnfs etc., and some due
to 'extras' not being installed when I do a source compile, which I'm not
sure why.


So there are only 2 tests now which fail on master, and they don't seem
to be due to setup issues and are not marked as bad tests either:


1.tests/bugs/cli/bug-1169302.t
 Failed test:  14
2.tests/bugs/distribute/bug-1247563.t (Wstat: 0 Tests: 12 Failed: 2)
  Failed tests:  11-12
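
For anyone who wants to try these locally, this is roughly what I run (a
sketch; it assumes a source build installed on the box and needs root,
since the tests create volumes and mounts):

    # From the top of the glusterfs source tree:
    ./run-tests.sh tests/bugs/cli/bug-1169302.t       # the harness the CI uses
    prove -vf tests/bugs/distribute/bug-1247563.t     # or drive a single .t directly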

Request the respective maintainers/peers to take a look. If they are 
indeed failing because of problems in the test itself, then I will 
probably file a bug on infra to investigate why they are passing on the 
jenkins slaves.


Thanks,
Ravi
___
maintainers mailing list
maintainers@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Huge AFR change for 3.8 proposed, needs careful review and explicit approval

2017-01-03 Thread Ravishankar N

On 12/15/2016 10:36 AM, Niels de Vos wrote:

>Can we take this patch in for next 3.8.x release based on:
>https://bugzilla.redhat.com/show_bug.cgi?id=1378547#c6  from the ubisoft
>guys who tested different cases with the fix?

Yes, that should be ok. Thanks for requesting the additional testing.

Niels



Hi Niels,
Could you please take this in? I want to backport another fix on top of
this, as mentioned in the review comments.

Thanks,
Ravi
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-devel] Patches being posted by Facebook and plans thereof

2016-12-22 Thread Ravishankar N

On 12/22/2016 11:31 AM, Shyam wrote:
1) Facebook will port all their patches to the special branch 
release-3.8-fb, where they have exclusive merge rights. 


i) I see that the Bugzilla IDs they are using for these patches are the
same as the BZ IDs of the corresponding 3.8 branch patches. These BZs are
in MODIFIED/CLOSED CURRENTRELEASE. Is that alright?


ii) Do we need to review these backports? (Asking because they have 
added the folks who sent the original patch as reviewers).


-Ravi


___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-devel] 'Reviewd-by' tag for commits

2016-10-02 Thread Ravishankar N

On 10/03/2016 06:58 AM, Pranith Kumar Karampuri wrote:



On Mon, Oct 3, 2016 at 6:41 AM, Pranith Kumar Karampuri
<pkara...@redhat.com> wrote:




On Fri, Sep 30, 2016 at 8:50 PM, Ravishankar N
<ravishan...@redhat.com> wrote:

On 09/30/2016 06:38 PM, Niels de Vos wrote:

On Fri, Sep 30, 2016 at 07:11:51AM +0530, Pranith Kumar Karampuri wrote:

hi,
  At the moment 'Reviewed-by' tag comes only if a +1 is given on the
final version of the patch. But for most of the patches, different people
would spend time on different versions making the patch better, they may
not get time to do the review for every version of the patch. Is it
possible to change the gerrit script to add 'Reviewed-by' for all the
people who participated in the review?

+1 to this. For the argument that this *might* encourage me-too +1s, it
only exposes such persons in bad light.

Or removing 'Reviewed-by' tag completely would also help to make sure it
doesn't give skewed counts.

I'm not going to lie, for me, that takes away the incentive of
doing any reviews at all.


Could you elaborate why? May be you should also talk about your
primary motivation for doing reviews.


I guess it is probably because the effort needs to be recognized? I
think there is an option to recognize it, so it is probably not a good
idea to remove the tag.


Yes, numbers provide good motivation for me:
Motivation for looking at patches and finding bugs in known components
even though I am not their maintainer.
Motivation for learning new components, because a bug and a fix is usually
when I look at the code of unknown components.

Motivation to level up when statistics indicate I'm behind my peers.

I think even you said some time back in an ML thread that what can be
measured can be improved.


-Ravi




I would not feel comfortable automatically adding Reviewed-by tags for
people that did not review the last version. They may not agree with the
last version, so adding their "approved stamp" on it may not be correct.
See the description of Reviewed-by in the Linux kernel sources [0].

While the Linux kernel model is the poster child for projects to draw
standards from, IMO, their email based review system is certainly not
one to emulate. It does not provide a clean way to view patch-set diffs,
does not present a single URL based history that tracks all review
comments, relies on the sender to provide information on what changed
between versions, allows a variety of 'Komedians' [1] to add random tags
which may or may not be picked up by the maintainer who takes patches
in etc.

Maybe we can add an additional tag that mentions all the people that
did do reviews of older versions of the patch. Not sure what the tag
would be, maybe just CC?

It depends on what tags would be processed to obtain statistics on
review contributions. I agree that not all reviewers might be okay with
the latest revision, but that % might be minuscule (zero, really)
compared to the normal case where the reviewer spent considerable time
and effort to provide feedback (and an eventual +1) on previous
revisions. If converting all +1s into 'Reviewed-by's is not feasible in
gerrit or is not considered acceptable, then the maintainer could wait
for a reasonable time for reviewers to give +1 for the final revision
before he/she goes ahead with a +2 and merges it. While we cannot wait
indefinitely for all acks, a comment like 'LGTM, will wait for a day for
other acks before I go ahead and merge' would be appreciated.

Enough of bike-shedding from my end I suppose. :-)
Ravi

[1] https://lwn.net/Articles/503829/


Niels


0. http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/SubmittingPatches#n552

___
Gluster-devel mailing list
gluster-de...@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


-- 
Pranith


--
Pranith


___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-devel] 'Reviewd-by' tag for commits

2016-09-30 Thread Ravishankar N

On 09/30/2016 06:38 PM, Niels de Vos wrote:

On Fri, Sep 30, 2016 at 07:11:51AM +0530, Pranith Kumar Karampuri wrote:

hi,
  At the moment 'Reviewed-by' tag comes only if a +1 is given on the
final version of the patch. But for most of the patches, different people
would spend time on different versions making the patch better, they may
not get time to do the review for every version of the patch. Is it
possible to change the gerrit script to add 'Reviewed-by' for all the
people who participated in the review?
+1 to this. For the argument that this *might* encourage me-too +1s, it
only exposes such persons in bad light.

Or removing 'Reviewed-by' tag completely would also help to make sure it
doesn't give skewed counts.

I'm not going to lie, for me, that takes away the incentive of doing any
reviews at all.

I would not feel comfortable automatically adding Reviewed-by tags for
people that did not review the last version. They may not agree with the
last version, so adding their "approved stamp" on it may not be correct.
See the description of Reviewed-by in the Linux kernel sources [0].
While the Linux kernel model is the poster child for projects to draw
standards from, IMO, their email based review system is certainly not one
to emulate. It does not provide a clean way to view patch-set diffs, does
not present a single URL based history that tracks all review comments,
relies on the sender to provide information on what changed between
versions, allows a variety of 'Komedians' [1] to add random tags which
may or may not be picked up by the maintainer who takes patches in etc.

Maybe we can add an additional tag that mentions all the people that
did do reviews of older versions of the patch. Not sure what the tag
would be, maybe just CC?
It depends on what tags would be processed to obtain statistics on
review contributions. I agree that not all reviewers might be okay with
the latest revision, but that % might be minuscule (zero, really)
compared to the normal case where the reviewer spent considerable time
and effort to provide feedback (and an eventual +1) on previous
revisions. If converting all +1s into 'Reviewed-by's is not feasible in
gerrit or is not considered acceptable, then the maintainer could wait
for a reasonable time for reviewers to give +1 for the final revision
before he/she goes ahead with a +2 and merges it. While we cannot wait
indefinitely for all acks, a comment like 'LGTM, will wait for a day for
other acks before I go ahead and merge' would be appreciated.

Enough of bike-shedding from my end I suppose. :-)
Ravi

[1] https://lwn.net/Articles/503829/



Niels

0. http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/SubmittingPatches#n552


___
Gluster-devel mailing list
gluster-de...@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel



___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-devel] Another regression in release-3.7 and master

2016-04-07 Thread Ravishankar N

On 04/07/2016 05:11 PM, Kaushal M wrote:

As earlier, please don't merge any more changes on the release-3.7
branch till this is fixed and 3.7.11 is released.
http://review.gluster.org/#/c/13925/ (and its corresponding patch in 
master) has to be merged for 3.7.11. It fixes a performance issue in 
arbiter. The patch does not affect any other component (including 
non-arbiter replicate volumes).


-Ravi


___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers