Re: [Gluster-devel] Release 4.0: Making it happen!

2018-01-18 Thread Jiffin Tony Thottan



On Wednesday 17 January 2018 04:55 PM, Jiffin Tony Thottan wrote:



On Tuesday 16 January 2018 08:57 PM, Shyam Ranganathan wrote:

On 01/10/2018 01:14 PM, Shyam Ranganathan wrote:

Hi,

4.0 branching is slated for the 16th of Jan 2018 and the release is
slated for the end of Feb (28th), 2018.

This is today! So read on...

Short update: I am going to wait a couple more days before branching, to
settle release content and exceptions. Branching is hence on Jan 18th
(Thursday).


We are at the phase where we need to ensure our release scope is correct
and *must*-release features are landing. Towards this, we need the
following information from all contributors.

1) Features that are making it to the release by branching date

- There are currently 35 open github issues marked as 4.0 milestone [1]
- Need contributors to look at this list and let us know which will meet
the branching date

Other than the protocol changes (from Amar), I did not receive any
requests for features that are making it to the release. I have compiled
a list of features based on patches in gerrit that are open, to check
what features are viable to make it to 4.0. This can be found here [3].

NOTE: All features, other than the ones in [3], are being moved out of
the 4.0 milestone.


- Need contributors to let us know which may slip and hence need a
backport exception to the 4.0 branch (post branching).
- Need milestone corrections on features that are not making it to the
4.0 release

I need the following contributors to respond and state whether the features
in [3] should still be tracked against 4.0, and how much time is needed to
make them happen.

- Poornima, Amar, Jiffin, Du, Susant, Sanoj, Vijay


Hi,

The two gfapi-related changes [1][2] have an ack from Poornima, and Niels
mentioned that he will do the review by EOD.


[1] https://review.gluster.org/#/c/18784/
[2] https://review.gluster.org/#/c/18785/




Niels has a few comments on the above patches. I need a one-week
extension (until 26th Jan 2018).

--
Jiffin


Regards,
Jiffin




NOTE: Slips are accepted only if they fall within 1-1.5 weeks post branching,
not beyond that, and must be called out before branching!

2) Reviews needing priority

- There could be features that are up for review, and considering we
have about 6-7 days before branching, we need a list of these commits
that you want review attention on.
- This will be added to this [2] dashboard, easing contributor access to
top priority reviews before branching

As of now, I am adding a few from the list in [3] for further review
attention as I see things evolving; more will be added as the point
above is answered by the respective contributors.


3) Review help!

- This link [2] contains reviews that need attention, as they are
targeted for 4.0. Request maintainers and contributors to pay close
attention to this list on a daily basis and help out with reviews.

Thanks,
Shyam

[1] github issues marked for 4.0:
https://github.com/gluster/glusterfs/milestone/3

[2] Review focus for features planned to land in 4.0:
https://review.gluster.org/#/q/owner:srangana%2540redhat.com+is:starred
[3] Release 4.0 features with pending code reviews: 
http://bit.ly/2rbjcl8



___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel




Re: [Gluster-devel] a link issue maybe introduced in a bug fix " Don't let NFS cache stat after writes"

2018-01-18 Thread Lian, George (NSB - CN/Hangzhou)
Hi,
>>> Cool, this works for me too. Send me a mail off-list once you are available 
>>> and we can figure out a way to get into a call and work on this.

Have you reproduced the issue per the steps I listed in
https://bugzilla.redhat.com/show_bug.cgi?id=1531457 and in my last mail?

If not, I would like you to try it yourself; the only difference between
your setup and mine is creating just 2 bricks instead of 6 bricks.

Cynthia could have a session with you if needed, since I am not available
next Monday and Tuesday.
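
For reference, a condensed form of the reproduction steps (taken from the log
pasted below in this mail; the hostname, volume name, and brick paths are the
ones from that log):

    gluster v create test replica 2 ubuntu:/home/gfs/b1 ubuntu:/home/gfs/b2 force
    gluster v start test
    gluster v set test cluster.consistent-metadata on
    mkdir -p /mnt/test && mount -t glusterfs ubuntu:/test /mnt/test
    cd /mnt/test && echo "abc" > aaa
    cp aaa bbb; link bbb ccc        # succeeds while both bricks are up
    kill -9 <PID of one brick, taken from 'gluster v status'>
    cp aaa ddd; link ddd eee        # fails: cannot create link 'eee' to 'ddd'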

Thanks & Best Regards,
George

From: gluster-devel-boun...@gluster.org [mailto:gluster-devel-boun...@gluster.org] On Behalf Of Pranith Kumar Karampuri
Sent: Thursday, January 18, 2018 6:03 PM
To: Lian, George (NSB - CN/Hangzhou)
Cc: Zhou, Cynthia (NSB - CN/Hangzhou); Li, Deqian (NSB - CN/Hangzhou); Gluster-devel@gluster.org; Sun, Ping (NSB - CN/Hangzhou)
Subject: Re: [Gluster-devel] a link issue maybe introduced in a bug fix " Don't 
let NFS cache stat after writes"



On Thu, Jan 18, 2018 at 12:17 PM, Lian, George (NSB - CN/Hangzhou) wrote:
Hi,
>>>I actually tried it with replica-2 and replica-3 and then distributed 
>>>replica-2 before replying to the earlier mail. We can have a debugging 
>>>session if you are okay with it.

It is fine if you can't reproduce the issue in your environment.
I have attached the detailed reproduction log to the Bugzilla FYI.

But I am sorry, I may be OOO on Monday and Tuesday next week, so a debug
session next Wednesday will be fine for me.

Cool, this works for me too. Send me a mail off-list once you are available and 
we can figure out a way to get into a call and work on this.



Pasting the detailed reproduce log here FYI:
root@ubuntu:~# gluster peer probe ubuntu
peer probe: success. Probe on localhost not needed
root@ubuntu:~# gluster v create test replica 2 ubuntu:/home/gfs/b1 
ubuntu:/home/gfs/b2 force
volume create: test: success: please start the volume to access data
root@ubuntu:~# gluster v start test
volume start: test: success
root@ubuntu:~# gluster v info test

Volume Name: test
Type: Replicate
Volume ID: fef5fca3-81d9-46d3-8847-74cde6f701a5
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: ubuntu:/home/gfs/b1
Brick2: ubuntu:/home/gfs/b2
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
root@ubuntu:~# gluster v status
Status of volume: test
Gluster process TCP Port  RDMA Port  Online  Pid
--
Brick ubuntu:/home/gfs/b1   49152 0  Y   7798
Brick ubuntu:/home/gfs/b2   49153 0  Y   7818
Self-heal Daemon on localhost   N/A   N/AY   7839

Task Status of Volume test
--
There are no active volume tasks


root@ubuntu:~# gluster v set test cluster.consistent-metadata on
volume set: success

root@ubuntu:~# ls /mnt/test
ls: cannot access '/mnt/test': No such file or directory
root@ubuntu:~# mkdir -p /mnt/test
root@ubuntu:~# mount -t glusterfs ubuntu:/test /mnt/test

root@ubuntu:~# cd /mnt/test
root@ubuntu:/mnt/test# echo "abc">aaa
root@ubuntu:/mnt/test# cp aaa bbb;link bbb ccc

root@ubuntu:/mnt/test# kill -9 7818
root@ubuntu:/mnt/test# cp aaa ddd;link ddd eee
link: cannot create link 'eee' to 'ddd': No such file or directory
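
(A small sketch of deriving the brick PID used above from the status output
instead of reading it off manually; the awk pattern assumes the brick path
shown in this log:)

    BRICK_PID=$(gluster v status test | awk '/ubuntu:\/home\/gfs\/b2/ {print $NF}')
    kill -9 "$BRICK_PID"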


Best Regards,
George

From: gluster-devel-boun...@gluster.org [mailto:gluster-devel-boun...@gluster.org] On Behalf Of Pranith Kumar Karampuri
Sent: Thursday, January 18, 2018 2:40 PM
To: Lian, George (NSB - CN/Hangzhou)
Cc: Zhou, Cynthia (NSB - CN/Hangzhou); Gluster-devel@gluster.org; Li, Deqian (NSB - CN/Hangzhou); Sun, Ping (NSB - CN/Hangzhou)
Subject: Re: [Gluster-devel] a link issue maybe introduced in a bug fix " Don't 
let NFS cache stat after writes"



On Thu, Jan 18, 2018 at 6:33 AM, Lian, George (NSB - CN/Hangzhou) wrote:
Hi,
I suppose the number of bricks in your testing is six, and you just shut down
3 of the processes.
When I reproduce the issue, I create a replicate volume with only 2 bricks,
let only ONE brick keep working, and set cluster.consistent-metadata on.
With these 2 test conditions, the issue is 100% reproducible.

Hi,
  I actually tried it with replica-2 and replica-3 and then distributed
replica-2 before replying to the earlier mail. We can have a debugging session
if you are okay with it.

Re: [Gluster-devel] Release 3.13.2: Planned for 19th of Jan, 2018

2018-01-18 Thread Ravishankar N



On 01/19/2018 06:19 AM, Shyam Ranganathan wrote:

On 01/18/2018 07:34 PM, Ravishankar N wrote:


On 01/18/2018 11:53 PM, Shyam Ranganathan wrote:

On 01/02/2018 11:08 AM, Shyam Ranganathan wrote:

Hi,

As release 3.13.1 is announced, here are the needed details for
3.13.2

Release date: 19th Jan, 2018 (20th is a Saturday)

Heads up, this is tomorrow.


Tracker bug for blockers:
https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.13.2

The one blocker bug has had its patch merged, so I am assuming there are
no more that should block this release.

As usual, shout out in case something needs attention.

Hi Shyam,

1. There is one patch https://review.gluster.org/#/c/19218/ which
introduces full locks for afr writevs. We're introducing this as a
GD_OP_VERSION_3_13_2 option. Please wait for it to be merged on 3.13
branch today. Karthik, please back port the patch.

Do we need this behind an option, if existing behavior causes split
brains?
Yes, this is for split-brain prevention. Arbiter volumes already take
full locks, but normal replica volumes do not; this change is for normal
replica volumes. See Pranith's comment in
https://review.gluster.org/#/c/19218/1/xlators/mgmt/glusterd/src/glusterd-volume-set.c@1557

Or is the option being added for workloads that do not have
multiple clients or clients writing to non-overlapping regions (and thus
need not suffer a penalty in performance maybe? But they should not
anyway as a single client and AFR eager locks should ensure this is done
only once for the lifetime of the file being accessed, right?)
Yes, single writers take eager lock which is always a full lock 
regardless of this change.
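
(For context, once the 3.13 backport is merged, toggling the new behavior on a
volume would presumably look like the sketch below; the option name
cluster.full-lock is an assumption based on the patch under review, so please
check the merged change for the final name and default:)

    gluster volume set <volname> cluster.full-lock on    # take full file locks for writes
    gluster volume set <volname> cluster.full-lock off   # keep the current partial-lock behaviour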

Regards
Ravi

Basically I would like to keep options out if possible in backports, as
that changes the gluster op-version and involves other upgrade steps to
ensure users can use this option, etc. Which means more reading and
execution of upgrade steps for our users. Hence the concern!


2. I'm also backporting https://review.gluster.org/#/c/18571/. Please
consider merging it too today if it is ready.

This should be fine.


We will attach the relevant BZs to the tracker bug.

Thanks
Ravi

Shyam
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel




Re: [Gluster-devel] Release 3.13.2: Planned for 19th of Jan, 2018

2018-01-18 Thread Shyam Ranganathan
On 01/18/2018 07:34 PM, Ravishankar N wrote:
> 
> 
> On 01/18/2018 11:53 PM, Shyam Ranganathan wrote:
>> On 01/02/2018 11:08 AM, Shyam Ranganathan wrote:
>>> Hi,
>>>
>>> As release 3.13.1 is announced, here are the needed details for
>>> 3.13.2
>>>
>>> Release date: 19th Jan, 2018 (20th is a Saturday)
>> Heads up, this is tomorrow.
>>
>>> Tracker bug for blockers:
>>> https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.13.2
>> The one blocker bug has had its patch merged, so I am assuming there are
>> no more that should block this release.
>>
>> As usual, shout out in case something needs attention.
> 
> Hi Shyam,
> 
> 1. There is one patch https://review.gluster.org/#/c/19218/ which
> introduces full locks for afr writevs. We're introducing this as a
> GD_OP_VERSION_3_13_2 option. Please wait for it to be merged on 3.13
> branch today. Karthik, please back port the patch.

Do we need this behind an option, if existing behavior causes split
brains? Or is the option being added for workloads that do not have
multiple clients or clients writing to non-overlapping regions (and thus
need not suffer a penalty in performance maybe? But they should not
anyway as a single client and AFR eager locks should ensure this is done
only once for the lifetime of the file being accessed, right?)

Basically I would like to keep options out if possible in backports, as
that changes the gluster op-version and involves other upgrade steps to
ensure users can use this option, etc. Which means more reading and
execution of upgrade steps for our users. Hence the concern!
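
(For reference, the extra step an op-version-gated option implies for users is
roughly the following; the commands are standard gluster CLI, but the exact
op-version number for 3.13.2 is an assumption here:)

    gluster volume get all cluster.op-version          # check the current cluster op-version
    gluster volume set all cluster.op-version 31302    # raise it once every node runs 3.13.2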

> 
> 2. I'm also backporting https://review.gluster.org/#/c/18571/. Please
> consider merging it too today if it is ready.

This should be fine.

> 
> We will attach the relevant BZs to the tracker bug.
> 
> Thanks
> Ravi
>>
>>> Shyam
>>> ___
>>> Gluster-devel mailing list
>>> Gluster-devel@gluster.org
>>> http://lists.gluster.org/mailman/listinfo/gluster-devel
>>>
>> ___
>> Gluster-devel mailing list
>> Gluster-devel@gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-devel
> 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Release 3.13.2: Planned for 19th of Jan, 2018

2018-01-18 Thread Ravishankar N



On 01/18/2018 11:53 PM, Shyam Ranganathan wrote:

On 01/02/2018 11:08 AM, Shyam Ranganathan wrote:

Hi,

As release 3.13.1 is announced, here are the needed details for 3.13.2

Release date: 19th Jan, 2018 (20th is a Saturday)

Heads up, this is tomorrow.


Tracker bug for blockers:
https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.13.2

The one blocker bug has had its patch merged, so I am assuming there are
no more that should block this release.

As usual, shout out in case something needs attention.


Hi Shyam,

1. There is one patch https://review.gluster.org/#/c/19218/ which 
introduces full locks for afr writevs. We're introducing this as a 
GD_OP_VERSION_3_13_2 option. Please wait for it to be merged on 3.13 
branch today. Karthik, please back port the patch.


2. I'm also backporting https://review.gluster.org/#/c/18571/. Please 
consider merging it too today if it is ready.


We will attach the relevant BZs to the tracker bug.

Thanks
Ravi



Shyam
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel




Re: [Gluster-devel] [Gluster-Maintainers] Release 4.0: Making it happen!

2018-01-18 Thread Shyam Ranganathan
On 01/17/2018 10:35 PM, Amar Tumballi wrote:
> 220 (I1cccb304a, I6e25dbb69, If13913fa9): “[Cleanup] Dictionary data
> structure and memory allocations” - Amar
> 384 (I6111c13cf, I1448fbe9d, I549b5e912): “RFE: new on-wire protocol
> (XDR) needed to support iattx and cleaner dictionary structure” - Amar
> 203 (I4d1235b9e): “the protocol xlators should prevent sending binary
> values in a dict over the networks” - Amar
> 
> 
> Out of the above, 
> 
> 220 - will be an on-going effort. Some effort towards that is already
> started, and merged.
> 
> 384 - https://review.gluster.org/19098 is the patch with most of the
> changes. I have now split the patch into two parts. The current one should
> mostly pass regression and just introduces all the new functionality.
> (The best way to do the diff is 'diff -pru client-rpc-fops.c
> client-rpc-fops_v2.c', and similarly on the server side, to see if the diff is fine.)

Reviewed, we should be good to take this in tomorrow, leaving 12 hours
for others to get back with any more comments.


Based on this, branching will be done at 1:00 PM Eastern tomorrow, and I
will send a list of backports that we will accept beyond that point (IOW
things called out in the mail thread).
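
(For reviewers wanting to follow Amar's diff suggestion above, the comparison
might look like the sketch below; the directory layout and the server-side
filenames are assumptions, so adjust paths as needed in your checkout:)

    cd xlators/protocol/client/src
    diff -pru client-rpc-fops.c client-rpc-fops_v2.c | less
    cd ../../server/src
    diff -pru server-rpc-fops.c server-rpc-fops_v2.c | less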

Thanks,
Shyam
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Release 3.13.2: Planned for 19th of Jan, 2018

2018-01-18 Thread Shyam Ranganathan
On 01/02/2018 11:08 AM, Shyam Ranganathan wrote:
> Hi,
> 
> As release 3.13.1 is announced, here are the needed details for 3.13.2
> 
> Release date: 19th Jan, 2018 (20th is a Saturday)

Heads up, this is tomorrow.

> Tracker bug for blockers:
> https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.13.2

The one blocker bug has had its patch merged, so I am assuming there are
no more that should block this release.

As usual, shout out in case something needs attention.

> 
> Shyam
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-devel
> 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] cluster/dht: restrict migration of opened files

2018-01-18 Thread Susant Palai
This does not restrict tiered migrations.

Susant
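
(For anyone checking the operational effect of [1] after it lands: rebalance
status still reports per-node counters, and files that are not migrated because
they are open would presumably show up under the skipped or failure counts;
which counter exactly is an assumption here:)

    gluster volume rebalance <VOLNAME> start
    gluster volume rebalance <VOLNAME> status    # watch the 'skipped' and 'failures' columns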

On 18 Jan 2018 8:18 pm, "Milind Changire"  wrote:

On Tue, Jan 16, 2018 at 2:52 PM, Raghavendra Gowdappa 
wrote:

> All,
>
> Patch [1] prevents migration of opened files during rebalance operation.
> If patch [1] affects you, please voice out your concerns. [1] is a stop-gap
> fix for the problem discussed in issues [2][3]
>
> [1] https://review.gluster.org/#/c/19202/
> [2] https://github.com/gluster/glusterfs/issues/308
> [3] https://github.com/gluster/glusterfs/issues/347
>
> regards,
> Raghavendra
>
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-devel
>


Would this patch affect tiering as well?
Do we need to worry about tiering anymore?

--
Milind


___
Gluster-users mailing list
gluster-us...@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] cluster/dht: restrict migration of opened files

2018-01-18 Thread Milind Changire
On Tue, Jan 16, 2018 at 2:52 PM, Raghavendra Gowdappa 
wrote:

> All,
>
> Patch [1] prevents migration of opened files during rebalance operation.
> If patch [1] affects you, please voice out your concerns. [1] is a stop-gap
> fix for the problem discussed in issues [2][3]
>
> [1] https://review.gluster.org/#/c/19202/
> [2] https://github.com/gluster/glusterfs/issues/308
> [3] https://github.com/gluster/glusterfs/issues/347
>
> regards,
> Raghavendra
>
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-devel



Would this patch affect tiering as well?
Do we need to worry about tiering anymore?

--
Milind
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] cluster/dht: restrict migration of opened files

2018-01-18 Thread Pranith Kumar Karampuri
On Tue, Jan 16, 2018 at 2:52 PM, Raghavendra Gowdappa 
wrote:

> All,
>
> Patch [1] prevents migration of opened files during rebalance operation.
> If patch [1] affects you, please voice out your concerns. [1] is a stop-gap
> fix for the problem discussed in issues [2][3]
>

What is the impact on VM and gluster-block usecases after this patch? Will
it rebalance any data in these usecases?


>
> [1] https://review.gluster.org/#/c/19202/
> [2] https://github.com/gluster/glusterfs/issues/308
> [3] https://github.com/gluster/glusterfs/issues/347
>
> regards,
> Raghavendra
>
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-devel
>



-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Coverity covscan for 2018-01-18-c2fa8a94 (master branch)

2018-01-18 Thread staticanalysis
GlusterFS Coverity covscan results are available from
http://download.gluster.org/pub/gluster/glusterfs/static-analysis/master/glusterfs-coverity/2018-01-18-c2fa8a94
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] a link issue maybe introduced in a bug fix " Don't let NFS cache stat after writes"

2018-01-18 Thread Pranith Kumar Karampuri
On Thu, Jan 18, 2018 at 12:17 PM, Lian, George (NSB - CN/Hangzhou) <
george.l...@nokia-sbell.com> wrote:

> Hi,
>
> >>>I actually tried it with replica-2 and replica-3 and then distributed
> replica-2 before replying to the earlier mail. We can have a debugging
> session if you are okay with it.
>
>
>
> It is fine if you can’t reproduce the issue in your ENV.
>
> And I has attached the detail reproduce log in the Bugzilla FYI
>
>
>
> But I am sorry I maybe OOO at Monday and Tuesday next week, so debug
> session will be fine to me at next Wednesday.
>

Cool, this works for me too. Send me a mail off-list once you are available
and we can figure out a way to get into a call and work on this.


>
>
>
>
> Paste the detail reproduce log FYI here:
>
> *root@ubuntu:~# gluster peer probe ubuntu*
>
> *peer probe: success. Probe on localhost not needed*
>
> *root@ubuntu:~# gluster v create test replica 2 ubuntu:/home/gfs/b1
> ubuntu:/home/gfs/b2 force*
>
> *volume create: test: success: please start the volume to access data*
>
> *root@ubuntu:~# gluster v start test*
>
> *volume start: test: success*
>
> *root@ubuntu:~# gluster v info test*
>
>
>
> *Volume Name: test*
>
> *Type: Replicate*
>
> *Volume ID: fef5fca3-81d9-46d3-8847-74cde6f701a5*
>
> *Status: Started*
>
> *Snapshot Count: 0*
>
> *Number of Bricks: 1 x 2 = 2*
>
> *Transport-type: tcp*
>
> *Bricks:*
>
> *Brick1: ubuntu:/home/gfs/b1*
>
> *Brick2: ubuntu:/home/gfs/b2*
>
> *Options Reconfigured:*
>
> *transport.address-family: inet*
>
> *nfs.disable: on*
>
> *performance.client-io-threads: off*
>
> *root@ubuntu:~# gluster v status*
>
> *Status of volume: test*
>
> *Gluster process TCP Port  RDMA Port  Online
> Pid*
>
>
> *--*
>
> *Brick ubuntu:/home/gfs/b1   49152 0  Y
> 7798*
>
> *Brick ubuntu:/home/gfs/b2   49153 0  Y
> 7818*
>
> *Self-heal Daemon on localhost   N/A   N/AY
> 7839*
>
>
>
> *Task Status of Volume test*
>
>
> *--*
>
> *There are no active volume tasks*
>
>
>
>
>
> *root@ubuntu:~# gluster v set test cluster.consistent-metadata on*
>
> *volume set: success*
>
>
>
> *root@ubuntu:~# ls /mnt/test*
>
> *ls: cannot access '/mnt/test': No such file or directory*
>
> *root@ubuntu:~# mkdir -p /mnt/test*
>
> *root@ubuntu:~# mount -t glusterfs ubuntu:/test /mnt/test*
>
>
>
> *root@ubuntu:~# cd /mnt/test*
>
> *root@ubuntu:/mnt/test# echo "abc">aaa*
>
> *root@ubuntu:/mnt/test# cp aaa bbb;link bbb ccc*
>
>
>
> *root@ubuntu:/mnt/test# kill -9 7818*
>
> *root@ubuntu:/mnt/test# cp aaa ddd;link ddd eee*
>
> *link: cannot create link 'eee' to 'ddd': No such file or directory*
>
>
>
>
>
> Best Regards,
>
> George
>
>
>
> *From:* gluster-devel-boun...@gluster.org [mailto:gluster-devel-bounces@
> gluster.org] *On Behalf Of *Pranith Kumar Karampuri
> *Sent:* Thursday, January 18, 2018 2:40 PM
>
> *To:* Lian, George (NSB - CN/Hangzhou) 
> *Cc:* Zhou, Cynthia (NSB - CN/Hangzhou) ;
> Gluster-devel@gluster.org; Li, Deqian (NSB - CN/Hangzhou) <
> deqian...@nokia-sbell.com>; Sun, Ping (NSB - CN/Hangzhou) <
> ping@nokia-sbell.com>
> *Subject:* Re: [Gluster-devel] a link issue maybe introduced in a bug fix
> " Don't let NFS cache stat after writes"
>
>
>
>
>
>
>
> On Thu, Jan 18, 2018 at 6:33 AM, Lian, George (NSB - CN/Hangzhou) <
> george.l...@nokia-sbell.com> wrote:
>
> Hi,
>
> I suppose the brick numbers in your testing is six, and you just shut down
> the 3 process.
>
> When I reproduce the issue, I only create a replicate volume with 2
> bricks, only let ONE brick working and set cluster.consistent-metadata on,
>
> With this 2 test condition, the issue could 100% reproducible.
>
>
>
> Hi,
>
>   I actually tried it with replica-2 and replica-3 and then
> distributed replica-2 before replying to the earlier mail. We can have a
> debugging session if you are okay with it.
>
> I am in the middle of a customer issue myself(That is the reason for this
> delay :-( ) and thinking of wrapping it up early next week. Would that be
> fine with you?
>
>
>
>
>
>
>
>
>
> 16:44:28 :) ⚡ gluster v status
>
> Status of volume: r2
>
> Gluster process TCP Port  RDMA Port  Online
> Pid
>
> 
> --
>
> Brick localhost.localdomain:/home/gfs/r2_0  49152 0  Y
> 5309
>
> Brick localhost.localdomain:/home/gfs/r2_1  49154 0  Y
> 5330
>
> Brick localhost.localdomain:/home/gfs/r2_2  49156 0  Y
> 5351
>
> Brick localhost.localdomain:/home/gfs/r2_3  49158 0  Y
> 5372
>
> Brick localhost.localdomain:/home/gfs/r2_4  49159 0  Y
> 5393
>
> Brick localhost.localdomain:/home/gfs/r2_5  49160 0  Y
> 5414
>
>