Re: [Gluster-devel] [Gluster-users] On making ctime generator enabled by default in stack

2018-11-11 Thread Amar Tumballi
On Mon, Nov 12, 2018 at 10:39 AM Vijay Bellur  wrote:

>
>
> On Sun, Nov 11, 2018 at 8:25 PM Raghavendra Gowdappa 
> wrote:
>
>>
>>
>> On Sun, Nov 11, 2018 at 11:41 PM Vijay Bellur  wrote:
>>
>>>
>>>
>>> On Mon, Nov 5, 2018 at 8:31 PM Raghavendra Gowdappa 
>>> wrote:
>>>


 On Tue, Nov 6, 2018 at 9:58 AM Vijay Bellur  wrote:

>
>
> On Mon, Nov 5, 2018 at 7:56 PM Raghavendra Gowdappa <
> rgowd...@redhat.com> wrote:
>
>> All,
>>
>> There is a patch [1] from Kotresh which makes the ctime generator the
>> default in the stack. Currently the ctime generator is recommended only
>> for use cases where ctime is important (such as Elasticsearch). However,
>> a reliable (c)(m)time can fix many consistency issues within the
>> glusterfs stack too. These are issues with caching layers holding stale
>> (meta)data [2][3][4]. Just like applications, components within the
>> glusterfs stack need a timestamp to decide which among racing ops (like
>> write, stat, etc.) has the latest (meta)data.
>>
>> Also note that a consistent (c)(m)time is not an optional feature; it
>> forms the core of the infrastructure. So, I am proposing to merge this
>> patch. If you have any objections, please voice them before Nov 13, 2018
>> (a week from today).
>>
>> As to the existing known issues/limitations with the ctime generator,
>> my conversations with Kotresh revealed the following:
>> * Potential performance degradation (we don't yet have data to
>> conclusively prove it; preliminary basic tests from Kotresh didn't
>> indicate a significant perf drop).
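
The "racing ops" point above can be sketched in a few lines (an editor's
illustration in Python, not GlusterFS code): given two cached copies of an
inode's metadata, a consistent ctime lets a caching layer pick the fresher
one deterministically instead of guessing.

```python
from dataclasses import dataclass


@dataclass
class CachedMeta:
    """A cached copy of an inode's metadata, stamped with a ctime."""
    size: int
    ctime: float  # seconds since epoch, from a consistent clock


def fresher(a: CachedMeta, b: CachedMeta) -> CachedMeta:
    """Pick whichever cached copy carries the later ctime."""
    return a if a.ctime >= b.ctime else b


# Metadata cached by a stat that raced with a write, vs. the
# post-write metadata: the later ctime identifies the winner.
before_write = CachedMeta(size=100, ctime=1541468160.0)
after_write = CachedMeta(size=164, ctime=1541468161.5)
assert fresher(before_write, after_write) is after_write
```

Without a reliable ctime (e.g. clocks skewed across bricks), this comparison
is exactly what a caching xlator cannot do safely.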
>>
>
> Do we have this data captured somewhere? If not, would it be possible
> to share that data here?
>

 I misquoted Kotresh. He had measured the impact of gfid2path and said both
 features might have a similar impact, since the major perf cost is storing
 xattrs on the backend fs. I am in the process of getting a fresh set of
 numbers and will post them when available.
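
The suspected cost, an extra xattr write per operation on the backend fs, can
be probed with a rough micro-benchmark sketch (editor's illustration; the
`user.mdata` name here is a stand-in, as the real feature stores a
trusted-namespace xattr on the brick):

```python
import os
import tempfile
import time


def time_creates(n: int, with_xattr: bool) -> float:
    """Time creating n small files, optionally tagging each with a
    user xattr as a stand-in for the per-inode mdata xattr."""
    with tempfile.TemporaryDirectory() as d:
        start = time.perf_counter()
        for i in range(n):
            path = os.path.join(d, f"f{i}")
            with open(path, "wb") as f:
                f.write(b"x")
            if with_xattr:
                # Extra backend operation per create, mimicking the
                # cost being discussed.
                os.setxattr(path, "user.mdata", b"\x00" * 24)
        return time.perf_counter() - start


try:
    plain = time_creates(200, with_xattr=False)
    tagged = time_creates(200, with_xattr=True)
    print(f"plain: {plain:.4f}s  with xattr: {tagged:.4f}s")
except OSError:
    print("xattrs not supported on this filesystem; skipping")
```

A real characterization would of course run workloads like smallfile or fio
against a mounted volume with the feature toggled, not a local sketch.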


>>>
>>> I observe that the patch under discussion has been merged now [1]. A
>>> quick search did not yield me any performance data. Do we have the
>>> performance numbers posted somewhere?
>>>
>>
>> No. Perf benchmarking is a task pending on me.
>>
>
> When can we expect this task to be complete?
>
> In any case, I don't think it is ideal for us to merge a patch without
> completing our due diligence on it. How do we want to handle this scenario
> since the patch is already merged?
>
> We could:
>
> 1. Revert the patch now
> 2. Review the performance data and revert the patch if performance
> characterization indicates a significant dip. It would be preferable to
> complete this activity before we branch off for the next release.
>

I am for option 2. Considering that the branch-out for the next release is
another 2 months away, and no one is expected to use a 'release' cut from the
master branch yet, it makes sense to allow that buffer time to complete this
activity.

Regards,
Amar

> 3. Think of some other option?
>
> Thanks,
> Vijay
>
>
>> ___
> Gluster-users mailing list
> gluster-us...@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users



-- 
Amar Tumballi (amarts)
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-users] On making ctime generator enabled by default in stack

2018-11-11 Thread Vijay Bellur
On Sun, Nov 11, 2018 at 8:25 PM Raghavendra Gowdappa 
wrote:

>
>
> On Sun, Nov 11, 2018 at 11:41 PM Vijay Bellur  wrote:
>
>>
>>
>> On Mon, Nov 5, 2018 at 8:31 PM Raghavendra Gowdappa 
>> wrote:
>>
>>>
>>>
>>> On Tue, Nov 6, 2018 at 9:58 AM Vijay Bellur  wrote:
>>>


 On Mon, Nov 5, 2018 at 7:56 PM Raghavendra Gowdappa <
 rgowd...@redhat.com> wrote:

> All,
>
> There is a patch [1] from Kotresh which makes the ctime generator the
> default in the stack. Currently the ctime generator is recommended only
> for use cases where ctime is important (such as Elasticsearch). However,
> a reliable (c)(m)time can fix many consistency issues within the
> glusterfs stack too. These are issues with caching layers holding stale
> (meta)data [2][3][4]. Just like applications, components within the
> glusterfs stack need a timestamp to decide which among racing ops (like
> write, stat, etc.) has the latest (meta)data.
>
> Also note that a consistent (c)(m)time is not an optional feature; it
> forms the core of the infrastructure. So, I am proposing to merge this
> patch. If you have any objections, please voice them before Nov 13, 2018
> (a week from today).
>
> As to the existing known issues/limitations with the ctime generator,
> my conversations with Kotresh revealed the following:
> * Potential performance degradation (we don't yet have data to
> conclusively prove it; preliminary basic tests from Kotresh didn't
> indicate a significant perf drop).
>

 Do we have this data captured somewhere? If not, would it be possible
 to share that data here?

>>>
>>> I misquoted Kotresh. He had measured the impact of gfid2path and said both
>>> features might have a similar impact, since the major perf cost is storing
>>> xattrs on the backend fs. I am in the process of getting a fresh set of
>>> numbers and will post them when available.
>>>
>>>
>>
>> I observe that the patch under discussion has been merged now [1]. A
>> quick search did not yield me any performance data. Do we have the
>> performance numbers posted somewhere?
>>
>
> No. Perf benchmarking is a task pending on me.
>

When can we expect this task to be complete?

In any case, I don't think it is ideal for us to merge a patch without
completing our due diligence on it. How do we want to handle this scenario
since the patch is already merged?

We could:

1. Revert the patch now
2. Review the performance data and revert the patch if performance
characterization indicates a significant dip. It would be preferable to
complete this activity before we branch off for the next release.
3. Think of some other option?

Thanks,
Vijay


>

Re: [Gluster-devel] [Gluster-users] On making ctime generator enabled by default in stack

2018-11-11 Thread Raghavendra Gowdappa
On Sun, Nov 11, 2018 at 11:41 PM Vijay Bellur  wrote:

>
>
> On Mon, Nov 5, 2018 at 8:31 PM Raghavendra Gowdappa 
> wrote:
>
>>
>>
>> On Tue, Nov 6, 2018 at 9:58 AM Vijay Bellur  wrote:
>>
>>>
>>>
>>> On Mon, Nov 5, 2018 at 7:56 PM Raghavendra Gowdappa 
>>> wrote:
>>>
 All,

 There is a patch [1] from Kotresh which makes the ctime generator the
 default in the stack. Currently the ctime generator is recommended only
 for use cases where ctime is important (such as Elasticsearch). However,
 a reliable (c)(m)time can fix many consistency issues within the
 glusterfs stack too. These are issues with caching layers holding stale
 (meta)data [2][3][4]. Just like applications, components within the
 glusterfs stack need a timestamp to decide which among racing ops (like
 write, stat, etc.) has the latest (meta)data.

 Also note that a consistent (c)(m)time is not an optional feature; it
 forms the core of the infrastructure. So, I am proposing to merge this
 patch. If you have any objections, please voice them before Nov 13, 2018
 (a week from today).

 As to the existing known issues/limitations with the ctime generator,
 my conversations with Kotresh revealed the following:
 * Potential performance degradation (we don't yet have data to
 conclusively prove it; preliminary basic tests from Kotresh didn't
 indicate a significant perf drop).

>>>
>>> Do we have this data captured somewhere? If not, would it be possible to
>>> share that data here?
>>>
>>
>> I misquoted Kotresh. He had measured the impact of gfid2path and said both
>> features might have a similar impact, since the major perf cost is storing
>> xattrs on the backend fs. I am in the process of getting a fresh set of
>> numbers and will post them when available.
>>
>>
>
> I observe that the patch under discussion has been merged now [1]. A quick
> search did not yield me any performance data. Do we have the performance
> numbers posted somewhere?
>

No. Perf benchmarking is a task pending on me.


>
> Thanks,
> Vijay
>
> [1] https://review.gluster.org/#/c/glusterfs/+/21060/
>
>

[Gluster-devel] Weekly Untriaged Bugs

2018-11-11 Thread jenkins
[...truncated 6 lines...]
https://bugzilla.redhat.com/1641969 / core: Mounted Dir Gets Error in GlusterFS 
Storage Cluster with SSL/TLS Encryption as Doing add-brick and remove-brick 
Repeatedly
https://bugzilla.redhat.com/1642804 / core: remove 'decompounder' xlator and 
compound fops from glusterfs codebase
https://bugzilla.redhat.com/1648169 / encryption-xlator: Fuse mount would crash 
if features.encryption is on in the version from 3.13.0 to 4.1.5
https://bugzilla.redhat.com/1648642 / geo-replication: fails to sync non-ascii 
(utf8) file and directory names, causes permanently faulty geo-replication state
https://bugzilla.redhat.com/1644322 / geo-replication: flooding log with 
"glusterfs-fuse: read from /dev/fuse returned -1 (Operation not permitted)"
https://bugzilla.redhat.com/1643716 / geo-replication: "OSError: [Errno 40] Too 
many levels of symbolic links" when syncing deletion of directory hierarchy
https://bugzilla.redhat.com/1647506 / glusterd: glusterd_brick_start wrongly 
discovers already-running brick
https://bugzilla.redhat.com/1640109 / md-cache: Default ACL cannot be removed
https://bugzilla.redhat.com/163 / project-infrastructure: GD2 containers 
build do not seems to use a up to date container
https://bugzilla.redhat.com/1641021 / project-infrastructure: gd2-smoke do not 
seems to clean itself after a crash
[...truncated 2 lines...]


Re: [Gluster-devel] [Gluster-users] On making ctime generator enabled by default in stack

2018-11-11 Thread Vijay Bellur
On Mon, Nov 5, 2018 at 8:31 PM Raghavendra Gowdappa 
wrote:

>
>
> On Tue, Nov 6, 2018 at 9:58 AM Vijay Bellur  wrote:
>
>>
>>
>> On Mon, Nov 5, 2018 at 7:56 PM Raghavendra Gowdappa 
>> wrote:
>>
>>> All,
>>>
>>> There is a patch [1] from Kotresh which makes the ctime generator the
>>> default in the stack. Currently the ctime generator is recommended only
>>> for use cases where ctime is important (such as Elasticsearch). However,
>>> a reliable (c)(m)time can fix many consistency issues within the
>>> glusterfs stack too. These are issues with caching layers holding stale
>>> (meta)data [2][3][4]. Just like applications, components within the
>>> glusterfs stack need a timestamp to decide which among racing ops (like
>>> write, stat, etc.) has the latest (meta)data.
>>>
>>> Also note that a consistent (c)(m)time is not an optional feature; it
>>> forms the core of the infrastructure. So, I am proposing to merge this
>>> patch. If you have any objections, please voice them before Nov 13, 2018
>>> (a week from today).
>>>
>>> As to the existing known issues/limitations with the ctime generator,
>>> my conversations with Kotresh revealed the following:
>>> * Potential performance degradation (we don't yet have data to
>>> conclusively prove it; preliminary basic tests from Kotresh didn't
>>> indicate a significant perf drop).
>>>
>>
>> Do we have this data captured somewhere? If not, would it be possible to
>> share that data here?
>>
>
> I misquoted Kotresh. He had measured the impact of gfid2path and said both
> features might have a similar impact, since the major perf cost is storing
> xattrs on the backend fs. I am in the process of getting a fresh set of
> numbers and will post them when available.
>
>

I observe that the patch under discussion has been merged now [1]. A quick
search did not yield me any performance data. Do we have the performance
numbers posted somewhere?

Thanks,
Vijay

[1] https://review.gluster.org/#/c/glusterfs/+/21060/