Re: [Gluster-devel] [Gluster-Maintainers] Release 5: Branched and further dates

2018-10-07 Thread Kotresh Hiremath Ravishankar
Had forgotten to add Milind; ccing.

On Mon, Oct 8, 2018 at 11:41 AM Kotresh Hiremath Ravishankar <khire...@redhat.com> wrote:

>
>
>> On Fri, Oct 5, 2018 at 10:31 PM Shyam Ranganathan wrote:
>
>> On 10/05/2018 10:59 AM, Shyam Ranganathan wrote:
>> > On 10/04/2018 11:33 AM, Shyam Ranganathan wrote:
>> >> On 09/13/2018 11:10 AM, Shyam Ranganathan wrote:
>> >>> RC1 would be around 24th of Sep. with final release tagging around 1st
>> >>> of Oct.
>> >> RC1 now stands to be tagged tomorrow, and patches that are being
>> >> targeted for a backport include,
>> > We are still awaiting release notes (other than the bugs section) to be
>> > closed.
>> >
>> > There is one new bug that needs attention from the replicate team.
>> > https://bugzilla.redhat.com/show_bug.cgi?id=1636502
>> >
>> > The above looks important to fix before the release; @ravi or @pranith,
>> > can you take a look?
>> >
>>
>> RC1 is tagged and release tarball generated.
>>
>> We still have 2 issues to work on,
>>
>> 1. The above messages from AFR in self-heal logs
>>
>> 2. We need to test with Py3, else we risk putting out packages on
>> Py3-default distros and causing some mayhem if basic things fail.
>>
>> I am open to suggestions on how to ensure we work with Py3, thoughts?
>>
>> I am thinking we run a regression on F28 (or a platform that defaults to
>> Py3) and ensure regressions are passing at the very least. For other
>> Python code that regressions do not cover,
>> - We have a list at [1]
>> - How can we split ownership of these?
>>
>
> +1 for the regression run on a py3-default platform. We don't need to run
> full regressions; we can choose to run only those test cases related to
> Python. Categorically, we have:
> 1. geo-rep
> 2. events framework
> 3. glusterfind
> 4. tools/scripts
>
> I can take care of geo-rep. With the following two patches, geo-rep works on
> both py2 and py3. I have tested these locally on CentOS 7.5 (where py2 is the
> default) and Fedora 28 (making py3 the default by symlinking /usr/bin/python
> -> python3). That said, the testing was very basic; we can fix any corner
> cases as they surface.
>
> 1. https://review.gluster.org/#/c/glusterfs/+/21356/ (Though this is an
> events patch, geo-rep uses it internally, so it is required for geo-rep.)
> 2. https://review.gluster.org/#/c/glusterfs/+/21357/
>
> I think we need to add regression tests for events and glusterfind.
> Adding Milind to comment on glusterfind.
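
For readers following this thread: the changes such patches involve are mostly
the usual py2/py3 differences (print as a function, bytes versus text when
reading changelogs, sockets and pipes). Below is a minimal, hypothetical sketch
of that kind of handling; it is not taken from the patches linked above, and
the helper and file names are invented for illustration.

#!/usr/bin/env python
# Hypothetical py2/py3-compatible helper of the sort the geo-rep/events
# patches need: it works whether the interpreter behind /usr/bin/python is
# python2 or python3, and whether file data arrives as bytes or text.
from __future__ import print_function

import sys


def to_text(data, encoding="utf-8"):
    """Return text (unicode) from bytes or text, on both py2 and py3."""
    if isinstance(data, bytes):
        return data.decode(encoding)
    return data


def read_changelog_entries(path):
    # Open in binary mode so behaviour is identical on py2 and py3, then
    # decode explicitly instead of relying on the default encoding.
    with open(path, "rb") as f:
        return [to_text(line).rstrip("\n") for line in f]


if __name__ == "__main__":
    print("running under python %d.%d" % sys.version_info[:2])
    for entry in read_changelog_entries(sys.argv[1]):
        print(entry)

Running the same script under both /usr/bin/python2 and /usr/bin/python3 is the
kind of quick local sanity check described above, independent of which
interpreter the distro symlinks /usr/bin/python to.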
>
>>
>> @Aravinda, @Kotresh, and @ppai, looking to you folks to help out with
>> the process and needs here.
>>
>> Shyam
>>
>> [1] https://github.com/gluster/glusterfs/issues/411
>>
>
>
> --
> Thanks and Regards,
> Kotresh H R
>


-- 
Thanks and Regards,
Kotresh H R

Re: [Gluster-devel] Glusterfs and Structured data

2018-10-07 Thread Raghavendra Gowdappa
On Fri, Feb 9, 2018 at 4:30 PM Raghavendra Gowdappa wrote:

>
>
> ----- Original Message -----
> > From: "Pranith Kumar Karampuri" 
> > To: "Raghavendra G" 
> > Cc: "Gluster Devel" 
> > Sent: Friday, February 9, 2018 2:30:59 PM
> > Subject: Re: [Gluster-devel] Glusterfs and Structured data
> >
> >
> >
> > On Thu, Feb 8, 2018 at 12:05 PM, Raghavendra G
> > < raghaven...@gluster.com > wrote:
> >
> >
> >
> >
> >
> > On Tue, Feb 6, 2018 at 8:15 PM, Vijay Bellur < vbel...@redhat.com > wrote:
> >
> >
> >
> >
> >
> > On Sun, Feb 4, 2018 at 3:39 AM, Raghavendra Gowdappa
> > < rgowd...@redhat.com > wrote:
> >
> >
> > All,
> >
> > One of our users pointed to the documentation stating that glusterfs is
> > not good for storing "Structured data" [1], while discussing an issue [2].
> >
> >
> > As far as I remember, the content around structured data in the Install
> > Guide is from a FAQ that was being circulated in Gluster, Inc. indicating
> > the startup's market positioning. Most of that was based on not wanting to
> > get into performance-based comparisons of storage systems that are
> > frequently seen in the structured data space.
> >
> >
> > Do any of you have more context on the feasibility of storing "structured
> > data" on Glusterfs? Is one of the reasons for such a suggestion the
> > "staleness of metadata" encountered in bugs like [3]?
> >
> >
> > There are challenges that distributed storage systems face when exposed to
> > applications that were written for a local filesystem interface. We have
> > encountered problems with applications like tar [4] that are not in the
> > realm of "Structured data". If we look at the common theme across all these
> > problems, it is related to metadata & read-after-write consistency issues
> > with the default translator stack that gets exposed on the client side.
> > While the default stack is optimal for other scenarios, it does seem that a
> > category of applications needing strict metadata consistency is not well
> > served by that. We have observed that disabling a few performance
> > translators and tuning cache timeouts for VFS/FUSE have helped to overcome
> > some of them. The WIP effort on timestamp consistency across the translator
> > stack, patches that have been merged as a result of the bugs that you
> > mention & other fixes for outstanding issues should certainly help in
> > catering to these workloads better with the file interface.
> >
> > There are deployments that I have come across where glusterfs is used for
> > storing structured data. gluster-block & qemu-libgfapi overcome the
> > metadata consistency problem by exposing a file as a block device & by
> > disabling most of the performance translators in the default stack.
> > Workloads that have been deemed problematic with the file interface for
> > the reasons alluded to above function well with the block interface.
> >
> > I agree that gluster-block, due to its use of a subset of glusterfs fops
> > (mostly reads/writes, I guess), runs into fewer consistency issues.
> > However, as you've mentioned, we seem to have disabled the perf xlator
> > stack in our tests/use-cases till now. Note that the perf xlator stack is
> > one of the worst offenders as far as metadata consistency is concerned
> > (with relatively fewer scenarios of data inconsistency). So, I wonder:
> > * What would be the scenario if we enable the perf xlator stack for
> > gluster-block?
> > * Is performance on gluster-block satisfactory so that we don't need these
> > xlators?
> > - Or is it that these xlators are not useful for the workload usually run
> > on gluster-block (for a random read/write workload, read/write caching
> > xlators offer little or no advantage)?
> >
> > Yes. They are not useful. Block/VM files are opened with O_DIRECT, so we
> > don't enable caching at any layer in glusterfs. md-cache could be useful
> > for serving fstat from glusterfs. But apart from that I don't see any
> > other xlator contributing much.
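
As a concrete illustration of the O_DIRECT point above, here is a minimal,
hypothetical Python sketch; the path is an assumption, not a real deployment.
O_DIRECT requires the buffer, length and file offset to be aligned, which is
why an anonymous mmap is used as a page-aligned buffer. Since such I/O is meant
to bypass caches, the caching xlators have little to offer these workloads.

import mmap
import os

PATH = "/mnt/glustervol/vm-disk.img"   # assumed FUSE mount path, for illustration
ALIGN = 4096                           # typical alignment required by O_DIRECT

# Open the image the way block/VM workloads do: read-write, direct I/O.
fd = os.open(PATH, os.O_RDWR | os.O_CREAT | os.O_DIRECT, 0o644)
try:
    buf = mmap.mmap(-1, ALIGN)         # anonymous mmap: page-aligned, zero-filled
    written = os.write(fd, buf)        # one aligned block, written directly
    print("wrote %d bytes with O_DIRECT" % written)
finally:
    os.close(fd)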
> >
> >
> >
> > - Or is it that the workload theoretically ought to benefit from perf
> > xlators, but we don't see the benefit in our results (there are open bugs
> > to this effect)?
> >
> > I am asking these questions to ascertain the priority of fixing perf
> > xlators for (meta)data inconsistencies. If we offer a different solution
> > for these workloads, the need for fixing these issues will be lower.
> >
> > My personal opinion is that both block and fs should work correctly. i.e.
> > caching xlators shouldn't lead to inconsistency issues.
>
> +1. That's my personal opinion too. We'll try to fix these issues.
> However, we need to qualify the fixes. It would be helpful if the community
> can help here. We'll let the community know when the fixes are in.
>

There has been some progress on this. Details can be found at:
https://www.mail-archive.com/gluster-devel@gluster.org/msg14877.html
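
To make the metadata/read-after-write consistency point in the quoted
discussion concrete, here is a minimal, hypothetical reproducer sketch. It
assumes the same volume is FUSE-mounted at two invented paths (/mnt/client1
and /mnt/client2) and uses only the Python standard library; with caching
translators enabled and generous FUSE attribute/entry timeouts, the second
client can briefly report stale size/mtime for a file just written by the
first.

import os

CLIENT1 = "/mnt/client1/testfile"   # assumed mount of the volume
CLIENT2 = "/mnt/client2/testfile"   # assumed second mount of the same volume


def write_via_client1(payload):
    # Write and flush through the first client mount.
    with open(CLIENT1, "wb") as f:
        f.write(payload)
        f.flush()
        os.fsync(f.fileno())


def stat_via_client2():
    # Immediately observe the file's metadata through the second mount.
    st = os.stat(CLIENT2)
    return st.st_size, st.st_mtime


if __name__ == "__main__":
    write_via_client1(b"x" * 4096)
    size, mtime = stat_via_client2()
    print("client2 sees size=%d mtime=%f" % (size, mtime))
    # A size other than 4096 here is the stale-metadata window being
    # discussed in this thread.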


> > It would be better if we are in a position where we choose a workload on
> > block vs fs based on their perfo

[Gluster-devel] Weekly Untriaged Bugs

2018-10-07 Thread jenkins
[...truncated 6 lines...]
https://bugzilla.redhat.com/1635688 / build: Keep only the valid 
(maintained/supported) components in the build
https://bugzilla.redhat.com/1636297 / build: Make it easy to build / host a 
project which just builds glusterfs translator
https://bugzilla.redhat.com/1636088 / common-ha: ocf:glusterfs:volume resource 
agent for pacemaker fails to stop gluster volume processes like glusterfsd
https://bugzilla.redhat.com/1636631 / disperse: Issuing a "heal ... full" on a 
disperse volume causes permanent high CPU utilization.
https://bugzilla.redhat.com/1632503 / fuse: FUSE client segfaults when 
performance.md-cache-statfs is enabled for a volume
https://bugzilla.redhat.com/1636798 / fuse: Memory Consumption of 
Glusterfs-fuse client mount point process keeps on increasing If processes 
which has opened the files gets killed abruptly
https://bugzilla.redhat.com/1628269 / geo-replication: [geo-rep]: _GMaster: 
changelogs could not be processed completely - moving on...
https://bugzilla.redhat.com/1631595 / geo-replication: glusterfs 
geo-replication breaks
https://bugzilla.redhat.com/1633259 / glusterd2: gd2: respin/rerelease 4.1 
vendor tarball to update golang.org/x/net/html/...
https://bugzilla.redhat.com/1633669 / glusterd: Gluster bricks fails frequently
https://bugzilla.redhat.com/1635784 / index: brick process segfault
https://bugzilla.redhat.com/1628219 / libgfapi: High memory consumption 
depending on volume bricks count
https://bugzilla.redhat.com/1631247 / libgfapi: Issue enabling 
cluster.use-compound-fops with libgfapi application running
https://bugzilla.redhat.com/1630803 / libgfapi: libgfapi coredump when 
glfs_fini() is called on uninitialised fs object
https://bugzilla.redhat.com/1630804 / libgfapi: libgfapi-python: 
test_listdir_with_stat and test_scandir failure on release 5 branch
https://bugzilla.redhat.com/1634220 / md-cache: md-cache: some problems of 
cache "glusterfs.posix.acl" for ganesha
https://bugzilla.redhat.com/1633754 / nfs: Missing 
/usr/lib64/glusterfs/4.1.5/xlator/nfs/server.so file
https://bugzilla.redhat.com/1636570 / posix: Cores due to SIGILL during 
multiplex regression tests
https://bugzilla.redhat.com/1633318 / posix: health check fails on restart from 
crash
https://bugzilla.redhat.com/1635976 / replicate: Low Random write IOPS in VM 
workloads
https://bugzilla.redhat.com/1635977 / replicate: Writes taking very long time 
leading to system hogging
https://bugzilla.redhat.com/1635863 / rpc: Gluster peer probe doesn't work for 
IPv6
https://bugzilla.redhat.com/1631437 / rpc: Mount process hangs if remote server 
is unavailable
https://bugzilla.redhat.com/1633926 / scripts: Script to collect system-stats
https://bugzilla.redhat.com/1630891 / sharding: sharding: nfstest creat gets 
incorrect file size
https://bugzilla.redhat.com/1632935 / tiering: "peer probe" rejected after a 
tier detach commit: "Version of Cksums nas-volume differ".
https://bugzilla.redhat.com/1627060 / trash-xlator: ./tests/features/trash.t 
test case failing on s390x
[...truncated 2 lines...]
