Re: [Gluster-devel] gluster volume stop and the regressions

2018-02-13 Thread Milind Changire
The volume stop, in brick-mux mode, reveals a race with my patch [1].
Although this behavior is 100% reproducible with my patch, that by no means
implies that the patch itself is buggy.

In brick-mux mode, during volume stop, when glusterd sends a brick-detach
message to the brick process for the last brick, the brick process responds
to glusterd with an acknowledgment and then kills itself with SIGTERM. All
this sounds fine. However, the response somehow doesn't reach glusterd;
instead, a socket disconnect notification reaches glusterd before the
response does. This causes glusterd to presume that something has gone wrong
during the volume stop, so glusterd fails the volume stop operation, causing
the test to fail.

This race is reproducible by running the test
tests/basic/distribute/rebal-all-nodes-migrate.t in brick-mux mode with my
patch [1] applied.

[1] https://review.gluster.org/19308
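
For anyone who wants to reproduce this locally, a rough sketch of the steps
(assuming a built source tree with patch [1] applied; the regression harness
may enable brick multiplexing differently):

    # Enable brick multiplexing cluster-wide before running the test
    gluster volume set all cluster.brick-multiplex on

    # Run the affected test the same way the regression suite does
    prove -vf tests/basic/distribute/rebal-all-nodes-migrate.t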


On Thu, Feb 1, 2018 at 9:54 AM, Atin Mukherjee  wrote:

> I don't think that's the right way. Ideally the test shouldn't be
> attempting to stop a volume if a rebalance session is in progress. If we do
> see such a situation even when we check the rebalance status, wait up to 30
> secs for it to finish, and volume stop still fails with a "rebalance session
> in progress" error, that means either (a) the rebalance session took longer
> than the timeout passed to EXPECT_WITHIN, or (b) there's a bug in the code.
>
> On Thu, Feb 1, 2018 at 9:46 AM, Milind Changire 
> wrote:
>
>> If a *volume stop* fails at a user's production site with a reason like
>> *rebalance session is active*, then the admin will wait for the session to
>> complete and then reissue a *volume stop*.
>>
>> So, in essence, the failed volume stop is not fatal; for the regression
>> tests, I would like to propose changing the single volume stop to an
>> *EXPECT_WITHIN 30* so that if a volume cannot be stopped even after 30
>> seconds, it can then be termed fatal in the regression scenario.
>>
>> Any comments about the proposal ?
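
For illustration, a minimal sketch of what the proposal could look like in a
.t test, assuming a hypothetical helper function (the real tests may structure
the retry differently):

    # Hypothetical helper: prints Y once 'volume stop' succeeds
    function volume_stop_complete {
            $CLI volume stop $V0 >/dev/null 2>&1 && echo "Y" || echo "N"
    }

    # Instead of a single, fatal attempt:
    #     TEST $CLI volume stop $V0
    # retry for up to 30 seconds before declaring the test failed:
    EXPECT_WITHIN 30 "Y" volume_stop_complete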
>>
>> --
>> Milind
>>
>>


-- 
Milind

[Gluster-devel] run-tests-in-vagrant

2018-02-13 Thread Nigel Babu
Hello,

CentOS CI has a run-tests-in-vagrant job. Do we still need it? It still runs
master and 3.8. I don't see this job adding much value at this point, given
that we only look at results on build.gluster.org. I'd like to use the extra
capacity for other tests that will run on centos-ci.

-- 
nigelb

Re: [Gluster-devel] Glusterfs and Structured data

2018-02-13 Thread Raghavendra G
I've started marking the "Whiteboard" field of bugs in this class with the
tag "GLUSTERFS_METADATA_INCONSISTENCY". Please add the tag to any bugs which
you deem to fit into this category.

On Fri, Feb 9, 2018 at 4:30 PM, Raghavendra Gowdappa 
wrote:

>
>
> - Original Message -
> > From: "Pranith Kumar Karampuri" 
> > To: "Raghavendra G" 
> > Cc: "Gluster Devel" 
> > Sent: Friday, February 9, 2018 2:30:59 PM
> > Subject: Re: [Gluster-devel] Glusterfs and Structured data
> >
> > On Thu, Feb 8, 2018 at 12:05 PM, Raghavendra G < raghaven...@gluster.com >
> > wrote:
> >
> > On Tue, Feb 6, 2018 at 8:15 PM, Vijay Bellur < vbel...@redhat.com > wrote:
> >
> > On Sun, Feb 4, 2018 at 3:39 AM, Raghavendra Gowdappa < rgowd...@redhat.com >
> > wrote:
> >
> > All,
> >
> > One of our users, while discussing an issue [2], pointed out that the
> > documentation says glusterfs is not good for storing "structured data" [1].
> >
> >
> > As far as I remember, the content around structured data in the Install
> > Guide is from a FAQ that was being circulated in Gluster, Inc. indicating
> > the startup's market positioning. Most of that was based on not wanting to
> > get into performance-based comparisons of storage systems that are
> > frequently seen in the structured data space.
> >
> >
> > Do any of you have more context on the feasibility of storing "structured
> > data" on Glusterfs? Is one of the reasons for such a suggestion the
> > "staleness of metadata" encountered in bugs like [3]?
> >
> >
> > There are challenges that distributed storage systems face when exposed
> > to applications that were written for a local filesystem interface. We
> > have encountered problems with applications like tar [4] that are not in
> > the realm of "Structured data". If we look at the common theme across all
> > these problems, it is related to metadata & read-after-write consistency
> > issues with the default translator stack that gets exposed on the client
> > side. While the default stack is optimal for other scenarios, it does seem
> > that a category of applications needing strict metadata consistency is not
> > well served by that. We have observed that disabling a few performance
> > translators and tuning cache timeouts for VFS/FUSE have helped to overcome
> > some of them. The WIP effort on timestamp consistency across the
> > translator stack, patches that have been merged as a result of the bugs
> > that you mention & other fixes for outstanding issues should certainly
> > help in catering to these workloads better with the file interface.
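
As an illustration, the tuning referred to above usually amounts to something
like the following (the option and mount-option names are standard, but which
of them to change is workload-dependent, so treat this as a sketch rather
than a recommendation):

    # Disable client-side performance translators on the volume
    gluster volume set <volname> performance.quick-read off
    gluster volume set <volname> performance.stat-prefetch off
    gluster volume set <volname> performance.read-ahead off
    gluster volume set <volname> performance.io-cache off
    gluster volume set <volname> performance.write-behind off

    # Reduce VFS/FUSE metadata caching on the client mount
    mount -t glusterfs -o attribute-timeout=0,entry-timeout=0 <server>:/<volname> /mnt/glusterfs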
> >
> > There are deployments that I have come across where glusterfs is used for
> > storing structured data. gluster-block & qemu-libgfapi overcome the
> > metadata consistency problem by exposing a file as a block device & by
> > disabling most of the performance translators in the default stack.
> > Workloads that have been deemed problematic with the file interface for
> > the reasons alluded to above function well with the block interface.
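
As a pointer for readers unfamiliar with gluster-block: a block device backed
by a file on the volume is created through its own CLI, roughly as below (the
exact syntax may vary between versions, so consult the gluster-block
documentation):

    # Create a 1 GiB block device named sample-block on volume <volname>,
    # exported from a single host (ha 1)
    gluster-block create <volname>/sample-block ha 1 <host-ip> 1GiB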
> >
> > I agree that gluster-block, due to its usage of a subset of glusterfs fops
> > (mostly reads/writes, I guess), runs into fewer consistency issues.
> > However, as you've mentioned, we seem to have disabled the perf xlator
> > stack in our tests/use-cases till now. Note that the perf xlator stack is
> > one of the worst offenders as far as metadata consistency is concerned
> > (with relatively fewer scenarios of data inconsistency). So, I wonder:
> > * what would be the scenario if we enable the perf xlator stack for
> > gluster-block?
> > * Is performance on gluster-block satisfactory so that we don't need these
> > xlators?
> > - Or is it that these xlators are not useful for the workload usually run
> > on gluster-block (for a random read/write workload, read/write caching
> > xlators offer little or no advantage)?
> >
> > Yes. They are not useful. Block/VM files are opened with O_DIRECT, so we
> > don't enable caching at any layer in glusterfs. md-cache could be useful
> > for serving fstat from glusterfs, but apart from that I don't see any
> > other xlator contributing much.
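
For reference, md-cache is driven by ordinary volume options, so an experiment
along the lines described above could look roughly like this (illustrative
values; upcall-based invalidation keeps the cached metadata coherent):

    # Cache stat/xattr metadata on the client (md-cache)
    gluster volume set <volname> performance.stat-prefetch on
    gluster volume set <volname> performance.md-cache-timeout 600

    # Keep the cache coherent via upcall notifications
    gluster volume set <volname> features.cache-invalidation on
    gluster volume set <volname> features.cache-invalidation-timeout 600
    gluster volume set <volname> performance.cache-invalidation on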
> >
> >
> >
> > - Or, theoretically, the workload ought to benefit from the perf xlators,
> > but we don't see that in our results (there are open bugs to this effect)?
> >
> > I am asking these questions to ascertain the priority of fixing the perf
> > xlators for (meta)data inconsistencies. If we offer a different solution
> > for these workloads, the need for fixing these issues will be less.
> >
> > My personal opinion is that both block and fs should work correctly, i.e.
> > caching xlators shouldn't lead to inconsistency issues.
>
> +1. That's my personal opinion too. We'll try to fix these issues.
> However, we need to qualify the fixes. It would be helpful if the community
> can help here. We'll let the community know when the fixes are in.
>

[Gluster-devel] Coverity covscan for 2018-02-13-e67cd078 (master branch)

2018-02-13 Thread staticanalysis
GlusterFS Coverity covscan results are available from
http://download.gluster.org/pub/gluster/glusterfs/static-analysis/master/glusterfs-coverity/2018-02-13-e67cd078