Saravana,
Thank you for working on this. We'll be considering this patch for 3.10.
On Thu, Oct 27, 2016 at 11:54 AM, Saravanakumar Arumugam <sarum...@redhat.com> wrote:
Hi,
I have refreshed this patch addressing review comments (originally
authored by Gaurav) which moves brick pid files from /var/lib/glusterd/*
to /var/run/gluster.
It will be great if you can review this:
http://review.gluster.org/#/c/13580/
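For illustration only, a minimal standalone C sketch of composing a per-brick
pidfile path under the new /var/run/gluster location; the directory layout and
helper name below are assumptions made for the example, not taken from the
patch or from glusterd itself:

#include <stdio.h>
#include <limits.h>

/* Base directory assumed for this example; the patch moves brick pid files
 * here from /var/lib/glusterd/*. */
#define GLUSTER_RUN_DIR "/var/run/gluster"

/* Hypothetical helper: build a per-brick pidfile path for a volume.
 * The actual naming scheme used by glusterd may differ. */
static void
brick_pidfile_path(char *buf, size_t len, const char *volname,
                   const char *brick_id)
{
        snprintf(buf, len, "%s/vols/%s/%s.pid", GLUSTER_RUN_DIR, volname,
                 brick_id);
}

int
main(void)
{
        char path[PATH_MAX];

        brick_pidfile_path(path, sizeof(path), "testvol",
                           "node1-bricks-brick1");
        printf("%s\n", path);
        return 0;
}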
Thank you
Regards,
Saravana
Found the RC. The problem seems to be that the sharding translator attempts to
create non-existent shards in read/write codepaths, with a newly generated gfid
attached to the create request in case the shard is absent. The replicate
translator, which sits below sharding on the stack, takes this request and
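To make the mechanism described above concrete, here is a minimal standalone C
sketch (link with -luuid); it models only the idea that a fresh gfid (a UUID)
is generated for a shard that does not yet exist and attached to its create
request, using the usual "<base-file-gfid>.<shard-index>" shard naming. It is
not the shard translator's actual code:

#include <stdio.h>
#include <uuid/uuid.h>

int
main(void)
{
        uuid_t base_gfid;        /* stands in for the gfid of the base file  */
        uuid_t proposed_gfid;    /* freshly generated gfid for the new shard */
        char base_str[37];
        char proposed_str[37];
        char shard_name[64];
        int shard_index = 7;     /* example: the 8th shard of the file       */

        uuid_generate(base_gfid);
        uuid_generate(proposed_gfid);
        uuid_unparse(base_gfid, base_str);
        uuid_unparse(proposed_gfid, proposed_str);

        /* Shards are conventionally named "<base-gfid>.<index>" (under the
         * hidden /.shard directory); a create for a missing shard carries a
         * newly generated gfid, which the lower (replicate) layer then acts
         * on. */
        snprintf(shard_name, sizeof(shard_name), "%s.%d", base_str,
                 shard_index);
        printf("missing shard %s -> create with proposed gfid %s\n",
               shard_name, proposed_str);
        return 0;
}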
Hi, Raghavendra,
Could you please give some suggestions for this issue? We have been trying to
find a clue for this issue for a long time, but have made no progress :(
Thanks & Best Regards,
George
-----Original Message-----
From: Lian, George (Nokia - CN/Hangzhou)
Sent: Wednesday, October 19, 2016 4:40 PM
* I've also posted this to my blog at https://kshlm.in/post/volgen-2.0/ *
# Designing and prototyping volgen for GD2
I've recently begun working on volgen for GD2. This post gives an
overview of what I've been doing till now.
## Some background first
One of the requirements for GD2 is to make
Now it's reproducible, thanks. :)
I think I know the RC. Let me confirm it through tests and report back.
-Krutika
On Thu, Oct 27, 2016 at 10:42 AM, qingwei wei wrote:
> Hi,
>
> I did a few more test runs and it seems that it happens during this sequence
>
> 1.populate data
BZ filed: https://bugzilla.redhat.com/show_bug.cgi?id=1389282
2016-10-27 16:46 GMT+05:30 Atin Mukherjee :
--
~ Atin (atinm)
Hi,
I have done some basic testing specific to the SSL component on the tarball
(http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.9.0rc2.tar.gz).
1) After enabling SSL (I/O and management encryption), mount is working (for
distributed/replicated volumes) and I am able to transfer data on the volume.
2) Reconnection is
This should work without any issues. It is possible that the shard(s)
would get created with different gfids but the ones on the lagging brick
will eventually (by the time heal-info returns all zeroes) get replaced
with shards having the correct gfids.
Have you tried it yet? Did you face any
Hi,
The final goal of my test is to see the impact of brick replacement
while IO is still running.
One scenario that I can think of is as below:
1. random read IO is performed on gluster volume (3 replicas)
2. 1 brick down and IO still ongoing
3. Perform brick replacement and IO still ongoing
4.
Ack on nfs-ganesha bits. Tentative ack on gnfs bits.
Conditional ack on build, see:
http://review.gluster.org/15726
http://review.gluster.org/15733
http://review.gluster.org/15737
http://review.gluster.org/15743
There will be backports to 3.9 of the last three soon. Timely reviews of
Tested the gbench script across 3 volume configurations: dist,
dist-replicate, and dist-ec. All passed. I did no negative (-ve) testing,
so there is nothing to report on that front.
This test is not on any list, just an additional test that I was able to
run during my day. It covers smallfile metadata
Ack for Geo-replication and Events API features. No regressions were found
during testing, and all the bug fixes made for Release-3.9 were verified.
regards
Aravinda
On Wednesday 26 October 2016 08:04 PM, Aravinda wrote:
Gluster 3.9.0rc2 tarball is available here
Hi,
There has been some work being done on improving small-file performance.
A few of the many steps were md-cache enhancements and the compound fops
implementation. Both of these will be available with the 3.9 release. There
are many more improvements planned [1].
[1]
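As a purely conceptual C sketch of why compound fops help small-file workloads
(it models only the batching idea of several operations sharing one network
round trip; it is not the actual GlusterFS compound-fops API):

#include <stdio.h>

/* Conceptual sketch only -- not the actual GlusterFS compound-fops API.
 * The idea: small-file workloads issue many tiny operations; sending them
 * as one compound request costs one network round trip instead of one
 * round trip per operation. */

enum fop_type { FOP_CREATE, FOP_WRITE, FOP_FSYNC };

struct fop {
        enum fop_type type;
        const char *target;
};

struct compound_req {
        struct fop ops[8];
        int count;
};

static void
send_compound(const struct compound_req *req)
{
        /* One round trip carries the whole batch. */
        printf("1 round trip carrying %d fops:\n", req->count);
        for (int i = 0; i < req->count; i++)
                printf("  fop type %d on %s\n", (int)req->ops[i].type,
                       req->ops[i].target);
}

int
main(void)
{
        struct compound_req req = {
                .ops = { { FOP_CREATE, "small-file-1" },
                         { FOP_WRITE, "small-file-1" },
                         { FOP_FSYNC, "small-file-1" } },
                .count = 3,
        };

        /* Without compounding, the same work would take 3 round trips. */
        send_compound(&req);
        return 0;
}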