- Original Message -
> From: "Emmanuel Dreyfus"
> To: "Niels de Vos"
> Cc: gluster-in...@gluster.org, gluster-devel@gluster.org
> Sent: Sunday, January 17, 2016 10:23:16 AM
> Subject: Re: [Gluster-devel] [Gluster-infra] NetBSD regression fixes
>
> Niels de Vos wrote:
>
> > > 2) Spuri
On 01/21/2016 12:20 PM, Atin Mukherjee wrote:
Etherpad link please?
Oops, my bad. Here it is: https://public.pad.fsfe.org/p/NSR_name_suggestions
Thanks for the suggestion Pranith. To make things interesting, we have
created an etherpad where people can put their suggestions. Somewhere
around mid of feb, we will look at all the suggestions we have got, have
a community vote and zero in on one. The suggester of the winning name
gets a goo
On 01/18/2016 02:28 PM, Oleksandr Natalenko wrote:
XFS. Server side works OK, I'm able to mount volume again. Brick is 30% full.
Oleksandr,
Will it be possible to get the statedump of the client, bricks
output next time it happens?
https://github.com/gluster/glusterfs/blob/master/doc/
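For reference, a statedump can be requested by sending SIGUSR1 to the gluster process (the dump is written under /var/run/gluster by default, per the statedump doc linked above). A minimal sketch; the demo target here is a stand-in `sleep` process, not a real brick or client:

```python
import os
import signal
import subprocess
import time

def request_statedump(pid):
    """Ask a gluster process to write a statedump.

    Gluster processes install a SIGUSR1 handler that dumps their
    state (by default under /var/run/gluster)."""
    os.kill(pid, signal.SIGUSR1)

# Demo against a stand-in process. A real glusterfs/glusterfsd
# process would keep running and write the dump; sleep has no
# handler, so it simply terminates on the signal.
proc = subprocess.Popen(["sleep", "30"])
time.sleep(0.2)
request_statedump(proc.pid)
ret = proc.wait()
```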
hey,
Which process is consuming so much CPU? I went through the logs
you gave me. I see that the following files are in gfid mismatch state:
<066e4525-8f8b-43aa-b7a1-86bbcecc68b9/safebrowsing-backup>,
<1d48754b-b38c-403d-94e2-0f5c41d5f885/recovery.bak>,
Could you give me the output of
The two favorite current marketing buzzwords seem to be "Hyperconverged"
and "Technology", so if we could work those in somewhere it might make
it seem more hip. Maybe "Hyperconverged Replication with Leader Technology".
On 01/20/16 20:38, Pranith Kumar Karampuri wrote:
I would expect cluster.lookup-optimize to be creating the problem here, so
maybe you could first try with this option off. Another thing that would be
helpful is to get the strace when rm fails with "no such file", as this would
help us identify whether the readdir is not returning the entry or is
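For what it's worth, the two checks suggested above could look like this; the volume name and mount path are placeholders, and getdents64 is the Linux syscall behind readdir:

```shell
# Turn the suspect option off for the volume (volume name is a placeholder):
gluster volume set myvol cluster.lookup-optimize off

# Capture the relevant syscalls when rm fails, to see whether readdir
# (getdents64) ever returns the leftover entry before the unlink/rmdir:
strace -f -e trace=getdents64,unlinkat,rmdir rm -rf /mnt/myvol/testdir
```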
On 01/19/2016 08:00 PM, Avra Sengupta wrote:
Hi,
The leader-election-based replication has been called NSR or "New
Style Replication" for a while now. We would like to have a new name
for it that's less generic. It can be something like "Leader
Driven Replication" or something more sp
Sakshi Bansal wrote:
> The directory deletion is failing with ENOTEMPTY since not all the files
> inside it have been deleted. Looks like lookup is not listing all the files.
> It is possible that cluster.lookup-optimize could be the culprit here. When
> did you turn this option 'on'? Was it during
Vijay Bellur wrote:
> Does not look like a DNS problem. It is happening to me outside of
> rackspace too.
I mean I have already seen rackspace VMs failing to initiate connections
because rackspace DNS failed to answer DNS requests. This was the cause
of failed regressions at some point.
- Original Message -
> From: "Emmanuel Dreyfus"
> To: "Vijay Bellur" , "Pranith Kumar Karampuri"
>
> Cc: "Gluster Devel" , "Gluster Infra"
>
> Sent: Wednesday, January 20, 2016 9:10:10 PM
> Subject: Re: [Gluster-devel] Netbsd regressions are failing because of
> connection pr
Vijay Bellur wrote:
> There is some problem with review.gluster.org now. git clone/pull fails
> for me consistently.
First check that DNS is working. I recall seeing the rackspace DNS failing
to answer.
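A quick way to separate a DNS failure from a Gerrit/git problem is to check name resolution directly. A small sketch; the localhost probe is just a sanity check of the resolver itself:

```python
import socket

def resolves(host):
    """Return True if the host name resolves to at least one address."""
    try:
        socket.getaddrinfo(host, None)
        return True
    except socket.gaierror:
        return False

# Sanity check that resolution works at all in this environment:
print(resolves("localhost"))
# Then probe the host git is failing on:
# print(resolves("review.gluster.org"))
```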
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org
- Original Message -
> From: "Pranith Kumar Karampuri"
> To: "Gluster Devel" , "Gluster Infra"
> , "Emmanuel Dreyfus"
>
> Sent: Wednesday, January 20, 2016 7:35:49 PM
> Subject: [Gluster-devel] Netbsd regressions are failing because of
> connection problems?
>
ERROR: Error cloning remote repo 'origin'
hudson.plugins.git.GitException: Command "git -c core.askpass=true fetch
--tags --progress git://review.gluster.org/glusterfs.git
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout:
stderr: fatal: unable to connect to review
Hi Niels,
Here is something for DHT2:
DHT2:
* Why DHT2:
To address consistency and correctness of operations, and complexity and
scale requirements in the gluster IO path, while keeping performance
characteristics unchanged.
The issue with current DHT is that a directory is present in all
subvolume
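As a toy illustration of the pain point above (this is not the actual DHT layout algorithm, which uses per-directory hash ranges, and the subvolume names are made up):

```python
import hashlib

SUBVOLUMES = ["subvol-0", "subvol-1", "subvol-2"]

def file_subvolume(name):
    """Toy stand-in for DHT's hash-based placement: a file name maps
    to exactly one subvolume. Real DHT assigns hash ranges per
    directory rather than using a simple modulo."""
    h = int(hashlib.md5(name.encode()).hexdigest(), 16)
    return SUBVOLUMES[h % len(SUBVOLUMES)]

# A file lands on one subvolume...
print(file_subvolume("recovery.bak"))
# ...but a directory must exist on *every* subvolume, which is the
# consistency/scale cost DHT2 aims to remove.
```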
So what about IPv6?!
Bishoy
On Tuesday, January 19, 2016, Vijay Bellur wrote:
> On 01/11/2016 04:22 PM, Vijay Bellur wrote:
>
>> Hi All,
>>
>> We discussed the following proposal for 3.8 in the maintainers mailing
>> list and there was general consensus about the changes being a step in
>> the
> On Tue, 19 Jan 2016 01:01:19 -0500 (EST), Raghavendra Gowdappa said:
>
> - Original Message -
> > From: "Rick Macklem"
> > To: "Raghavendra Gowdappa"
> > Cc: "Jeff Darcy" , "Raghavendra G"
> > , "freebsd-fs"
> > , "Hubbard Jordan" , "Xavier
> > Hernandez" , "Gluster
> > Devel"
>
On Tuesday, 19 January 2016 at 10:30 +0530, Ravishankar N wrote:
> https://build.gluster.org/job/glusterfs-devrpms-el7/6734/console
> --
>
> Error Summary
> -
> Disk Requirements:
>At least 104MB more space needed on the / file
Due to the low number of participants and no one to chair.
Thanks.
Venky
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel
Eventing Framework for Gluster:
---
Let us imagine we have a Gluster monitoring system which displays the
list of volumes and their state. To show realtime status, the monitoring
app needs to query Gluster at regular intervals to check volume
status, new volumes etc. Assume
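As a sketch of the polling approach such a framework would replace, here is a hypothetical parser over `gluster volume info`-style output; the sample text is made up for illustration:

```python
# Sample output in the style of `gluster volume info`; the exact
# text here is invented for illustration.
SAMPLE = """\
Volume Name: vol1
Status: Started

Volume Name: vol2
Status: Stopped
"""

def parse_volume_status(text):
    """Map volume name -> status from `gluster volume info`-style output."""
    status = {}
    name = None
    for line in text.splitlines():
        if line.startswith("Volume Name:"):
            name = line.split(":", 1)[1].strip()
        elif line.startswith("Status:") and name:
            status[name] = line.split(":", 1)[1].strip()
    return status

# A polling monitor would diff successive snapshots to detect new
# volumes or status changes -- exactly the work an eventing
# framework would push to it instead.
print(parse_volume_status(SAMPLE))
```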
1. Why GlusterD 2.0
- Gluster has come a long way as the POSIX-compliant distributed
filesystem for small to medium sized clusters (10s-100s of nodes).
Gluster.Next is a collection of improvements to push Gluster's
capabilities to cloud-scale (read 1000s of nodes). GlusterD 2.0, the
next vers
sharding:
http://blog.gluster.org/2015/12/introducing-shard-translator/
http://blog.gluster.org/2015/12/sharding-what-next-2/
-Krutika
- Original Message -
> From: "Niels de Vos"
> To: gluster-devel@gluster.org
> Sent: Wednesday, January 20, 2016 4:11:42 PM
> Subject: [Gluster-devel
Sorry for the delay in response.
On 01/15/2016 02:34 PM, li.ping...@zte.com.cn wrote:
The GLUSTERFS_WRITE_IS_APPEND setting in the afr_writev function at the
glusterfs client end makes posix_writev at the server end handle IO write
fops serially instead of in parallel.
i.e. multiple io-worker
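A language-neutral sketch of that parallel-to-serial effect; this illustrates serializing concurrent writers behind one lock and is not the actual posix_writev code:

```python
import threading

log = []                       # stands in for the file being appended to
write_lock = threading.Lock()

def serialized_write(worker_id, data):
    """Many io-worker threads may call this in parallel, but the lock
    forces the actual writes to land one at a time, analogous to how
    an append-ordering guarantee serializes the server-side writes."""
    with write_lock:
        log.append((worker_id, data))

threads = [threading.Thread(target=serialized_write, args=(i, "chunk"))
           for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(log))  # all 8 writes landed, one at a time
```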
http://www.gluster.org/pipermail/gluster-devel/2015-September/046773.html
Pranith
Hi all,
on Saturday the 30th of January I am scheduled to give a presentation
titled "Gluster roadmap, recent improvements and upcoming features":
https://fosdem.org/2016/schedule/event/gluster_roadmap/
I would like to ask from all feature owners/developers to reply to this
email with a short
Adding http://review.gluster.org/#/c/13119/ to the list. Hopefully it
will go in today.
Yes, there are a couple of messages like this in my logs too (I guess one
message per each remount):
===
[2016-01-18 23:42:08.742447] I [fuse-bridge.c:3875:notify_kernel_loop] 0-
glusterfs-fuse: kernel notifier loop terminated
===
On Wednesday, 20 January 2016 at 09:51:23 EET, Xavier Hernandez wrote:
>
I'm seeing a similar problem with 3.7.6.
This latest statedump contains a lot of gf_fuse_mt_invalidate_node_t
objects in fuse. Looking at the code, I see they are used to send
invalidations to kernel fuse; however, this is done in a separate thread
that writes a log message when it exits. On the
Pranith Kumar Karampuri wrote:
https://public.pad.fsfe.org/p/glusterfs-3.7.7 is the final list of
patches I am waiting for before making 3.7.7 release.
Please let me know if I need to wait for any other patches. It would be
great if we make the tag tomorrow.
Backport of http://review.gluster