Hi Wannes Van Causbroeck,
Thanks to Atin! He seems to have fixed this here (under review):
http://review.gluster.org/8571
Regards,
-Prashanth Pai
- Original Message -
From: VAN CAUSBROECK Wannes wannes.vancausbro...@onprvp.fgov.be
To: gluster-users@gluster.org
Sent: Sunday, August 24,
Hi,
Probably, a really late reply but here you go:
This small Python script (gf_dm_hash.py) should get the hash of a filename:
http://joejulian.name/blog/dht-misses-are-expensive/
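
For illustration, here is a rough sketch (not taken from the blog post) of how that hash could be used to find which brick should hold a file. It assumes gf_dm_hash.py exposes a gf_dm_hashfn(name) function returning a 32-bit integer, and that the trusted.glusterfs.dht xattr on a brick directory decodes as four big-endian uint32s with the hash range in the last two fields; both are assumptions, so treat this as a sketch only:

import os
import struct

def dht_range(brick_dir):
    # Read the DHT layout xattr; the field interpretation is an assumption.
    raw = os.getxattr(brick_dir, "trusted.glusterfs.dht")
    _cnt, _type, start, stop = struct.unpack(">IIII", raw[:16])
    return start, stop

def find_brick(filename, brick_dirs, hashfn):
    # hashfn would be gf_dm_hashfn from the gf_dm_hash.py script above.
    h = hashfn(filename) & 0xFFFFFFFF
    for brick in brick_dirs:
        start, stop = dht_range(brick)
        if start <= h <= stop:
            return brick
    return None  # no range matched: the "DHT miss" case the post discusses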
Regards,
-Prashanth Pai
- Original Message -
From: david zhang700 david.zhang...@gmail.com
To: Ben Turner
The same goes for the WORM feature as well.
~Atin
On 09/05/2014 11:35 AM, Prashanth Pai wrote:
Hi Wannes Van Causbroeck,
Thanks to Atin! He seems to have fixed this here (under review):
http://review.gluster.org/8571
Regards,
-Prashanth Pai
- Original Message -
From: VAN
Hi all,
I ran the following test:
I created a GlusterFS replica volume (replica count 2) with two server
nodes (server A and server B), then mounted the volume on a client node.
Then I shut down the network on the server A node, and on the client node I copied a
dir (which has a lot of small
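
(The message is cut off in the archive. For what it's worth, a minimal Python sketch of the copy step described, one that records per-file errors instead of aborting so behavior during the outage stays visible, could look like this; all paths and structure are illustrative:)

import os
import shutil

def copy_tree(src, dst):
    # Copy many small files onto the mounted volume, collecting errors
    # rather than stopping at the first failure.
    errors = []
    for root, _dirs, files in os.walk(src):
        rel = os.path.relpath(root, src)
        os.makedirs(os.path.join(dst, rel), exist_ok=True)
        for name in files:
            try:
                shutil.copy2(os.path.join(root, name),
                             os.path.join(dst, rel, name))
            except OSError as err:
                errors.append((os.path.join(rel, name), err))
    return errors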
Funny you mention this, yesterday I enabled the WORM feature for the first
time and I noticed this too :) I was already frightened that my whole setup
was broken!
Good to know it's a known problem; it's not blocking us in any way...
Thanks!
-Original Message-
From: Atin Mukherjee
On 09/04/2014 10:45 PM, Pranith Kumar Karampuri wrote:
On 09/04/2014 10:34 PM, John Mark Walker wrote:
Thanks Pranith - has a bug been filed so we can track this?
https://bugzilla.redhat.com/show_bug.cgi?id=1138386
The bug is happening because of a synchronization problem between 'distribute'
On 05/09/2014, at 11:21 AM, Kaushal M wrote:
snip
As part of the first phase, we aim to delegate the distributed configuration
store. We are exploring consul [1]
Does this mean we'll need to learn Go as well as C and Python?
If so, that doesn't sound completely optimal. :/
That being said,
On 5 Sep 2014, at 12:21, Kaushal M kshlms...@gmail.com wrote:
- Peer membership management
- Maintains consistency of configuration data across nodes (distributed
configuration store)
- Distributed command execution (orchestration)
- Service management (manage GlusterFS daemons)
- Portmap
- Original Message -
On 05/09/2014, at 11:21 AM, Kaushal M wrote:
snip
As part of the first phase, we aim to delegate the distributed
configuration store. We are exploring consul [1]
Does this mean we'll need to learn Go as well as C and Python?
If so, that doesn't sound
- Original Message -
On 5 Sep 2014, at 12:21, Kaushal M kshlms...@gmail.com wrote:
- Peer membership management
- Maintains consistency of configuration data across nodes (distributed
configuration store)
- Distributed command execution (orchestration)
- Service management
Hi,
consider looking at this topic:
http://supercolony.gluster.org/pipermail/gluster-users/2014-September/018577.html
This guy ran lots of tests. It seems like 50 MB/s (or around 400 Mbps) is
the theoretical maximum for replica 1. I am waiting for comments on this
too.
2014-09-05 11:15
The libvirt log does not contain anything related. The error messages come
from dmesg of the virtual machine.
What I can see in the gluster logs is that the connection between the two peers
was lost. The physical connection, however, worked all the time.
On 5.9.2014 0:47, Joe Julian wrote:
That is about as
Isn't some of this covered by crm/corosync/pacemaker/heartbeat?
Sorta, kinda, mostly no. Those implement virtual synchrony, which is
closely related to consensus but not quite the same even in a formal CS
sense. In practice, using them is *very* different. Two jobs ago, I
inherited a design
As part of the first phase, we aim to delegate the distributed configuration
store. We are exploring consul [1] as a replacement for the existing
distributed configuration store (sum total of /var/lib/glusterd/* across all
nodes). Consul provides a distributed configuration store which is
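
(The message is truncated here. As a quick illustration of what a consul-backed store involves, the sketch below reads and writes a volume option through consul's HTTP KV API, where GET responses return values base64-encoded; the gluster/<volume>/<option> key scheme is made up for the example, not a proposed glusterd layout.)

import base64
import json
import urllib.request

CONSUL = "http://127.0.0.1:8500/v1/kv"

def put_option(volume, option, value):
    # PUT /v1/kv/<key> stores a raw value; consul replies "true" on success.
    url = "{0}/gluster/{1}/{2}".format(CONSUL, volume, option)
    req = urllib.request.Request(url, data=value.encode(), method="PUT")
    with urllib.request.urlopen(req) as resp:
        return resp.read().strip() == b"true"

def get_option(volume, option):
    # GET /v1/kv/<key> returns a JSON list; the Value field is base64-encoded.
    url = "{0}/gluster/{1}/{2}".format(CONSUL, volume, option)
    with urllib.request.urlopen(url) as resp:
        entry = json.loads(resp.read().decode())[0]
    return base64.b64decode(entry["Value"]).decode()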
On Wed, Sep 03, 2014 at 09:00:09PM +0530, M S Vishwanath Bhat wrote:
On 03/09/14 20:31, David F. Robinson wrote:
Is this bug-fix going to be in the 3.5.3 beta release?
Not sure. I will have to check that. AFAIK the patches are present upstream
and in the 3.6 branch. Is upgrading to 3.6 an option?
Hi Justin,
Regarding the server.py script: yes, that graph is live and it is refreshed
periodically. In the current model, the server script at the
backend polls the meta directory of mounted volumes repeatedly, calculates the
speed from the difference between successive values encountered, and
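
(The description is cut short, but the polling model it outlines is simple to sketch: sample a monotonically increasing counter at a fixed interval and derive a rate from consecutive samples. The counter path below is a placeholder, not the actual layout of the meta directory.)

import time

def sample(path):
    # Read a monotonically increasing byte counter; 'path' is a placeholder.
    with open(path) as f:
        return int(f.read().strip())

def poll_speed(path, interval=2.0):
    # Yield throughput in bytes/second from successive counter samples.
    prev = sample(path)
    while True:
        time.sleep(interval)
        cur = sample(path)
        yield (cur - prev) / interval
        prev = cur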
Hi all,
I've been trying to track down a bug that looks like data corruption coming
from the Gluster FUSE client. I'm pretty new to Gluster, so perhaps there are
better ways to get to the bottom of this.
I have a distributed, replicated volume that was initialized as 3.5.1 and it
has been
On 2014-09-04 16:31, Kaleb KEITHLEY wrote:
On 09/04/2014 06:18 PM, Ernie Dunbar wrote:
Hi, I just started using Gluster today to build a new fileserver, and so far
I'm impressed with the ease of set-up and configuration. However, my cluster
appears to be working normally, yet no
I have found that the O_APPEND flag is key to this failure - I had overlooked
that flag when reading the strace and trying to cobble together a minimal
reproduction.
I now have a small pair of Python scripts that can reliably reproduce this
failure.
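
(The poster's scripts aren't included in the digest; the sketch below shows the general shape such a reproducer could take: two processes append fixed-size records to one file through O_APPEND, possibly via different mounts, and a checker then looks for records that were lost or stomped. All names and sizes are illustrative.)

import os
import sys

RECORD = 64  # fixed record size, padded so offsets stay aligned

def append_records(path, tag, count):
    # Each O_APPEND write should land atomically at end-of-file.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
    try:
        for i in range(count):
            rec = ("%s:%d" % (tag, i)).encode().ljust(RECORD, b".")
            os.write(fd, rec)
    finally:
        os.close(fd)

def check(path, tags, count):
    # Report any record that never made it into the file.
    seen = set()
    with open(path, "rb") as f:
        while True:
            rec = f.read(RECORD)
            if not rec:
                break
            seen.add(rec.rstrip(b"."))
    expected = {("%s:%d" % (t, i)).encode()
                for t in tags for i in range(count)}
    print("missing records:", sorted(expected - seen) or "none")

if __name__ == "__main__":
    # e.g. run two appenders concurrently, then:  check <file> A,B 1000
    if sys.argv[1] == "append":
        append_records(sys.argv[2], sys.argv[3], int(sys.argv[4]))
    else:
        check(sys.argv[2], sys.argv[3].split(","), int(sys.argv[4]))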
On Sep 5, 2014, at 11:11 AM, mike
I have a replicated GlusterFS setup on 3 bricks (replica = 3). I have
client and server quorum turned on. I rebooted one of the 3 bricks. When it
came back up, the client started throwing error messages that one of the
files went into split brain.
This is a good example of how split brain
Thanks, Jeff, for the detailed explanation. You mentioned that the delayed
changelog may have prevented this issue. Can you please tell me how to enable it?
Thanks
Ramesh
On Sep 5, 2014, at 6:23 PM, Jeff Darcy jda...@redhat.com wrote:
I have a replicated GlusterFS setup on 3 bricks (replica = 3). I
I have narrowed down the source of the bug.
Here is an strace of glusterfsd: http://fpaste.org/131455/40996378/
The first line represents a write that does *not* make it into the underlying
file.
The last line is the write that stomps the earlier write.
As I said, the client file is opened in