On 10 June 2015 at 12:24, Gabriel Kuri gk...@ieee.org wrote:
I need some clarification on how geo-replication (gluster >= 3.5)
operates, as I'm not fully understanding how the new and improved version
works from the docs.
Let's assume the following scenario: three servers set up in a geo-rep
On 06/10/2015 03:24 PM, Venky Shankar wrote:
I would like to propose Aravinda (avish...@redhat.com) as a maintainer
for Geo-replication. I can act as a backup maintainer in his absence.
Hope it's not too late for this proposal.
Thanks for the consideration, Venky. I think it would be apt to
On 06/10/2015 02:58 PM, Sergio Traldi wrote:
On 06/10/2015 10:27 AM, Krishnan Parthasarathi wrote:
Hi all,
I have two servers with 3.7.1 and have the same problem as this issue:
http://comments.gmane.org/gmane.comp.file-systems.gluster.user/20693
My servers packages:
# rpm -qa | grep gluster |
I would like to propose Aravinda (avish...@redhat.com) as a maintainer
for Geo-replication. I can act as a backup maintainer in his absence.
Hope it's not too late for this proposal.
-Venky
On Wed, Jun 3, 2015 at 5:20 PM, Vijay Bellur vbel...@redhat.com wrote:
Hi All,
To supplement our
Hi Guys,
I have a failed server in my gluster pool, and I am trying to install a new server to
replace that down server.
I installed the new server via oVirt (oVirt 3.5.2); the new server has the same IP,
name, etc. as the old server. The server state is “UP” in the oVirt web UI.
I try to peer probe it from one of the peer nodes,
Thanks, I'll do that. Is it possible/likely that this lower op-version is
also causing the issue I posted on gluster-users earlier: 0-rpcsvc: rpc
actor failed to complete successfully
https://www.mail-archive.com/gluster-users@gluster.org/msg20569.html
Any pointers on that would be greatly
On 06/10/2015 05:32 PM, Tiemen Ruiten wrote:
Hello Atin,
We are running 3.7.0 on our storage nodes and suffer from the same issue.
Is it safe to perform the same command or should we first upgrade to 3.7.1?
You should upgrade to 3.7.1
On 10 June 2015 at 13:45, Atin Mukherjee
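For what it's worth, a rough one-node-at-a-time upgrade sketch, assuming the EL packaging shown later in this thread (use "service glusterd stop/start" on EL6 instead of systemctl):
# systemctl stop glusterd
# yum -y update "glusterfs*"
# systemctl start glusterd
# gluster --version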
Hello Atin,
We are running 3.7.0 on our storage nodes and suffer from the same issue.
Is it safe to perform the same command or should we first upgrade to 3.7.1?
On 10 June 2015 at 13:45, Atin Mukherjee amukh...@redhat.com wrote:
On 06/10/2015 02:58 PM, Sergio Traldi wrote:
On 06/10/2015
In short, it would seem that were I to use geo-replication, whether
recommended or not for this kind of usage, I'd need to own both the decision of
which volume to mount and what to do with writes when the client has
chosen to mount the slave.
True. Various active/active geo-replication solutions
On 10 June 2015 at 14:06, Andreas Hollaus andreas.holl...@ericsson.com
wrote:
Hi,
I wonder if there are any plans for a C API which would let me manage my GlusterFS
volume (create volume, add brick, remove brick, examine status and so on)?
I am not aware of any plans for a C API. But there
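One common workaround in the absence of a management C API (a sketch, not an official interface): have the program shell out to the gluster CLI and parse its XML output; libgfapi only covers file I/O on an existing volume, not volume management. The volume name below is just a placeholder:
# gluster volume info myvol --xml
# gluster volume status myvol --xml
# gluster peer status --xml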
Hi,
I wonder if there are any plans for a C API which would let me manage my GlusterFS
volume (create volume, add brick, remove brick, examine status and so on)?
Regards
Andreas
On 06/10/2015 02:30 PM, 何亦军 wrote:
Hi Guys,
I have a failed server in my gluster pool, and I am trying to install a new server
to replace it.
I installed the new server via oVirt (oVirt 3.5.2); the new server has the same IP,
name, etc. as the old server. The server state is “UP” in the oVirt web UI.
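For reference, the usual sketch for bringing a failed peer back under the same hostname/IP is to reuse the old peer's UUID before probing again (standard glusterd paths below; the oVirt-specific steps may differ). On a healthy peer, the file under /var/lib/glusterd/peers/ that lists the dead server's hostname is named after its old UUID:
# grep -l <failed-hostname> /var/lib/glusterd/peers/*
Then on the freshly installed server:
# systemctl stop glusterd
# sed -i 's/^UUID=.*/UUID=<old-uuid-from-above>/' /var/lib/glusterd/glusterd.info
# systemctl start glusterd
And from a healthy peer:
# gluster peer probe <new-hostname>
# gluster peer status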
Hi,
yesterday I got a strange crash on almost all bricks, the same type of crash,
repeated:
[2015-06-09 18:23:56.407520] I [login.c:81:gf_auth] 0-auth/login: allowed user
names: c3deedb5-893f-41fb-8c33-9ae23a0e9d27
[2015-06-09 18:23:56.407580] I [server-handshake.c:585:server_setvolume]
Derek Yarnell derek@... writes:
We just upgraded to 3.7.0 from 3.4.2 last night (no quotas, no geosync)
and the volume came up and is available via NFS. But we can't get
status for the volume or any other administration. We have restarted
all the gluster daemons on both nodes in the
Hi all,
I have two servers with 3.7.1 and have the same problem as this issue:
http://comments.gmane.org/gmane.comp.file-systems.gluster.user/20693
My servers packages:
# rpm -qa | grep gluster | sort
glusterfs-3.7.1-1.el6.x86_64
glusterfs-api-3.7.1-1.el6.x86_64
glusterfs-cli-3.7.1-1.el6.x86_64
Hi all,
I have two servers with 3.7.1 and have the same problem as this issue:
http://comments.gmane.org/gmane.comp.file-systems.gluster.user/20693
My servers packages:
# rpm -qa | grep gluster | sort
glusterfs-3.7.1-1.el6.x86_64
glusterfs-api-3.7.1-1.el6.x86_64
On 06/10/2015 05:49 AM, Alessandro De Salvo wrote:
Hi,
I have enabled the full debug already, but I see nothing special. Before
exporting any volume the log shows no error, even when I do a showmount (the
log is attached, ganesha.log.gz). If I do the same after exporting a volume
On 06/10/2015 01:45 PM, Atin Mukherjee wrote:
gluster volume set all cluster.op-version 30701
Hi Atin,
thanks for your answer.
I tried the command you gave me but I got an error:
# gluster volume set all cluster.op-version 30701
volume set: failed: Required op_version (30701) is not
On 06/10/2015 07:25 PM, Sergio Traldi wrote:
On 06/10/2015 01:45 PM, Atin Mukherjee wrote:
gluster volume set all cluster.op-version 30701
Hi Atin,
thanks for your answer.
I tried the command you gave me but I got an error:
# gluster volume set all cluster.op-version 30701
volume
If I'm not mistaken, max op-version is still 30700, since this change
wasn't merged into 3.7.1:
http://review.gluster.org/#/c/10975/
see also: https://bugzilla.redhat.com/show_bug.cgi?id=1225999
On 10 June 2015 at 16:09, Atin Mukherjee amukh...@redhat.com wrote:
On 06/10/2015 07:25 PM,
Hi,
On 06/10/2015 04:17 PM, Tiemen Ruiten wrote:
If I'm not mistaken, max op-version is still 30700, since this change
wasn't merged into 3.7.1:
http://review.gluster.org/#/c/10975/
see also: https://bugzilla.redhat.com/show_bug.cgi?id=1225999
I set cluster.op-version to 30700 without
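For anyone hitting the same error: check the operating-version glusterd has persisted, then bump it to the highest value the installed build actually supports (30700 for 3.7.0/3.7.1):
# grep operating-version /var/lib/glusterd/glusterd.info
# gluster volume set all cluster.op-version 30700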
Hi Soumya,
OK, that trick worked, but now I'm back to the same situation of the
hanging showmount -e. Did you check the logs I sent yesterday? Now I'm
essentially back to the situation of the first log (ganesha.log.gz) in all
cases.
Thanks,
Alessandro
On Wed, 2015-06-10 at 15:28 +0530,
- Original Message -
From: Ravishankar N ravishan...@redhat.com
To: Andreas Hollaus andreas.holl...@ericsson.com,
gluster-users@gluster.org, Anuradha Talur ata...@redhat.com
Sent: Wednesday, June 10, 2015 7:09:34 AM
Subject: Re: [Gluster-users] Strange output from gluster volume
I need some clarification on how geo-replication (gluster >= 3.5) operates,
as I'm not fully understanding how the new and improved version works from
the docs.
Let's assume the following scenario: three servers set up in a geo-rep
cluster, all separated by a WAN:
server a -- WAN -- server b -- WAN -- server c
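For reference, a minimal geo-replication session setup (gluster >= 3.5) between a master volume and a slave volume looks roughly like this; the volume and host names are illustrative, and the create push-pem step assumes passwordless root SSH from a master node to the slave host:
# gluster system:: execute gsec_create
# gluster volume geo-replication mastervol serverb::slavevol create push-pem
# gluster volume geo-replication mastervol serverb::slavevol start
# gluster volume geo-replication mastervol serverb::slavevol status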
Regards,
Atin
Sent from Samsung Galaxy S4
On 10 Jun 2015 21:32, 何亦军 heyi...@greatwall.com.cn wrote:
Thanks Atin,
I tried stopping iptables on all nodes (systemctl stop iptables), but that
didn't fix the problem.
Are you able to ping this node?
From:
glusterfs doesn't support master-master yet. In your case, one of the
servers (A or B or C) should be a master and your client should write to
only that volume.
The other two volumes should be read-only until the volume on server A fails for
some reason.
So the writes from the client will go directly to
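If you go that route, the standby volumes can be marked read-only with the volume option below and flipped back after a failover (the volume name is illustrative):
# gluster volume set standbyvol features.read-only on
# gluster volume set standbyvol features.read-only off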
Thanks Atin,
I tried stopping iptables on all nodes (systemctl stop iptables), but that didn't
fix the problem.
From: Atin Mukherjee [amukh...@redhat.com]
Sent: 10 June 2015 17:04
To: 何亦军; gluster-users@gluster.org
Subject: Re: [Gluster-users] Replace the down
On 10 June 2015 at 22:38, Aravinda avish...@redhat.com wrote:
On 06/10/2015 09:43 PM, Gabriel Kuri wrote:
glusterfs doesn't support master-master yet. In your case, one of the
servers (A or B or C) should be a master and your client should write to
only that volume.
The other two volumes
Hi,
by looking at the connections I also see a strange problem:
# netstat -ltaupn | grep 2049
tcp6       4      0 :::2049                 :::*                    LISTEN      32080/ganesha.nfsd
tcp6       1      0 x.x.x.2:2049            x.x.x.2:33285           CLOSE_WAIT  -
tcp6       1      0
On 06/10/2015 09:43 PM, Gabriel Kuri wrote:
glusterfs doesn't support master-master yet. In your case, one of
the servers (A or B or C) should be a master and your client should
write to only that volume.
The other two volumes should be read-only until the volume on server A fails
for some reason.
Thanks for the replies, it helps to better understand the architecture.
Is master-master planned for geo-replication?
What are the downsides or problems associated with attempting to run a basic
replicated volume (synchronous) across servers in separate data centers
with a latency of 75 ms and about 1 Gbps
On 06/10/2015 07:15 AM, Jeff Darcy wrote:
In short, it would seem that were I to use geo-replication, whether
recommended or not for this kind of usage, I'd need to own both the decision of
which volume to mount and what to do with writes when the client has
chosen to mount the slave.
True. Various
Are you able to ping this node?
Yes, any node can ping any node, and all node names and IPs are included in every
node's /etc/hosts.
From: Atin Mukherjee [mailto:atin.mukherje...@gmail.com]
Sent: 11 June 2015 0:32
To: 何亦军
Cc: gluster-users@gluster.org; Atin Mukherjee
Subject: Re: [Gluster-users] Re:
- Original Message -
From: p...@email.cz
To: GLUSTER-USERS gluster-users@gluster.org
Sent: Wednesday, June 10, 2015 8:44:51 PM
Subject: [Gluster-users] DIRECT_IO_TEST in split brain
Hello,
please, how can I eliminate this split brain on
- centos 7.1
- glusterfs-3.7.1-1.el7.x86_64
Hi Atin,
I found that the ping problem is in the ifcfg-ovirtmgmt file.
I had changed the IP address in ifcfg-ovirtmgmt, which makes all the other nodes
unable to ping this node, though this node can ping the other nodes.
So, I will rebuild a new node server. Thanks.
From:
Hello,
please, how can I eliminate this split brain on
- centos 7.1
- glusterfs-3.7.1-1.el7.x86_64
# gluster volume heal R2 info
Brick cl1:/R2/R2/
/__DIRECT_IO_TEST__ - Is in split-brain
Number of entries: 1
Brick cl3:/R2/R2/
/__DIRECT_IO_TEST__ - Is in split-brain
Number of entries: 1
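With 3.7 this kind of file split-brain can usually be resolved from the CLI by choosing a source copy; a rough sketch using the brick and file names from the heal info output above (which copy should win is your call):
# gluster volume heal R2 split-brain source-brick cl1:/R2/R2 /__DIRECT_IO_TEST__
or, to keep whichever copy is larger:
# gluster volume heal R2 split-brain bigger-file /__DIRECT_IO_TEST__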