I upgraded my oVirt stack to 3.12.9, added a brick to a volume, and left
it to settle. No problems. I am now running replica 4 (preparing to
remove a brick and a host to go back to replica 3).
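The add-brick / settle / remove-brick sequence described above can be sketched with the standard gluster CLI. Volume and brick names here are hypothetical placeholders, not from the original message:

```shell
# Grow the replica set from 3 to 4 by adding a brick on a new host
# ("myvol" and the host/brick paths are illustrative).
gluster volume add-brick myvol replica 4 newhost:/bricks/brick1

# Let self-heal settle; wait until no entries remain to be healed.
gluster volume heal myvol info

# Later, shrink back to replica 3 by removing the old host's brick.
gluster volume remove-brick myvol replica 3 oldhost:/bricks/brick1 force
```

Checking `heal info` before the remove-brick step matters: removing a brick while heals are pending risks dropping the only good copy of some files.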
On Fri, 2018-05-04 at 14:24, Gandalf Corvotempesta wrote:
> On Fri, 4 May 2018 at 14:06, Jim Kinney
I do not see it working. The simplest test I could think of was local
mount points, but I do not see fuse/gluster mentioning any options or
any elaborate way to make it work, so on mount points, even for replica
volumes, I do not see glusterfs sending inotify events.
On 03/05/18 17:44, Joe Julian wrote:
There
It stopped being an outstanding issue at 3.12.7. I think it's now fixed.
On May 4, 2018 6:28:40 AM EDT, Gandalf Corvotempesta wrote:
>Hi to all,
>is the "famous" corruption bug when sharding is enabled fixed, or still
>a work in progress?
Hello gluster users and professionals,
We are running a gluster 3.10.10 distributed volume (9 nodes) using the
RDMA transport.
From time to time applications crash with I/O errors (can't access file)
and in the client logs we can see messages like:
[2018-05-04 10:00:43.467490] W [MSGID: 114031]
Hi to all,
is the "famous" corruption bug when sharding is enabled fixed, or still
a work in progress?
_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users
On Fri, 4 May 2018 at 14:06, Jim Kinney wrote:
> It stopped being an outstanding issue at 3.12.7. I think it's now fixed.
So, it is not possible to extend and rebalance a working cluster with
sharded data? Can someone confirm this? Maybe the ones that hit the