All,
A quick question:
how can I get the "Gluster process" field to be wider when doing a
"gluster volume status" command?
It word-wraps that field, so I end up with 2 lines for some bricks and 1
for others, depending on the length of the path to the brick or hostname.
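As far as I know the CLI's column widths are fixed, but the XML output carries the full, untruncated values, so one workaround is to reformat those yourself. A sketch, assuming a placeholder volume name "myvol" (the XML element names here are an assumption, so check them against your own `--xml` output):

```shell
# Sketch: pull the untruncated hostname/path pairs out of the XML
# status output instead of fighting the fixed-width table.
# "myvol" is a placeholder volume name.
gluster --xml volume status myvol \
  | xmllint --xpath '//node/*[self::hostname or self::path]/text()' -
```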
filesystems within that for each brick. I can reserve a
specific amount of space in the pool for each brick and that can be
modified as well.
It is easy to grow it, too. Plus, configured right, ZFS parallelizes I/O
across all the disks, so you get a speedup in performance.
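The per-brick reservation scheme described above can be sketched with standard zfs commands; the pool and dataset names here ("tank", "brick1") and the sizes are placeholders, not anything from the original setup:

```shell
# One ZFS dataset per brick, with a guaranteed (and adjustable) slice
# of the pool; "tank" and "brick1" are placeholder names.
zfs create tank/brick1
zfs set reservation=500G tank/brick1   # guarantee this much pool space
zfs set quota=500G tank/brick1         # optionally cap it, too
zfs set reservation=750G tank/brick1   # reservations can be grown later
```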
Brian Andrus
On 8/24/2018
, glusterfsd and anything else
with "gluster"
Then just start glusterd and glusterfsd. Once they are up, you should be
able to run the heal.
If you can't tell which it is and you are able to take gluster offline
for users for a moment, run that process on all your brick servers.
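On a systemd host, the stop/start/heal cycle above might look like this per brick server; the volume name "myvol" is a placeholder:

```shell
# Per brick server: stop everything gluster-related, restart, then heal.
systemctl stop glusterd
pkill glusterfs                  # any remaining client/shd processes
pkill glusterfsd                 # brick daemons
systemctl start glusterd         # glusterd respawns the brick daemons
gluster volume heal myvol        # kick off the self-heal
gluster volume heal myvol info   # watch the entry count drop
```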
Brian Andrus
heal even in a 3-way replicate?
Brian Andrus
[2018-06-27 14:16:00.075738] I [MSGID: 114046]
[client-handshake.c:1176:client_setvolume_cbk] 0-GDATA-client-12: Connected to
GDATA-client-12, attached to remote volume '/GLUSTER/brick1'.
[2018-06-27 14:16:00.075755] I [MSGID: 108005]
/brick1/
/resv_state/ - Is in split-brain
/node_state/
/job_state.old/
/node_state.old/
Status: Connected
Number of entries: 5
So, how do I clean those up so they aren't showing up anywhere at all?
Brian Andrus
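For the entries flagged "Is in split-brain", the CLI-based resolution commands can be sketched like this. The volume name GDATA and the brick path come from the log lines above; "server1" is a placeholder, and which resolution policy is right depends on your data:

```shell
# List the split-brain entries, then resolve each one with a policy.
gluster volume heal GDATA info split-brain
# e.g. keep the copy with the newest mtime:
gluster volume heal GDATA split-brain latest-mtime /resv_state
# or name an explicit winning brick ("server1" is a placeholder):
gluster volume heal GDATA split-brain source-brick server1:/GLUSTER/brick1 /resv_state
```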
on it.
Any ideas out there?
Brian Andrus
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users
All,
With Gluster 4.0.2, is it possible to take an existing distributed
volume and turn it into a distributed-replicate by adding servers/bricks?
It seems this should be possible, but I don't know whether anything has
been done to get it there.
Brian Andrus
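If it is supported in that release, the conversion would follow the usual add-brick-with-replica-count pattern: raise the replica count while adding one new brick to pair with each existing one. A sketch with placeholder volume, hosts, and paths:

```shell
# Turn a 2-brick distribute volume into 2x2 distribute-replicate by
# raising the replica count while adding a new brick per existing one.
gluster volume add-brick myvol replica 2 \
    server3:/bricks/b1 server4:/bricks/b2
gluster volume info myvol   # Type should now read Distributed-Replicate
```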
/rebuild time.
All the best,
Brian Andrus