On 08/08/2016 02:56 PM, David Gossage wrote:
On Mon, Aug 8, 2016 at 4:37 PM, David Gossage <[email protected]> wrote:

    On Mon, Aug 8, 2016 at 4:23 PM, Joe Julian <[email protected]> wrote:



        On 08/08/2016 01:39 PM, David Gossage wrote:
        So now that I have my cluster on 3.7.14, sharded, and
        working, I am of course looking for what to break next.

        Currently each of the 3 nodes is on a 6-disk (WD Red 1TB) raidz6
        (ZIL on mirrored SSD), which I'm thinking is more protection
        than I need with a 3-way replica.  I was going to change them
        one by one to basically RAID10, letting it heal in between.

        Is the best way to do that a systemctl stop glusterd? Should I
        just kill the brick process to simulate a brick dying, or is
        there an actual brick maintenance command?

        Just kill (-15) the brick process. That'll close the TCP
        connections and the clients will just go right on functioning
        off the remaining replicas. When you format and recreate your
        filesystem, it'll be missing the volume-id extended attribute,
        so to start it you'll need to force it:


Also, could I just do this from a different node?

getfattr  -n trusted.glusterfs.volume-id /srv/.bricks/www

Then on the node with new RAID10-backed disks:

setfattr -n trusted.glusterfs.volume-id -v 'value_from_other_brick' /srv/.bricks/www

Sure, but that's a lot more keystrokes and a lot more potential for human error.





           gluster volume start $volname force
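Pulling the steps above together, the swap on each node might look roughly like this (a sketch only; `myvol`, the brick PID, and the brick path are illustrative placeholders, and the storage-rebuild step depends entirely on your pool layout):

```shell
# Find the PID of this node's brick process for the volume
gluster volume status myvol

# Terminate the brick cleanly (SIGTERM); clients fail over to the
# remaining replicas
kill -15 <brick_pid>

# ... rebuild the underlying storage as RAID10 and remount it,
# then recreate the expected brick path ...
mkdir -p /srv/.bricks/www

# A plain start is refused because the fresh filesystem lacks the
# volume-id xattr, so force it
gluster volume start myvol force
```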


    If I leave the volume started when the brick process is killed and
    clients are still (in theory) connected to the volume, wouldn't that
    just give me an error that the volume is already started?


    I'd likely shut down the volume and schedule downtime for this
    anyway, letting the heals run with the VMs off.



        If /etc/glusterfs and /var/lib/glusterd are unchanged, will
        doing a full heal after rebooting or restarting glusterd take
        care of everything, provided I recreate the expected brick
        path first?

        Once started, perform a full heal to re-replicate.
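For reference, triggering and monitoring that full heal would look something like this (`myvol` is a placeholder volume name):

```shell
# Trigger a full heal so the rebuilt brick is re-populated from
# its replicas
gluster volume heal myvol full

# Watch progress; the entry counts drain to zero as files are
# re-replicated
gluster volume heal myvol info
```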


        Are the improvements in 3.8 for sharding significant enough
        that I should first look at updating to 3.8.2 when it's
        released in a few days?

        Yes.



        David Gossage
        Carousel Checks Inc. | System Administrator
        Office: 708.613.2284

        _______________________________________________
        Gluster-users mailing list
        [email protected]
        http://www.gluster.org/mailman/listinfo/gluster-users
