Hi Jan,
         Are you doing this as part of erasing the underlying disk (which we
call a "reset brick"), or are you replacing the brick entirely with a new one?
If you are replacing rather than resetting, then since you are on version
3.7.6 you can use the CLI directly and skip all of these steps.

Just use the single command "gluster volume replace-brick <volname>
<old-brick> <new-brick> commit force".
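For example, on the "dr" volume shown below, replacing Brick1 might look like this. This is only a sketch: the new brick path /glu1-new is hypothetical, so substitute whatever path you have prepared on the replacement disk.

```shell
# Sketch, assuming Brick1 (gluster01:/glu1) is being replaced and a new
# empty brick directory /glu1-new exists on gluster01 -- the new path is
# hypothetical, adjust it to your own layout.
gluster volume replace-brick dr gluster01:/glu1 gluster01:/glu1-new commit force

# Afterwards, check that the new brick is online and self-heal is running:
gluster volume status dr
gluster volume heal dr info
```

The replace-brick command triggers self-heal to the new brick automatically on recent 3.7.x releases, which is why the manual setfattr/setextattr dance from the older procedure is no longer needed.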

We never tested it on FreeBSD though, so it may be a good idea to try this
out in a test environment before doing it on your production setup. This
code path does get exercised in the NetBSD regression tests, but I am not
sure that is good enough for FreeBSD.


+Anuradha,
       Could you update the readthedocs documentation with the release
details after which just executing replace-brick is sufficient? As per
git log, v3.7.3 has your patch.

On Tue, Aug 23, 2016 at 6:18 AM, Jan Michael Martirez <[email protected]>
wrote:

> I can't use replace-bricks.
>
> I followed this tutorial: https://gluster.readthedocs.io/en/latest/
> Administrator%20Guide/Managing%20Volumes/#replace-brick
>
> Volume Name: dr
> Type: Distributed-Replicate
> Volume ID: 0ce3038c-55c6-4a4e-9b97-22269bce9d11
> Status: Started
> Number of Bricks: 2 x 2 = 4
> Transport-type: tcp
> Bricks:
> Brick1: gluster01:/glu1
> Brick2: gluster02:/glu2
> Brick3: gluster03:/glu3
> Brick4: gluster04:/glu4
> Options Reconfigured:
> features.shard-block-size: 4MB
> features.shard: on
> performance.readdir-ahead: on
>
> I'm stuck with setfattr. I'm using FreeBSD, so I use setextattr instead.
>
> root@gluster01:/mnt/fuse # setextattr system wheel abc /mnt/fuse
> setextattr: /mnt/fuse: failed: Operation not supported
>
> root@gluster01:/mnt/fuse # glusterd --version
> glusterfs 3.7.6 built on Jul 13 2016 20:32:46
>
> _______________________________________________
> Gluster-users mailing list
> [email protected]
> http://www.gluster.org/mailman/listinfo/gluster-users
>



-- 
Pranith
