[ovirt-users] Re: oVirt over gluster: Replacing a dead host

2022-07-23 Thread Strahil Nikolov via Users
Yes. The arbiter is useless with only 1 data brick, as it contains only
metadata, and if you have 1 full brick (data + metadata) and 1 arbiter (only
metadata) in the volume, you could end up in a metadata split-brain (the risk
is the same as with replica 2 volumes).
Once you have the new data brick, you can revert the process and increase the
replica count:

gluster volume add-brick VOLUME replica 3 arbiter 1 host1:/path/to/data/brick \
  host3:/path/to/arbiter/brick
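
After the add-brick, the new bricks still have to heal from the surviving
data brick before the volume is fully redundant again. A quick way to watch
that (just a sketch, using the GV2Data volume name from this thread):

gluster volume heal GV2Data info

Once "Number of entries" reports 0 for every brick, the heal is done.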
Best Regards,
Strahil Nikolov

On Sat, Jul 23, 2022 at 12:07, Gilboa Davara wrote:


[ovirt-users] Re: oVirt over gluster: Replacing a dead host

2022-07-23 Thread Gilboa Davara
Hello,

Many thanks for your reply.
So not only did I need to remove the broken brick
(office-wx-hv1-lab-gfs:/mnt/LogGFSData/brick), I should have also removed
the active arbiter (office-wx-hv3-lab-gfs:/mnt/LogGFSData/brick)?

Thanks again,
Gilboa

On Fri, Jul 22, 2022 at 10:09 PM Strahil Nikolov 
wrote:

> There is no need to stop the volume, the operation can be done online.
>
> gluster volume remove-brick GV2Data replica 1
> office-wx-hv1-lab-gfs:/mnt/LogGFSData/brick
> office-wx-hv3-lab-gfs:/mnt/LogGFSData/brick force
>
> replica 1 -> remaining copies ( only Brick 2 remains)
>
> When you get a replacement brick, define the data brick first and the arbiter
> second (don't forget to include the replica 3 arbiter 1 change /I think this
> notation is still valid/).
>
>
> Best Regards,
> Strahil Nikolov
>
>
> On Thu, Jul 21, 2022 at 14:09, Gilboa Davara
>  wrote:


[ovirt-users] Re: oVirt over gluster: Replacing a dead host

2022-07-22 Thread Strahil Nikolov via Users
There is no need to stop the volume, the operation can be done online.
gluster volume remove-brick GV2Data replica 1 
office-wx-hv1-lab-gfs:/mnt/LogGFSData/brick 
office-wx-hv3-lab-gfs:/mnt/LogGFSData/brick force
replica 1 -> remaining copies (only Brick 2 remains)
When you get a replacement brick, define the data brick first and the arbiter
second (don't forget to include the replica 3 arbiter 1 change /I think this
notation is still valid/).
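
For the volume in this thread that would look roughly like the sketch below
(hedged: /gluster/brick/data/brick is the new path Gilboa mentioned for the
rebuilt host, and the old arbiter directory on hv3 may need to be wiped or
recreated before gluster will accept it again):

gluster volume add-brick GV2Data replica 3 arbiter 1 \
  office-wx-hv1-lab-gfs.localdomain:/gluster/brick/data/brick \
  office-wx-hv3-lab-gfs:/mnt/LogGFSData/brick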

Best Regards,
Strahil Nikolov

On Thu, Jul 21, 2022 at 14:09, Gilboa Davara wrote:


[ovirt-users] Re: oVirt over gluster: Replacing a dead host

2022-07-21 Thread Gilboa Davara
Hello,


$ gluster volume info GV2Data
 Volume Name: GV2Data
Type: Replicate
Volume ID: c1946fc2-ed94-4b9f-9da3-f0f1ee90f303
Status: Stopped
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: office-wx-hv1-lab-gfs:/mnt/LogGFSData/brick  <-- This is the
dead host.
Brick2: office-wx-hv2-lab-gfs:/mnt/LogGFSData/brick
Brick3: office-wx-hv3-lab-gfs:/mnt/LogGFSData/brick (arbiter)

I tried to remove the dead brick, and failed.

$ gluster volume remove-brick GV2Data replica 1 \
    office-wx-hv1-lab-gfs:/mnt/LogGFSData/brick force
...volume remove-brick start: failed: need 2(xN) bricks for reducing replica
count of the volume from 3 to 1

What was I doing wrong?
(Assuming my original configuration, 2 + 1, was acceptable).

- Gilboa

On Wed, Jul 20, 2022 at 7:14 PM Strahil Nikolov 
wrote:

> The remove-brick expects the bricks that have to be removed.
> Yet, you specified 1 brick, so 2 are left in the volume -> yet you
> specified 'replica 1'.
> Define both the data brick and the arbiter brick.
>
> Best Regards,
> Strahil Nikolov
>
> On Wed, Jul 20, 2022 at 13:34, Gilboa Davara
>  wrote:


[ovirt-users] Re: oVirt over gluster: Replacing a dead host

2022-07-20 Thread Strahil Nikolov via Users
The remove-brick expects the bricks that have to be removed. You specified
1 brick, so 2 are left in the volume, yet you specified 'replica 1'. Define
both the data brick and the arbiter brick.
Best Regards,
Strahil Nikolov

On Wed, Jul 20, 2022 at 13:34, Gilboa Davara wrote:


[ovirt-users] Re: oVirt over gluster: Replacing a dead host

2022-07-20 Thread Gilboa Davara
Hello,

Tried it:

$ gluster volume remove-brick GV2Data replica 1 \
    office-wx-hv1-lab-gfs:/mnt/LogGFSData/brick force
Running remove-brick with cluster.force-migration enabled can result in
data corruption. It is safer to disable this option so that files that
receive writes during migration are not migrated.
Files that are not migrated can then be manually copied after the
remove-brick commit operation.
Do you want to continue with your current cluster.force-migration settings?
(y/n) y
volume remove-brick start: failed: need 2(xN) bricks for reducing replica
count of the volume from 3 to 1

- Gilboa


On Tue, Jul 19, 2022 at 11:36 PM Strahil Nikolov 
wrote:

> Replacing a dead brick in a 2+1 scenario (2 data + 1 arbiter brick)
> requires reducing the replica count to 1 by removing the dead brick and the
> arbiter.
> Use the force option, as you are not using a distributed-replicated volume.
>
> Best Regards,
> Strahil Nikolov
>
> On Mon, Jul 18, 2022 at 11:36, Patrick Hibbs
>  wrote:


[ovirt-users] Re: oVirt over gluster: Replacing a dead host

2022-07-19 Thread Strahil Nikolov via Users
Replacing a dead brick in a 2+1 scenario (2 data + 1 arbiter brick) requires
reducing the replica count to 1 by removing the dead brick and the arbiter.
Use the force option, as you are not using a distributed-replicated volume.
Best Regards,
Strahil Nikolov

On Mon, Jul 18, 2022 at 11:36, Patrick Hibbs wrote:


[ovirt-users] Re: oVirt over gluster: Replacing a dead host

2022-07-18 Thread Patrick Hibbs
If I'm understanding your question / setup correctly, the best way
would be to simply mount -o bind the old path to the new one. The old
path would still be used by gluster, but it would ultimately go to the
new location.
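
Roughly, using the two paths from this thread (a sketch only; adjust to
whichever directory level actually holds the brick on the rebuilt host):

mkdir -p /mnt/LogGFSData/brick
mount -o bind /gluster/brick/data/brick /mnt/LogGFSData/brick

# make the bind mount persistent across reboots
echo '/gluster/brick/data/brick /mnt/LogGFSData/brick none bind 0 0' >> /etc/fstab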

Changing the brick path on a single brick while leaving the original
path intact on others may be possible (the gluster docs are silent
either way), but I'd doubt oVirt would work correctly in that
circumstance.
Maybe Sandro can point us at someone who would know more on that
subject? :)

If it is possible to do (standard disclaimer: I'm *NOT* recommending anyone
do it), I'd imagine you'd need to follow instructions similar to those at
https://serverfault.com/questions/631365/rename-a-glusterfs-peer, replacing
the peer name with the brick directory.

-Patrick Hibbs

On Mon, 2022-07-18 at 08:19 +0300, Gilboa Davara wrote:
> Hello,
> 
> Many thanks for your email.
> I should add that this is a test environment we set up in preparation
> for a planned CentOS 7 / oVirt 4.3 upgrade to CentOS 8 Streams /
> oVirt 4.5 upgrade in one of our old(er) oVirt clusters.
> In this case, we blew up the software RAID during the OS replacement
> (CentOS 7 -> 8), so we have a host, but no storage.
> And as an added bonus, the FS locations are a bit different (due to MD
> changes we made during the blowup).
> 
> So, essentially the host is alive, but we need to create a new brick
> using a known good brick.
> A couple of questions:
> Assuming I have a known good brick to copy but the FS location is
> different and given the fact I cannot simply remove/add brick, how do
> I change the brick path?
> Old location:
> office-wx-hv1-lab-gfs:/mnt/LogGFSData/brick
> New location:
> office-wx-hv1-lab-gfs.localdomain:/gluster/brick/data/brick
> 
> Thanks again,
> Gilboa
> 
> On Mon, Jul 18, 2022 at 1:32 AM Patrick Hibbs
>  wrote:
> > What you are missing is the fact that gluster requires more than
> > one set of bricks to recover from a dead host. I.e. In your set up,
> > you'd need 6 hosts. 4x replicas and 2x arbiters with at least one
> > set (2x replicas and 1x arbiter) operational bare minimum.
> > Automated commands to fix the volume do not exist otherwise. (It's
> > a Gluster limitation.) This can be fixed manually however.
> > 
> > Standard Disclaimer: Back up your data first! Fixing this issue
> > requires manual intervention. Reader assumes all responsibility for
> > any action resulting from the instructions below. Etc.
> > 
> > If it's just a dead brick, (i.e. the host is still functional), all
> > you really need to do is replace the underlying storage:
> > 
> > 1. Take the gluster volume offline.
> > 2. Remove the bad storage device, and attach the replacement.
> > 3. rsync / scp / etc. the data from a known good brick (be sure to
> > include hidden files / preserve file times and ownership / SELinux
> > labels / etc. ). 
> > 4. Restart the gluster volume.
> > 
> > Gluster *might* still need to heal everything after all of that,
> > but it should start the volume and get it running again.
> > 
> > If the host itself is dead, (and the underlying storage is still
> > functional), you can just move the underlying storage over to the
> > new host:
> > 
> > 1. Take the gluster volume offline.
> > 2. Attach the old storage.
> > 3. Fix up the ids on the volume file.
> > (https://serverfault.com/questions/631365/rename-a-glusterfs-peer)
> > 4. Restart the gluster volume.
> > 
> > If both the host and underlying storage are dead, you'll need to do
> > both tasks:
> > 
> > 1. Take the gluster volume offline.
> > 2. Attach the new storage.
> > 3. rsync / scp / etc. the data from a known good brick (be sure to
> > include hidden files / preserve file times and ownership / SELinux
> > labels / etc. ).
> > 4. Fix up the ids on the volume file.
> > 5. Restart the gluster volume.
> > 
> > Keep in mind one thing however: If the gluster host you are
> > replacing is used by oVirt to connect to the volume (I.e. It's the
> > host named in the volume config in the Admin portal). The new host
> > will need to retain the old hostname / IP, or you'll need to update
> > oVirt's config. Otherwise the VM hosts will wind up in Unassigned /
> > Non-functional status.
> > 
> > - Patrick Hibbs
> > 
> > On Sun, 2022-07-17 at 22:15 +0300, Gilboa Davara wrote:
> > > Hello all,
> > > 
> > > I'm attempting to replace a dead host in a replica 2 + arbiter
> > > gluster setup and replace it with a new host.
> > > I've already set up a new host (same hostname..localdomain) and
> > > got into the cluster.
> > > 
> > > $ gluster peer status
> > > Number of Peers: 2
> > > 
> > > Hostname: office-wx-hv3-lab-gfs
> > > Uuid: 4e13f796-b818-4e07-8523-d84eb0faa4f9
> > > State: Peer in Cluster (Connected)
> > > 
> > > Hostname: office-wx-hv1-lab-gfs.localdomain <-- This is a new
> > > host.
> > > Uuid: eee17c74-0d93-4f92-b81d-87f6b9c2204d
> > > State: Peer in Cluster (Connected)
> > > 
> > > $ gluster volume info GV2Data
> > >  Volume Name: GV2Data
> > > Type: 

[ovirt-users] Re: oVirt over gluster: Replacing a dead host

2022-07-17 Thread Gilboa Davara
Hello,

Many thanks for your email.
I should add that this is a test environment we set up in preparation for a
planned CentOS 7 / oVirt 4.3 upgrade to CentOS 8 Streams / oVirt 4.5
upgrade in one of our old(er) oVirt clusters.
In this case, we blew up the software RAID during the OS replacement
(CentOS 7 -> 8), so we have a host, but no storage.
And as an added bonus, the FS locations are a bit different (due to MD
changes we made during the blowup).

So, essentially the host is alive, but we need to create a new brick using
a known good brick.
A couple of questions:
Assuming I have a known good brick to copy but the FS location is different
and given the fact I cannot simply remove/add brick, how do I change the
brick path?
Old location:
office-wx-hv1-lab-gfs:/mnt/LogGFSData/brick
New location:
office-wx-hv1-lab-gfs.localdomain:/gluster/brick/data/brick
Thanks again,
Gilboa

On Mon, Jul 18, 2022 at 1:32 AM Patrick Hibbs 
wrote:

> What you are missing is the fact that gluster requires more than one set
> of bricks to recover from a dead host. I.e. In your set up, you'd need 6
> hosts. 4x replicas and 2x arbiters with at least one set (2x replicas and
> 1x arbiter) operational bare minimum.
> Automated commands to fix the volume do not exist otherwise. (It's a
> Gluster limitation.) This can be fixed manually however.
>
> Standard Disclaimer: Back up your data first! Fixing this issue requires
> manual intervention. Reader assumes all responsibility for any action
> resulting from the instructions below. Etc.
>
> If it's just a dead brick, (i.e. the host is still functional), all you
> really need to do is replace the underlying storage:
>
> 1. Take the gluster volume offline.
> 2. Remove the bad storage device, and attach the replacement.
> 3. rsync / scp / etc. the data from a known good brick (be sure to include
> hidden files / preserve file times and ownership / SELinux labels / etc. ).
> 4. Restart the gluster volume.
>
> Gluster *might* still need to heal everything after all of that, but it
> should start the volume and get it running again.
>
> If the host itself is dead, (and the underlying storage is still
> functional), you can just move the underlying storage over to the new host:
>
> 1. Take the gluster volume offline.
> 2. Attach the old storage.
> 3. Fix up the ids on the volume file. (
> https://serverfault.com/questions/631365/rename-a-glusterfs-peer)
> 4. Restart the gluster volume.
>
> If both the host and underlying storage are dead, you'll need to do both
> tasks:
>
> 1. Take the gluster volume offline.
> 2. Attach the new storage.
> 3. rsync / scp / etc. the data from a known good brick (be sure to
> include hidden files / preserve file times and ownership / SELinux labels /
> etc. ).
> 4. Fix up the ids on the volume file.
> 5. Restart the gluster volume.
>
> Keep in mind one thing however: If the gluster host you are replacing is
> used by oVirt to connect to the volume (I.e. It's the host named in the
> volume config in the Admin portal). The new host will need to retain the
> old hostname / IP, or you'll need to update oVirt's config. Otherwise the
> VM hosts will wind up in Unassigned / Non-functional status.
>
> - Patrick Hibbs
>
> On Sun, 2022-07-17 at 22:15 +0300, Gilboa Davara wrote:
>
> Hello all,
>
> I'm attempting to replace a dead host in a replica 2 + arbiter gluster
> setup and replace it with a new host.
> I've already set up a new host (same hostname..localdomain) and got into
> the cluster.
>
> $ gluster peer status
> Number of Peers: 2
>
> Hostname: office-wx-hv3-lab-gfs
> Uuid: 4e13f796-b818-4e07-8523-d84eb0faa4f9
> State: Peer in Cluster (Connected)
>
> Hostname: office-wx-hv1-lab-gfs.localdomain <-- This is a new host.
> Uuid: eee17c74-0d93-4f92-b81d-87f6b9c2204d
> State: Peer in Cluster (Connected)
>
> $ gluster volume info GV2Data
>  Volume Name: GV2Data
> Type: Replicate
> Volume ID: c1946fc2-ed94-4b9f-9da3-f0f1ee90f303
> Status: Stopped
> Snapshot Count: 0
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: office-wx-hv1-lab-gfs:/mnt/LogGFSData/brick  <-- This is the
> dead host.
> Brick2: office-wx-hv2-lab-gfs:/mnt/LogGFSData/brick
> Brick3: office-wx-hv3-lab-gfs:/mnt/LogGFSData/brick (arbiter)
> ...
>
> Looking at the docs, it seems that I need to remove the dead brick.
>
> $ gluster volume remove-brick GV2Data
> office-wx-hv1-lab-gfs:/mnt/LogGFSData/brick start
> Running remove-brick with cluster.force-migration enabled can result in
> data corruption. It is safer to disable this option so that files that
> receive writes during migration are not migrated.
> Files that are not migrated can then be manually copied after the
> remove-brick commit operation.
> Do you want to continue with your current cluster.force-migration
> settings? (y/n) y
> volume remove-brick start: failed: Removing bricks from replicate
> configuration is not allowed without reducing replica count explicitly
>
> So I guess I need to drop from replica 2 + arbiter to replica 1 + arbiter (?).

[ovirt-users] Re: oVirt over gluster: Replacing a dead host

2022-07-17 Thread Patrick Hibbs
What you are missing is the fact that gluster requires more than one
set of bricks to recover from a dead host. I.e. In your set up, you'd
need 6 hosts. 4x replicas and 2x arbiters with at least one set (2x
replicas and 1x arbiter) operational bare minimum.
Automated commands to fix the volume do not exist otherwise. (It's a
Gluster limitation.) This can be fixed manually however.

Standard Disclaimer: Back up your data first! Fixing this issue
requires manual intervention. Reader assumes all responsibility for any
action resulting from the instructions below. Etc.

If it's just a dead brick, (i.e. the host is still functional), all you
really need to do is replace the underlying storage:

1. Take the gluster volume offline.
2. Remove the bad storage device, and attach the replacement.
3. rsync / scp / etc. the data from a known good brick (be sure to
include hidden files / preserve file times and ownership / SELinux
labels / etc.; see the rsync sketch after this list).
4. Restart the gluster volume.
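
For step 3, something along these lines should carry over the gluster
metadata (a sketch only, assuming the good copy lives on
office-wx-hv2-lab-gfs and the brick path from this thread; -A/-X keep ACLs,
extended attributes and SELinux labels, and the hidden .glusterfs directory
must come along as well):

rsync -aAXH --numeric-ids \
  root@office-wx-hv2-lab-gfs:/mnt/LogGFSData/brick/ \
  /mnt/LogGFSData/brick/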

Gluster *might* still need to heal everything after all of that, but it
should start the volume and get it running again.

If the host itself is dead, (and the underlying storage is still
functional), you can just move the underlying storage over to the new
host:

1. Take the gluster volume offline.
2. Attach the old storage.
3. Fix up the ids on the volume file
(https://serverfault.com/questions/631365/rename-a-glusterfs-peer; a rough
sketch follows below).
4. Restart the gluster volume.
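
For step 3, the usual approach (a rough sketch only; paths are the glusterd
defaults, <DEAD-HOST-UUID> is a placeholder, and the link above covers the
full procedure, including the peer files under /var/lib/glusterd/peers/ on
the surviving nodes) is to make the replacement host reuse the dead host's
UUID:

# on a surviving node: note the dead host's UUID
gluster peer status

# on the replacement host, with glusterd stopped:
systemctl stop glusterd
sed -i 's/^UUID=.*/UUID=<DEAD-HOST-UUID>/' /var/lib/glusterd/glusterd.info
systemctl start glusterd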

If both the host and underlying storage are dead, you'll need to do
both tasks:

1. Take the gluster volume offline.
2. Attach the new storage.
3. rsync / scp / etc. the data from a known good brick (be sure to
include hidden files / preserve file times and ownership / SELinux
labels / etc. ).
4. Fix up the ids on the volume file.
5. Restart the gluster volume.

Keep in mind one thing however: If the gluster host you are replacing
is used by oVirt to connect to the volume (I.e. It's the host named in
the volume config in the Admin portal). The new host will need to
retain the old hostname / IP, or you'll need to update oVirt's config.
Otherwise the VM hosts will wind up in Unassigned / Non-functional
status.

- Patrick Hibbs

On Sun, 2022-07-17 at 22:15 +0300, Gilboa Davara wrote:
> Hello all,
> 
> I'm attempting to replace a dead host in a replica 2 + arbiter
> gluster setup and replace it with a new host.
> I've already set up a new host (same hostname..localdomain) and got
> into the cluster.
> 
> $ gluster peer status
> Number of Peers: 2
> 
> Hostname: office-wx-hv3-lab-gfs
> Uuid: 4e13f796-b818-4e07-8523-d84eb0faa4f9
> State: Peer in Cluster (Connected)
> 
> Hostname: office-wx-hv1-lab-gfs.localdomain <-- This is a new
> host.
> Uuid: eee17c74-0d93-4f92-b81d-87f6b9c2204d
> State: Peer in Cluster (Connected)
> 
> $ gluster volume info GV2Data
>  Volume Name: GV2Data
> Type: Replicate
> Volume ID: c1946fc2-ed94-4b9f-9da3-f0f1ee90f303
> Status: Stopped
> Snapshot Count: 0
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: office-wx-hv1-lab-gfs:/mnt/LogGFSData/brick  <-- This is
> the dead host.
> Brick2: office-wx-hv2-lab-gfs:/mnt/LogGFSData/brick
> Brick3: office-wx-hv3-lab-gfs:/mnt/LogGFSData/brick (arbiter)
> ...
> 
> Looking at the docs, it seems that I need to remove the dead brick.
> 
> $ gluster volume remove-brick GV2Data office-wx-hv1-lab-
> gfs:/mnt/LogGFSData/brick start
> Running remove-brick with cluster.force-migration enabled can result
> in data corruption. It is safer to disable this option so that files
> that receive writes during migration are not migrated.
> Files that are not migrated can then be manually copied after the
> remove-brick commit operation.
> Do you want to continue with your current cluster.force-migration
> settings? (y/n) y
> volume remove-brick start: failed: Removing bricks from replicate
> configuration is not allowed without reducing replica count
> explicitly
> 
> So I guess I need to drop from replica 2 + arbiter to replica 1 +
> arbiter (?).
> 
> $ gluster volume remove-brick GV2Data replica 1 office-wx-hv1-lab-
> gfs:/mnt/LogGFSData/brick start
> Running remove-brick with cluster.force-migration enabled can result
> in data corruption. It is safer to disable this option so that files
> that receive writes during migration are not migrated.
> Files that are not migrated can then be manually copied after the
> remove-brick commit operation.
> Do you want to continue with your current cluster.force-migration
> settings? (y/n) y
> volume remove-brick start: failed: need 2(xN) bricks for reducing
> replica count of the volume from 3 to 1
> 
> ... What am I missing?
> 
> - Gilboa