[ovirt-users] Re: Replace bad Host from a 9 Node hyperconverged setup 4.3.3

2019-06-10 Thread Strahil
Hi Adrian,

You have several options:
A) If you have space on another gluster volume (or volumes) or on NFS-based 
storage, you can migrate all VMs live. Once you do, the simplest way is to 
stop and remove the storage domain (from the UI) and the gluster volume that 
corresponds to the problematic brick. Once they are gone, you can remove the 
entry in oVirt for the old host and add the newly built one. Then you can 
recreate your volume and migrate the data back.

B) If you don't have space, you have to use a riskier approach (usually it 
shouldn't be risky, but I had a bad experience with gluster v3):
- New server has the same IP and hostname:
Use the command line and run 'gluster volume reset-brick VOLNAME 
HOSTNAME:BRICKPATH HOSTNAME:BRICKPATH commit'.
Replace VOLNAME with your volume name.
A more practical example would be:
'gluster volume reset-brick data ovirt3:/gluster_bricks/data/brick 
ovirt3:/gluster_bricks/data/brick commit'

If it refuses, then you have to clean up '/gluster_bricks/data' (which should 
be empty).
Also check whether the new peer has been probed via 'gluster peer status', and 
check that the firewall is allowing gluster communication (you can compare it 
to the firewalls on the other gluster hosts).


The automatic healing will kick in within 10 minutes (if the command succeeds) 
and will stress the other 2 replicas, so pick your time properly.
Note: I'm not recommending that you use the 'force' option in the previous 
command ... for now :)
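The same-hostname flow above can be sketched as a small script. VOLNAME, HOST and BRICK are placeholders, and the commands are only printed here so they can be reviewed before running them for real; recent gluster releases also expect a 'reset-brick ... start' step before the commit, so check 'gluster volume reset-brick help' on your version:

```shell
# Hedged sketch of the reset-brick flow for a rebuilt host that kept
# its hostname/IP. All names are examples -- substitute your own.
VOLNAME=data
HOST=ovirt3
BRICK=/gluster_bricks/data/brick

# Print the commands for review instead of executing them blindly.
echo "gluster volume reset-brick $VOLNAME $HOST:$BRICK start"
echo "gluster volume reset-brick $VOLNAME $HOST:$BRICK $HOST:$BRICK commit"
```

Copy-paste the printed lines only once you have confirmed the brick path matches 'gluster volume info'.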

- The new server has a different IP/hostname:
Instead of 'reset-brick' you can use  'replace-brick':
It should be like this:
gluster volume replace-brick data old-server:/path/to/brick 
new-server:/new/path/to/brick commit force

In both cases check the status via:
gluster volume info VOLNAME
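Besides 'volume info', self-heal progress can be watched too. A minimal sketch (the volume name is a placeholder, and the commands are printed for review; 'heal ... info' exists on all reasonably recent gluster versions):

```shell
# Sketch: status and self-heal checks after a brick reset/replace.
# VOLNAME is a placeholder for your volume name.
VOLNAME=data
echo "gluster volume info $VOLNAME"
echo "gluster volume heal $VOLNAME info"   # lists entries still pending heal
```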

If your cluster is in production, I really recommend the first option, as it 
is less risky and the chance of unplanned downtime is minimal.

The 'reset-brick' error in your previous e-mail shows that one of the servers 
is not connected. Check the peer status on all servers; if fewer peers are 
listed than there should be, check for network and/or firewall issues.
On the new node check if glusterd is enabled and running.

In order to debug - you should provide more info like 'gluster volume info' and 
the peer status from each node.

Best Regards,
Strahil Nikolov

On Jun 10, 2019 20:10, Adrian Quintero  wrote:
>
> Can you let me know how to fix the gluster and the missing brick?
> I tried removing it by going to "storage > Volumes > vmstore > bricks > 
> selected the brick
> However it is showing as an unknown status (which is expected because the 
> server was completely wiped), so if I try to "remove", "replace brick" or 
> "reset brick" it won't work.
> If I do remove brick: Incorrect bricks selected for removal in Distributed 
> Replicate volume. Either all the selected bricks should be from the same sub 
> volume or one brick each for every sub volume!
> If I try "replace brick" I can't because I don't have another server with extra 
> bricks/disks
> And if I try "reset brick": Error while executing action Start Gluster Volume 
> Reset Brick: Volume reset brick commit force failed: rc=-1 out=() err=['Host 
> myhost1_mydomain_com  not connected']
>
> Are you suggesting to try and fix the gluster using command line? 
>
> Note that I can't "peer detach" the server, so if I force the removal of the 
> bricks would I need to force a downgrade to replica 2 instead of 3? What would 
> happen to oVirt, as it only supports replica 3?
>
> thanks again.
>
> On Mon, Jun 10, 2019 at 12:52 PM Strahil  wrote:
>>
>> Hi Adrian,
>> Did you fix the issue with the gluster and the missing brick?
>> If yes, try to set the 'old' host in maintenance an
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7SZKSXEWVJC6UNU7GOEYXURXERGZCQ2Y/


[ovirt-users] Re: Replace bad Host from a 9 Node hyperconverged setup 4.3.3

2019-06-10 Thread Adrian Quintero
Thanks for pointing me in the right direction, I was able to add the server
to the cluster by adding /etc/vdsm/vdsm.id
I will now try to create the new bricks and try a brick replacement; this
part I think I will have to do through the command line because my
hyperconverged setup with replica 3 is as follows:
*/dev/sdb = /gluster_bricks/engine  100G*
*/dev/sdb = /gluster_bricks/vmstore1  2600G*

/dev/sdc = /gluster_bricks/data1  2700G
/dev/sdd = /gluster_bricks/data2  2700G

/dev/sde = caching disk.

The issue I see here is that I don't see an option through the web UI to
create 2 bricks on the same /dev/sdb (one of 100GB for the engine and one
of 2600GB for vmstore1).
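From the command line this is just two LVs in one VG. A hedged sketch: the VG/LV names follow the usual oVirt gluster-deployment convention but are assumptions, '-i size=512' is the inode size commonly recommended for gluster bricks, and the commands are printed for review rather than executed, since they would destroy data if pointed at the wrong disk:

```shell
# Sketch: two bricks carved from a single disk with plain LVM.
# DISK and the VG/LV names are assumptions; review before running.
DISK=/dev/sdb
cat <<EOF
pvcreate $DISK
vgcreate gluster_vg_sdb $DISK
lvcreate -L 100G  -n gluster_lv_engine   gluster_vg_sdb
lvcreate -L 2600G -n gluster_lv_vmstore1 gluster_vg_sdb
mkfs.xfs -i size=512 /dev/gluster_vg_sdb/gluster_lv_engine
mkfs.xfs -i size=512 /dev/gluster_vg_sdb/gluster_lv_vmstore1
EOF
```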

So if you have any ideas they are most welcome.

thanks again.

On Mon, Jun 10, 2019 at 4:35 PM Leo David  wrote:

> https://stijn.tintel.eu/blog/2013/03/02/ovirt-problem-duplicate-uuids
>
> On Mon, Jun 10, 2019, 18:13 Adrian Quintero 
> wrote:
>
>> Ok I have tried reinstalling the server from scratch with a different
>> name and IP address and when trying to add it to cluster I get the
>> following error:
>>
>> Event details
>> ID: 505
>> Time: Jun 10, 2019, 10:00:00 AM
>> Message: Host myshost2.virt.iad3p installation failed. Host
>> myhost2.mydomain.com reports unique id which already registered for
>> myhost1.mydomain.com
>>
>> I am at a loss here, I don't have a brand new server to do this and I
>> need to re-use what I have.
>>
>>
>> *From the oVirt engine log (/var/log/ovirt-engine/engine.log): *
>> 2019-06-10 10:57:59,950-04 ERROR
>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>> (EE-ManagedThreadFactory-engine-Thread-37744) [9b88055] EVENT_ID:
>> VDS_INSTALL_FAILED(505), Host myhost2.mydomain.com installation failed.
>> Host myhost2.mydomain.com reports unique id which already registered for
>> myhost1.mydomain.com
>>
>> So in the
>> /var/log/ovirt-engine/host-deploy/ovirt-host-deploy-20190610105759-myhost2.mydomain.com-9b88055.log
>> of the ovirt engine I see that the host deploy is running  the following
>> command to identify the system, if this is the case then it will never work
>> :( because it identifies each host using the system uuid.
>>
>> *dmidecode -s system-uuid*
>> b64d566e-055d-44d4-83a2-d3b83f25412e
>>
>>
>> Any suggestions?
>>
>> Thanks
>>
>> On Sat, Jun 8, 2019 at 11:23 AM Adrian Quintero 
>> wrote:
>>
>>> Leo,
>>> I did try putting it under maintenance and checking to ignore gluster
>>> and it did not work.
>>> Error while executing action:
>>> -Cannot remove host. Server having gluster volume.
>>>
>>> Note: the server was already reinstalled so gluster will never see the
>>> volumes or bricks for this server.
>>>
>>> I will rename the server to myhost2.mydomain.com and try to replace the
>>> bricks hopefully that might work, however it would be good to know that you
>>> can re-install from scratch an existing cluster server and put it back to
>>> the cluster.
>>>
>>> Still doing research hopefully we can find a way.
>>>
>>> thanks again
>>>
>>> Adrian
>>>
>>>
>>>
>>>
>>> On Fri, Jun 7, 2019 at 2:39 AM Leo David  wrote:
>>>
 You will need to remove the storage role from that server first (not
 being part of the gluster cluster).
 I cannot test this right now on production, but maybe putting the host
 (although it's already dead) under "maintenance" while checking to ignore
 the gluster warning will let you remove it.
 Maybe I am wrong about the procedure; can anybody offer advice to help
 with this situation?
 Cheers,

 Leo




 On Thu, Jun 6, 2019 at 9:45 PM Adrian Quintero <
 adrianquint...@gmail.com> wrote:

> I tried removing the bad host but running into the following issue ,
> any idea?
> Operation Canceled
> Error while executing action:
>
> host1.mydomain.com
>
>- Cannot remove Host. Server having Gluster volume.
>
>
>
>
> On Thu, Jun 6, 2019 at 11:18 AM Adrian Quintero <
> adrianquint...@gmail.com> wrote:
>
>> Leo, I forgot to mention that I have 1 SSD disk for caching purposes,
>> wondering how that setup should be achieved?
>>
>> thanks,
>>
>> Adrian
>>
>> On Wed, Jun 5, 2019 at 11:25 PM Adrian Quintero <
>> adrianquint...@gmail.com> wrote:
>>
>>> Hi Leo, yes, this helps a lot, this confirms the plan we had in mind.
>>>
>>> Will test tomorrow and post the results.
>>>
>>> Thanks again
>>>
>>> Adrian
>>>
>>> On Wed, Jun 5, 2019 at 11:18 PM Leo David  wrote:
>>>
 Hi Adrian,
 I think the steps are:
 - reinstall the host
 - join it to virtualisation cluster
 And if was member of gluster cluster as well:
 - go to host - storage devices
 - create the bricks on the devices - as they are on the other hosts
 - go to storage - volumes
 - replace each failed brick with the corresponding new 

[ovirt-users] Re: Failed to activate Storage Domain --- ovirt 4.2

2019-06-10 Thread Nir Soffer
On Mon, Jun 10, 2019 at 11:22 PM David Teigland  wrote:

> On Mon, Jun 10, 2019 at 10:59:43PM +0300, Nir Soffer wrote:
> > > [root@uk1-ion-ovm-18  pvscan
> > >   /dev/mapper/36000d31005697814: Checksum error at
> offset
> > > 4397954425856
> > >   Couldn't read volume group metadata from
> > > /dev/mapper/36000d31005697814.
> > >   Metadata location on /dev/mapper/36000d31005697814 at
> > > 4397954425856 has invalid summary for VG.
> > >   Failed to read metadata summary from
> > > /dev/mapper/36000d31005697814
> > >   Failed to scan VG from /dev/mapper/36000d31005697814
> >
> > This looks like corrupted vg metadata.
>
> Yes, the second metadata area, at the end of the device is corrupted; the
> first metadata area is probably ok.  That version of lvm is not able to
> continue by just using the one good copy.


Can we copy the first metadata area into the second metadata area?

Last week I pushed out major changes to LVM upstream to be able to handle
> and repair most of these cases.  So, one option is to build lvm from the
> upstream master branch, and check if that can read and repair this
> metadata.
>

This sounds pretty risky for production.

> David, we keep 2 metadata copies on the first PV. Can we use one of the
> > copies on the PV to restore the metadata to the least good state?
>
> pvcreate with --restorefile and --uuid, and with the right backup metadata
>

What would be the right backup metadata?


> could probably correct things, but experiment with some temporary PVs
> first.
>

Aminur, can you copy and compress the metadata areas, and share them
somewhere?

To copy the first metadata area, use:

dd if=/dev/mapper/360014058ccaab4857eb40f393aaf0351 of=md1 bs=128M count=1
skip=4096 iflag=skip_bytes

To copy the second metadata area, you need to know the size of the PV. On
my setup with 100G
PV, I have 800 extents (128M each), and this works:

dd if=/dev/mapper/360014058ccaab4857eb40f393aaf0351 of=md2 bs=128M count=1
skip=799

gzip md1 md2
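The skip value for the second metadata area can be derived instead of hard-coded. A small sketch of the arithmetic, following the 100G/128M-extent example above (on a real device the PV size would come from 'blockdev --getsize64' on the mapper path):

```shell
# Sketch: compute the dd skip (in 128M extents) for the last extent,
# where the second LVM metadata area lives in the example above.
PE_BYTES=$((128 * 1024 * 1024))
PV_BYTES=$((100 * 1024 * 1024 * 1024))   # real: blockdev --getsize64 $DEV
LAST_EXTENT=$(( PV_BYTES / PE_BYTES - 1 ))
echo "$LAST_EXTENT"   # prints 799, matching skip=799 in the dd above
```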

Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RYQA4SXJQJJN7DV3U6KB2XQ3AOPLAHT6/


[ovirt-users] Re: Replace bad Host from a 9 Node hyperconverged setup 4.3.3

2019-06-10 Thread Leo David
https://stijn.tintel.eu/blog/2013/03/02/ovirt-problem-duplicate-uuids

On Mon, Jun 10, 2019, 18:13 Adrian Quintero 
wrote:

> Ok I have tried reinstalling the server from scratch with a different name
> and IP address and when trying to add it to cluster I get the following
> error:
>
> Event details
> ID: 505
> Time: Jun 10, 2019, 10:00:00 AM
> Message: Host myshost2.virt.iad3p installation failed. Host
> myhost2.mydomain.com reports unique id which already registered for
> myhost1.mydomain.com
>
> I am at a loss here, I don't have a brand new server to do this and I
> need to re-use what I have.
>
>
> *From the oVirt engine log (/var/log/ovirt-engine/engine.log): *
> 2019-06-10 10:57:59,950-04 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (EE-ManagedThreadFactory-engine-Thread-37744) [9b88055] EVENT_ID:
> VDS_INSTALL_FAILED(505), Host myhost2.mydomain.com installation failed.
> Host myhost2.mydomain.com reports unique id which already registered for
> myhost1.mydomain.com
>
> So in the
> /var/log/ovirt-engine/host-deploy/ovirt-host-deploy-20190610105759-myhost2.mydomain.com-9b88055.log
> of the ovirt engine I see that the host deploy is running  the following
> command to identify the system, if this is the case then it will never work
> :( because it identifies each host using the system uuid.
>
> *dmidecode -s system-uuid*
> b64d566e-055d-44d4-83a2-d3b83f25412e
>
>
> Any suggestions?
>
> Thanks
>
> On Sat, Jun 8, 2019 at 11:23 AM Adrian Quintero 
> wrote:
>
>> Leo,
>> I did try putting it under maintenance and checking to ignore gluster and
>> it did not work.
>> Error while executing action:
>> -Cannot remove host. Server having gluster volume.
>>
>> Note: the server was already reinstalled so gluster will never see the
>> volumes or bricks for this server.
>>
>> I will rename the server to myhost2.mydomain.com and try to replace the
>> bricks hopefully that might work, however it would be good to know that you
>> can re-install from scratch an existing cluster server and put it back to
>> the cluster.
>>
>> Still doing research hopefully we can find a way.
>>
>> thanks again
>>
>> Adrian
>>
>>
>>
>>
>> On Fri, Jun 7, 2019 at 2:39 AM Leo David  wrote:
>>
>>> You will need to remove the storage role from that server first (not
>>> being part of the gluster cluster).
>>> I cannot test this right now on production, but maybe putting the host
>>> (although it's already dead) under "maintenance" while checking to ignore
>>> the gluster warning will let you remove it.
>>> Maybe I am wrong about the procedure; can anybody offer advice to help
>>> with this situation?
>>> Cheers,
>>>
>>> Leo
>>>
>>>
>>>
>>>
>>> On Thu, Jun 6, 2019 at 9:45 PM Adrian Quintero 
>>> wrote:
>>>
 I tried removing the bad host but running into the following issue ,
 any idea?
 Operation Canceled
 Error while executing action:

 host1.mydomain.com

- Cannot remove Host. Server having Gluster volume.




 On Thu, Jun 6, 2019 at 11:18 AM Adrian Quintero <
 adrianquint...@gmail.com> wrote:

> Leo, I forgot to mention that I have 1 SSD disk for caching purposes,
> wondering how that setup should be achieved?
>
> thanks,
>
> Adrian
>
> On Wed, Jun 5, 2019 at 11:25 PM Adrian Quintero <
> adrianquint...@gmail.com> wrote:
>
>> Hi Leo, yes, this helps a lot, this confirms the plan we had in mind.
>>
>> Will test tomorrow and post the results.
>>
>> Thanks again
>>
>> Adrian
>>
>> On Wed, Jun 5, 2019 at 11:18 PM Leo David  wrote:
>>
>>> Hi Adrian,
>>> I think the steps are:
>>> - reinstall the host
>>> - join it to virtualisation cluster
>>> And if was member of gluster cluster as well:
>>> - go to host - storage devices
>>> - create the bricks on the devices - as they are on the other hosts
>>> - go to storage - volumes
>>> - replace each failed brick with the corresponding new one.
>>> Hope it helps.
>>> Cheers,
>>> Leo
>>>
>>>
>>> On Wed, Jun 5, 2019, 23:09  wrote:
>>>
 Anybody have had to replace a failed host from a 3, 6, or 9 node
 hyperconverged setup with gluster storage?

 One of my hosts is completely dead, I need to do a fresh install
 using ovirt node iso, can anybody point me to the proper steps?

 thanks,
 ___
 Users mailing list -- users@ovirt.org
 To unsubscribe send an email to users-le...@ovirt.org
 Privacy Statement: https://www.ovirt.org/site/privacy-policy/
 oVirt Code of Conduct:
 https://www.ovirt.org/community/about/community-guidelines/
 List Archives:
 https://lists.ovirt.org/archives/list/users@ovirt.org/message/RFBYQKWC2KNZVYTYQF5T256UZBCJHK5F/

>>> --
>> Adrian Quintero
>>
>
>
> --
> 

[ovirt-users] Re: Replace bad Host from a 9 Node hyperconverged setup 4.3.3

2019-06-10 Thread Leo David
Hi, I think you can generate and use a new UUID, although I am not sure
about the procedure right now...

On Mon, Jun 10, 2019, 18:13 Adrian Quintero 
wrote:

> Ok I have tried reinstalling the server from scratch with a different name
> and IP address and when trying to add it to cluster I get the following
> error:
>
> Event details
> ID: 505
> Time: Jun 10, 2019, 10:00:00 AM
> Message: Host myshost2.virt.iad3p installation failed. Host
> myhost2.mydomain.com reports unique id which already registered for
> myhost1.mydomain.com
>
> I am at a loss here, I don't have a brand new server to do this and I
> need to re-use what I have.
>
>
> *From the oVirt engine log (/var/log/ovirt-engine/engine.log): *
> 2019-06-10 10:57:59,950-04 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (EE-ManagedThreadFactory-engine-Thread-37744) [9b88055] EVENT_ID:
> VDS_INSTALL_FAILED(505), Host myhost2.mydomain.com installation failed.
> Host myhost2.mydomain.com reports unique id which already registered for
> myhost1.mydomain.com
>
> So in the
> /var/log/ovirt-engine/host-deploy/ovirt-host-deploy-20190610105759-myhost2.mydomain.com-9b88055.log
> of the ovirt engine I see that the host deploy is running  the following
> command to identify the system, if this is the case then it will never work
> :( because it identifies each host using the system uuid.
>
> *dmidecode -s system-uuid*
> b64d566e-055d-44d4-83a2-d3b83f25412e
>
>
> Any suggestions?
>
> Thanks
>
> On Sat, Jun 8, 2019 at 11:23 AM Adrian Quintero 
> wrote:
>
>> Leo,
>> I did try putting it under maintenance and checking to ignore gluster and
>> it did not work.
>> Error while executing action:
>> -Cannot remove host. Server having gluster volume.
>>
>> Note: the server was already reinstalled so gluster will never see the
>> volumes or bricks for this server.
>>
>> I will rename the server to myhost2.mydomain.com and try to replace the
>> bricks hopefully that might work, however it would be good to know that you
>> can re-install from scratch an existing cluster server and put it back to
>> the cluster.
>>
>> Still doing research hopefully we can find a way.
>>
>> thanks again
>>
>> Adrian
>>
>>
>>
>>
>> On Fri, Jun 7, 2019 at 2:39 AM Leo David  wrote:
>>
>>> You will need to remove the storage role from that server first (not
>>> being part of the gluster cluster).
>>> I cannot test this right now on production, but maybe putting the host
>>> (although it's already dead) under "maintenance" while checking to ignore
>>> the gluster warning will let you remove it.
>>> Maybe I am wrong about the procedure; can anybody offer advice to help
>>> with this situation?
>>> Cheers,
>>>
>>> Leo
>>>
>>>
>>>
>>>
>>> On Thu, Jun 6, 2019 at 9:45 PM Adrian Quintero 
>>> wrote:
>>>
 I tried removing the bad host but running into the following issue ,
 any idea?
 Operation Canceled
 Error while executing action:

 host1.mydomain.com

- Cannot remove Host. Server having Gluster volume.




 On Thu, Jun 6, 2019 at 11:18 AM Adrian Quintero <
 adrianquint...@gmail.com> wrote:

> Leo, I forgot to mention that I have 1 SSD disk for caching purposes,
> wondering how that setup should be achieved?
>
> thanks,
>
> Adrian
>
> On Wed, Jun 5, 2019 at 11:25 PM Adrian Quintero <
> adrianquint...@gmail.com> wrote:
>
>> Hi Leo, yes, this helps a lot, this confirms the plan we had in mind.
>>
>> Will test tomorrow and post the results.
>>
>> Thanks again
>>
>> Adrian
>>
>> On Wed, Jun 5, 2019 at 11:18 PM Leo David  wrote:
>>
>>> Hi Adrian,
>>> I think the steps are:
>>> - reinstall the host
>>> - join it to virtualisation cluster
>>> And if was member of gluster cluster as well:
>>> - go to host - storage devices
>>> - create the bricks on the devices - as they are on the other hosts
>>> - go to storage - volumes
>>> - replace each failed brick with the corresponding new one.
>>> Hope it helps.
>>> Cheers,
>>> Leo
>>>
>>>
>>> On Wed, Jun 5, 2019, 23:09  wrote:
>>>
 Anybody have had to replace a failed host from a 3, 6, or 9 node
 hyperconverged setup with gluster storage?

 One of my hosts is completely dead, I need to do a fresh install
 using ovirt node iso, can anybody point me to the proper steps?

 thanks,
 ___
 Users mailing list -- users@ovirt.org
 To unsubscribe send an email to users-le...@ovirt.org
 Privacy Statement: https://www.ovirt.org/site/privacy-policy/
 oVirt Code of Conduct:
 https://www.ovirt.org/community/about/community-guidelines/
 List Archives:
 https://lists.ovirt.org/archives/list/users@ovirt.org/message/RFBYQKWC2KNZVYTYQF5T256UZBCJHK5F/

>>> --
>> Adrian Quintero

[ovirt-users] Re: Failed to activate Storage Domain --- ovirt 4.2

2019-06-10 Thread David Teigland
On Mon, Jun 10, 2019 at 10:59:43PM +0300, Nir Soffer wrote:
> > [root@uk1-ion-ovm-18  pvscan
> >   /dev/mapper/36000d31005697814: Checksum error at offset
> > 4397954425856
> >   Couldn't read volume group metadata from
> > /dev/mapper/36000d31005697814.
> >   Metadata location on /dev/mapper/36000d31005697814 at
> > 4397954425856 has invalid summary for VG.
> >   Failed to read metadata summary from
> > /dev/mapper/36000d31005697814
> >   Failed to scan VG from /dev/mapper/36000d31005697814
> 
> This looks like corrupted vg metadata.

Yes, the second metadata area, at the end of the device is corrupted; the
first metadata area is probably ok.  That version of lvm is not able to
continue by just using the one good copy.

Last week I pushed out major changes to LVM upstream to be able to handle
and repair most of these cases.  So, one option is to build lvm from the
upstream master branch, and check if that can read and repair this
metadata.

> David, we keep 2 metadata copies on the first PV. Can we use one of the
> copies on the PV to restore the metadata to the least good state?

pvcreate with --restorefile and --uuid, and with the right backup metadata
could probably correct things, but experiment with some temporary PVs
first.
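Rehearsing on a throwaway PV, as suggested, might look like this. Everything here is a placeholder sketch (backup file, UUID, loop device) and the commands are printed for review rather than executed, since losetup/pvcreate need root:

```shell
# Sketch: rehearse the pvcreate --restorefile / vgcfgrestore repair on a
# loop device before touching the real PV. All names are placeholders.
BACKUP=/etc/lvm/backup/VGNAME                  # most recent metadata backup
PV_UUID=xxxxxx-xxxx-xxxx-xxxx-xxxx-xxxx-xxxxxx # UUID from the backup file
cat <<EOF
truncate -s 1G /tmp/pvtest.img
losetup --find --show /tmp/pvtest.img          # e.g. prints /dev/loop0
pvcreate --uuid $PV_UUID --restorefile $BACKUP /dev/loop0
vgcfgrestore -f $BACKUP VGNAME
EOF
```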
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6T7EM2R7422CXGBO3CKALMIHBYSTBUYK/


[ovirt-users] Re: Failed to activate Storage Domain --- ovirt 4.2

2019-06-10 Thread Nir Soffer
On Fri, Jun 7, 2019 at 5:03 PM  wrote:

> Hi
> Has anyone experiencing the following issue with Storage Domain -
>
> Failed to activate Storage Domain cLUN-R940-DC2-dstore01 --
> VDSM command ActivateStorageDomainVDS failed: Storage domain does not
> exist: (u'1b0ef853-fd71-45ea-8165-cc6047a267bc',)
>
> Currently, the storge Domain is Inactive and strangely, the VMs are
> running as normal. We can't manage or extend the volume size of this
> storage domain. The pvscan shows as:
> [root@uk1-ion-ovm-18  pvscan
>   /dev/mapper/36000d31005697814: Checksum error at offset
> 4397954425856
>   Couldn't read volume group metadata from
> /dev/mapper/36000d31005697814.
>   Metadata location on /dev/mapper/36000d31005697814 at
> 4397954425856 has invalid summary for VG.
>   Failed to read metadata summary from
> /dev/mapper/36000d31005697814
>   Failed to scan VG from /dev/mapper/36000d31005697814
>

This looks like corrupted vg metadata.

> I have tired the following steps:
> 1. Restarted ovirt-engine.service
> 2. tried to restore the metadata using vgcfgrestore but it failed with the
> following error:
>
> [root@uk1-ion-ovm-19 backup]# vgcfgrestore
> 36000d31005697814
>   Volume group 36000d31005697814 has active volume: .
>   WARNING: Found 1 active volume(s) in volume group
> "36000d31005697814".
>   Restoring VG with active LVs, may cause mismatch with its metadata.
> Do you really want to proceed with restore of volume group
> "36000d31005697814", while 1 volume(s) are active? [y/n]: y
>

This is not safe, you cannot fix the VG while it is being used by oVirt.

You need to migrate the running VMs to other storage, or shut down the VMs.
Then deactivate this storage domain. Only then can you try to restore the VG.

  /dev/mapper/36000d31005697814: Checksum error at offset
> 4397954425856
>   Couldn't read volume group metadata from
> /dev/mapper/36000d31005697814.
>   Metadata location on /dev/mapper/36000d31005697814 at
> 4397954425856 has invalid summary for VG.
>   Failed to read metadata summary from
> /dev/mapper/36000d31005697814
>   Failed to scan VG from /dev/mapper/36000d31005697814
>   /etc/lvm/backup/36000d31005697814: stat failed: No such
> file or directory
>

Looks like you don't have a backup in this host. You may have the most
recent backup
on another host.


>   Couldn't read volume group metadata from file.
>   Failed to read VG 36000d31005697814 from
> /etc/lvm/backup/36000d31005697814
>   Restore failed.
>
> Please let me know if anyone knows any possible resolution.
>

David, we keep 2 metadata copies on the first PV. Can we use one of the
copies on the PV
to restore the metadata to the least good state?

David, how do you suggest to proceed?

Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KA4TVUE775MMCQVD3YF7GSUZGEEOCQCF/


[ovirt-users] Re: Windows Server 2019: Driver Signature Enforcement

2019-06-10 Thread Vinícius Ferrão
RHV drivers work.
oVirt drivers do not.

Checked this now.

I'm not sure if this is intended or not, but the oVirt drivers aren't signed 
for Windows.

> On 29 May 2019, at 21:41, mich...@wanderingmad.com wrote:
> 
> I'm running server 2012R2, 2016, and 2019 with no issue using the Redhat 
> signed drivers from RHEV.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/GBQEMLOW5DCSW7XSLNZKNX532BQHRFUB/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PNZWHYEWW23N4GQKHMJ2RUSQR363NSYH/


[ovirt-users] Re: run ovirt without firewalld

2019-06-10 Thread Strahil
Then you have no choice but to enable firewalld via a cron job or systemd 
service.
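A cron-based version of that workaround might look like the following. This is purely a sketch (the cron file path is an example), and it fights the site's automation rather than resolving the policy conflict:

```shell
# Sketch: a cron.d entry that re-enables firewalld if automation keeps
# disabling it. Printed for review; save as e.g. /etc/cron.d/keep-firewalld.
cat <<'EOF'
*/5 * * * * root systemctl is-active --quiet firewalld || systemctl enable --now firewalld
EOF
```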

I think Sahina and/or Sandro can hint you how to proceed .

Best Regards,
Strahil Nikolov
On Jun 10, 2019 20:02, Chris Boon  wrote:
>
> Yeah, I noticed the same; the HostedEngine doesn't want to migrate or start due to 
> disabled firewalld. 
> But some organizations have a company policy not to run firewalls on servers, 
> only edge firewalls. 
> So automation tools disable firewalld :( 
>
> best regards, 
> Chris
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/JQOROLNXKSQHT7ZSDFABDQ673CKRVZBF/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6YNPQLXLMMNV7ALHZQWUDXX5WEHQO4AJ/


[ovirt-users] Re: Replace bad Host from a 9 Node hyperconverged setup 4.3.3

2019-06-10 Thread Adrian Quintero
Can you let me know how to fix the gluster and the missing brick?
I tried removing it by going to "storage > Volumes > vmstore > bricks >
selected the brick
However it is showing as an unknown status (which is expected because the
server was completely wiped), so if I try to "remove", "replace brick" or
"reset brick" it won't work.
If I do remove brick: Incorrect bricks selected for removal in Distributed
Replicate volume. Either all the selected bricks should be from the same
sub volume or one brick each for every sub volume!
If I try "replace brick" I can't because I don't have another server with
extra bricks/disks
And if I try "reset brick": Error while executing action Start Gluster
Volume Reset Brick: Volume reset brick commit force failed: rc=-1 out=()
err=['Host myhost1_mydomain_com  not connected']

Are you suggesting to try and fix the gluster using command line?

Note that I can't "peer detach" the server, so if I force the removal of
the bricks would I need to force a downgrade to replica 2 instead of 3? What
would happen to oVirt, as it only supports replica 3?

thanks again.

On Mon, Jun 10, 2019 at 12:52 PM Strahil  wrote:

> Hi Adrian,
> Did you fix the issue with the gluster and the missing brick?
> If yes, try to set the 'old' host in maintenance and then forcefully
> remove it from oVirt.
> If it succeeds (and it should), then you can add the server back and then
> check what happens.
>
> Best Regards,
> Strahil Nikolov
> On Jun 10, 2019 18:12, Adrian Quintero  wrote:
>
> Ok I have tried reinstalling the server from scratch with a different name
> and IP address and when trying to add it to cluster I get the following
> error:
>
> Event details
> ID: 505
> Time: Jun 10, 2019, 10:00:00 AM
> Message: Host myshost2.virt.iad3p installation failed. Host
> myhost2.mydomain.com reports unique id which already registered for
> myhost1.mydomain.com
>
> I am at a loss here, I don't have a brand new server to do this and I
> need to re-use what I have.
>
>
> *From the oVirt engine log (/var/log/ovirt-engine/engine.log): *
> 2019-06-10 10:57:59,950-04 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (EE-ManagedThreadFactory-engine-Thread-37744) [9b88055] EVENT_ID:
> VDS_INSTALL_FAILED(505), Host myhost2.mydomain.com installation failed.
> Host myhost2.mydomain.com reports unique id which already registered for
> myhost1.mydomain.com
>
> So in the /var/log/ovirt-engine/host-deploy/
> ovirt-host-deploy-20190610105759-myhost2.mydomain.com-9b88055.log of the
> ovirt engine I see that the host deploy is running  the following command
> to identify the system, if this is the case then it will never work :(
> because it identifies each host using the system uuid.
>
> *dmidecode -s system-uuid*
> b64d566e-055d-44d4-83a2-d3b83f25412e
>
>
> Any suggestions?
>
> Thanks
>
> On Sat, Jun 8, 2019 at 11:23 AM Adrian Quintero 
> wrote:
>
> Leo,
> I did try putting it under maintenance and checking to ignore gluster and
> it did not work.
> Error while executing action:
> -Cannot remove host. Server having gluster volume.
>
> Note: the server was already reinstalled so gluster will never see the
> volumes or bricks for this server.
>
> I will rename the server to myhost2.mydomain.com and try to replace the
> bricks hopefully that might work, however it would be good to know that you
> can re-install from scratch an existing cluster server and put it back to
> the cluster.
>
> Still doing research hopefully we can find a way.
>
> thanks again
>
> Adrian
>
>
>
>
> On Fri, Jun 7, 2019 at 2:39 AM Leo David  wrote:
>
> You will need to remove the storage role from that server first (not
> being part of the gluster cluster).
> I cannot test this right now on production, but maybe putting the host
> (although it's already dead) under "maintenance" while checking to ignore the
> gluster warning will let you remove it.
> Maybe I am wrong about the procedure; can anybody offer advice to help
> with this situation?
> Cheers,
>
> Leo
>
>
>
>
> On Thu, Jun 6, 2019 at 9:45 PM Adrian Quintero 
> wrote:
>
> I tried removing the bad host but running into the following issue , any
> idea?
>
>

-- 
Adrian Quintero
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PF7KEND4VZAZ4QF34LDF6YEWJRQ2R52Y/


[ovirt-users] Re: Replace bad Host from a 9 Node hyperconverged setup 4.3.3

2019-06-10 Thread Dmitry Filonov
At this point I'd go to the engine VM and remove the host from the postgres DB
manually.
A bit of a hack, but...

ssh root@
su - postgres
cd /opt/rh/rh-postgresql10/
source enable
psql engine
select vds_id from vds_static where host_name='myhost1.mydomain.com';
select DeleteVds('');

Of course, keep in mind that editing the database directly is the last resort
and is not supported in any way.
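To make the manual edit a bit safer, the lookup and delete can be staged as one reviewable SQL script instead of being typed interactively. This is my own sketch, not an official tool: the `gen_delete_sql` helper name and the ROLLBACK-by-default are assumptions, and the generated SQL is meant to be inspected before it is ever piped into `psql engine`.

```shell
# Sketch only: print the SQL described above for a given hostname so it can be
# reviewed first. The ROLLBACK keeps a dry run harmless; switch it to COMMIT
# only after checking the vds_id the SELECT returns.
gen_delete_sql() {
  local host="$1"
  cat <<SQL
BEGIN;
SELECT vds_id FROM vds_static WHERE host_name = '${host}';
-- after verifying the vds_id above, add: SELECT DeleteVds('<vds_id>');
ROLLBACK;
SQL
}

gen_delete_sql myhost1.mydomain.com   # review the output, then pipe into: psql engine
```

The `<vds_id>` placeholder is deliberate: paste the UUID the first SELECT returns before committing anything.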

--
Dmitry Filonov
Linux Administrator
SBGrid Core | Harvard Medical School
250 Longwood Ave, SGM-114
Boston, MA 02115


On Mon, Jun 10, 2019 at 11:16 AM Adrian Quintero 
wrote:

> Ok, I have tried reinstalling the server from scratch with a different name
> and IP address, and when trying to add it to the cluster I get the following
> error:
>
> Event details
> ID: 505
> Time: Jun 10, 2019, 10:00:00 AM
> Message: Host myshost2.virt.iad3p installation failed. Host
> myhost2.mydomain.com reports unique id which already registered for
> myhost1.mydomain.com
>
> I am at a loss here; I don't have a brand new server to do this and
> need to re-use what I have.
>
>
> *From the oVirt engine log (/var/log/ovirt-engine/engine.log): *
> 2019-06-10 10:57:59,950-04 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (EE-ManagedThreadFactory-engine-Thread-37744) [9b88055] EVENT_ID:
> VDS_INSTALL_FAILED(505), Host myhost2.mydomain.com installation failed.
> Host myhost2.mydomain.com reports unique id which already registered for
> myhost1.mydomain.com
>
> So in the
> /var/log/ovirt-engine/host-deploy/ovirt-host-deploy-20190610105759-myhost2.mydomain.com-9b88055.log
> of the oVirt engine I see that host-deploy is running the following
> command to identify the system; if this is the case then it will never work
> :( because it identifies each host using the system UUID.
>
> *dmidecode -s system-uuid*
> b64d566e-055d-44d4-83a2-d3b83f25412e
>
>
> Any suggestions?
>
> Thanks
>
> On Sat, Jun 8, 2019 at 11:23 AM Adrian Quintero 
> wrote:
>
>> Leo,
>> I did try putting it under maintenance and checking to ignore gluster and
>> it did not work.
>> Error while executing action:
>> -Cannot remove host. Server having gluster volume.
>>
>> Note: the server was already reinstalled so gluster will never see the
>> volumes or bricks for this server.
>>
>> I will rename the server to myhost2.mydomain.com and try to replace the
>> bricks; hopefully that will work. However, it would be good to know that you
>> can re-install an existing cluster server from scratch and put it back into
>> the cluster.
>>
>> Still doing research; hopefully we can find a way.
>>
>> thanks again
>>
>> Adrian
>>
>>
>>
>>
>> On Fri, Jun 7, 2019 at 2:39 AM Leo David  wrote:
>>
>>> You will need to remove the storage role from that server first ( not
>>> being part of gluster cluster ).
>>> I cannot test this right now on production, but maybe putting the host
>>> (although it's already dead) under "maintenance" while checking the ignore
>>> gluster warning option will let you remove it.
>>> Maybe I am wrong about the procedure; can anybody offer advice to help
>>> with this situation?
>>> Cheers,
>>>
>>> Leo
>>>
>>>
>>>
>>>
>>> On Thu, Jun 6, 2019 at 9:45 PM Adrian Quintero 
>>> wrote:
>>>
 I tried removing the bad host but running into the following issue ,
 any idea?
 Operation Canceled
 Error while executing action:

 host1.mydomain.com

- Cannot remove Host. Server having Gluster volume.




 On Thu, Jun 6, 2019 at 11:18 AM Adrian Quintero <
 adrianquint...@gmail.com> wrote:

> Leo, I forgot to mention that I have 1 SSD disk for caching purposes,
> wondering how that setup should be achieved?
>
> thanks,
>
> Adrian
>
> On Wed, Jun 5, 2019 at 11:25 PM Adrian Quintero <
> adrianquint...@gmail.com> wrote:
>
>> Hi Leo, yes, this helps a lot, this confirms the plan we had in mind.
>>
>> Will test tomorrow and post the results.
>>
>> Thanks again
>>
>> Adrian
>>
>> On Wed, Jun 5, 2019 at 11:18 PM Leo David  wrote:
>>
>>> Hi Adrian,
>>> I think the steps are:
>>> - reinstall the host
>>> - join it to virtualisation cluster
>>> And if was member of gluster cluster as well:
>>> - go to host - storage devices
>>> - create the bricks on the devices - as they are on the other hosts
>>> - go to storage - volumes
>>> - replace each failed brick with the corresponding new one.
>>> Hope it helps.
>>> Cheers,
>>> Leo
>>>
>>>
>>> On Wed, Jun 5, 2019, 23:09  wrote:
>>>
 Has anybody had to replace a failed host in a 3, 6, or 9 node
 hyperconverged setup with gluster storage?

 One of my hosts is completely dead, I need to do a fresh install
 using ovirt node iso, can anybody point me to the proper steps?

 thanks,
 ___
 Users mailing list -- users@ovirt.org

[ovirt-users] Re: run ovirt without firewalld

2019-06-10 Thread Chris Boon
Yeah, I noticed the same: HostedEngine doesn't want to migrate or start due to
disabled firewalld.
But some organizations have a company policy not to run firewalls on servers, 
only edge firewalls.
So automation tools disable firewalld :(

best regards,
Chris
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JQOROLNXKSQHT7ZSDFABDQ673CKRVZBF/


[ovirt-users] Re: Replace bad Host from a 9 Node hyperconverged setup 4.3.3

2019-06-10 Thread Strahil
Hi Adrian,
Did you fix the issue with the gluster and the missing brick?
If yes, try to set the 'old' host in maintenance and then forcefully remove it 
from oVirt.
If it succeeds (and it should), then you can add the server back and then check 
what happens.
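Before the forceful removal, it may be worth confirming on a surviving node that gluster itself no longer references the old host. A small checklist sketch of my own follows; it only prints the commands to run and changes nothing, and `<VOLNAME>` is a placeholder for your volume name.

```shell
# Print a pre-removal checklist; nothing here talks to glusterd itself.
checks=(
  "gluster peer status"
  "gluster volume info"
  "gluster volume heal <VOLNAME> info"
)
for c in "${checks[@]}"; do
  printf 'run on a surviving node: %s\n' "$c"
done
```

If the dead host still appears as a peer or brick owner in that output, clean that up on the gluster side first, as described earlier in the thread.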

Best Regards,
Strahil Nikolov

On Jun 10, 2019 18:12, Adrian Quintero wrote:
>
> Ok, I have tried reinstalling the server from scratch with a different name
> and IP address, and when trying to add it to the cluster I get the following error:
>
> Event details
> ID: 505
> Time: Jun 10, 2019, 10:00:00 AM
> Message: Host myshost2.virt.iad3p installation failed. Host 
> myhost2.mydomain.com reports unique id which already registered for 
> myhost1.mydomain.com
>
> I am at a loss here; I don't have a brand new server to do this and need
> to re-use what I have.
>
>
> From the oVirt engine log (/var/log/ovirt-engine/engine.log): 
> 2019-06-10 10:57:59,950-04 ERROR 
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
> (EE-ManagedThreadFactory-engine-Thread-37744) [9b88055] EVENT_ID: 
> VDS_INSTALL_FAILED(505), Host myhost2.mydomain.com installation failed. Host 
> myhost2.mydomain.com reports unique id which already registered for 
> myhost1.mydomain.com
>
> So in the 
> /var/log/ovirt-engine/host-deploy/ovirt-host-deploy-20190610105759-myhost2.mydomain.com-9b88055.log
> of the oVirt engine I see that host-deploy is running the following
> command to identify the system; if this is the case then it will never work
> :( because it identifies each host using the system UUID.
>
> dmidecode -s system-uuid
> b64d566e-055d-44d4-83a2-d3b83f25412e
>
>
> Any suggestions?
>
> Thanks
>
> On Sat, Jun 8, 2019 at 11:23 AM Adrian Quintero  
> wrote:
>>
>> Leo,
>> I did try putting it under maintenance and checking to ignore gluster and it 
>> did not work.
>> Error while executing action: 
>> -Cannot remove host. Server having gluster volume.
>>
>> Note: the server was already reinstalled so gluster will never see the 
>> volumes or bricks for this server.
>>
>> I will rename the server to myhost2.mydomain.com and try to replace the
>> bricks; hopefully that will work. However, it would be good to know that you
>> can re-install an existing cluster server from scratch and put it back into
>> the cluster.
>>
>> Still doing research; hopefully we can find a way.
>>
>> thanks again
>>
>> Adrian
>>
>>
>>
>>
>> On Fri, Jun 7, 2019 at 2:39 AM Leo David  wrote:
>>>
>>> You will need to remove the storage role from that server first ( not being 
>>> part of gluster cluster ).
>>> I cannot test this right now on production, but maybe putting the host
>>> (although it's already dead) under "maintenance" while checking the ignore
>>> gluster warning option will let you remove it.
>>> Maybe I am wrong about the procedure; can anybody offer advice to help
>>> with this situation?
>>> Cheers,
>>>
>>> Leo
>>>
>>>
>>>
>>>
>>> On Thu, Jun 6, 2019 at 9:45 PM Adrian Quintero  
>>> wrote:

 I tried removing the bad host but running into the following issue , any 
 idea?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YNJ3SAOTYFG5YGWL6USSEFS6PL2DSZKU/


[ovirt-users] Re: Erro seabios ovirt

2019-06-10 Thread Sandro Bonazzola
Michal, Ryan, can you please have a look?

On Mon, Jun 3, 2019 at 12:01 LIONS TECNOLOGIA <lionstecnologi...@gmail.com> wrote:

> Please help! I am a beginner with oVirt and I installed a single server. I
> have the following problem: when I open the virtual machine console, it
> shows the black screen attached. I would like to know how to solve this
> problem.
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/ENOUI2EEQVSN4MHGYRV3M7CV3PLBVFLC/
>


-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XO7KNNNWROBSFTDSKJMJ33KFL55VOLCE/


[ovirt-users] Re: Replace bad Host from a 9 Node hyperconverged setup 4.3.3

2019-06-10 Thread Adrian Quintero
Ok, I have tried reinstalling the server from scratch with a different name
and IP address, and when trying to add it to the cluster I get the following
error:

Event details
ID: 505
Time: Jun 10, 2019, 10:00:00 AM
Message: Host myshost2.virt.iad3p installation failed. Host
myhost2.mydomain.com reports unique id which already registered for
myhost1.mydomain.com

I am at a loss here; I don't have a brand new server to do this and need
to re-use what I have.


*From the oVirt engine log (/var/log/ovirt-engine/engine.log): *
2019-06-10 10:57:59,950-04 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(EE-ManagedThreadFactory-engine-Thread-37744) [9b88055] EVENT_ID:
VDS_INSTALL_FAILED(505), Host myhost2.mydomain.com installation failed.
Host myhost2.mydomain.com reports unique id which already registered for
myhost1.mydomain.com

So in the
/var/log/ovirt-engine/host-deploy/ovirt-host-deploy-20190610105759-myhost2.mydomain.com-9b88055.log
of the oVirt engine I see that host-deploy is running the following
command to identify the system; if this is the case then it will never work
:( because it identifies each host using the system UUID.

*dmidecode -s system-uuid*
b64d566e-055d-44d4-83a2-d3b83f25412e


Any suggestions?

Thanks

On Sat, Jun 8, 2019 at 11:23 AM Adrian Quintero 
wrote:

> Leo,
> I did try putting it under maintenance and checking to ignore gluster and
> it did not work.
> Error while executing action:
> -Cannot remove host. Server having gluster volume.
>
> Note: the server was already reinstalled so gluster will never see the
> volumes or bricks for this server.
>
> I will rename the server to myhost2.mydomain.com and try to replace the
> bricks; hopefully that will work. However, it would be good to know that you
> can re-install an existing cluster server from scratch and put it back into
> the cluster.
>
> Still doing research; hopefully we can find a way.
>
> thanks again
>
> Adrian
>
>
>
>
> On Fri, Jun 7, 2019 at 2:39 AM Leo David  wrote:
>
>> You will need to remove the storage role from that server first ( not
>> being part of gluster cluster ).
>> I cannot test this right now on production, but maybe putting the host
>> (although it's already dead) under "maintenance" while checking the ignore
>> gluster warning option will let you remove it.
>> Maybe I am wrong about the procedure; can anybody offer advice to help
>> with this situation?
>> Cheers,
>>
>> Leo
>>
>>
>>
>>
>> On Thu, Jun 6, 2019 at 9:45 PM Adrian Quintero 
>> wrote:
>>
>>> I tried removing the bad host but running into the following issue , any
>>> idea?
>>> Operation Canceled
>>> Error while executing action:
>>>
>>> host1.mydomain.com
>>>
>>>- Cannot remove Host. Server having Gluster volume.
>>>
>>>
>>>
>>>
>>> On Thu, Jun 6, 2019 at 11:18 AM Adrian Quintero <
>>> adrianquint...@gmail.com> wrote:
>>>
 Leo, I forgot to mention that I have 1 SSD disk for caching purposes,
 wondering how that setup should be achieved?

 thanks,

 Adrian

 On Wed, Jun 5, 2019 at 11:25 PM Adrian Quintero <
 adrianquint...@gmail.com> wrote:

> Hi Leo, yes, this helps a lot, this confirms the plan we had in mind.
>
> Will test tomorrow and post the results.
>
> Thanks again
>
> Adrian
>
> On Wed, Jun 5, 2019 at 11:18 PM Leo David  wrote:
>
>> Hi Adrian,
>> I think the steps are:
>> - reinstall the host
>> - join it to virtualisation cluster
>> And if was member of gluster cluster as well:
>> - go to host - storage devices
>> - create the bricks on the devices - as they are on the other hosts
>> - go to storage - volumes
>> - replace each failed brick with the corresponding new one.
>> Hope it helps.
>> Cheers,
>> Leo
>>
>>
>> On Wed, Jun 5, 2019, 23:09  wrote:
>>
>>> Has anybody had to replace a failed host in a 3, 6, or 9 node
>>> hyperconverged setup with gluster storage?
>>>
>>> One of my hosts is completely dead, I need to do a fresh install
>>> using ovirt node iso, can anybody point me to the proper steps?
>>>
>>> thanks,
>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives:
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/RFBYQKWC2KNZVYTYQF5T256UZBCJHK5F/
>>>
>> --
> Adrian Quintero
>


 --
 Adrian Quintero

>>>
>>>
>>> --
>>> Adrian Quintero
>>>
>>
>>
>> --
>> Best regards, Leo David
>>
>
>
> --
> Adrian Quintero
>


-- 
Adrian Quintero
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: 

[ovirt-users] Can't bring upgraded to 4.3 host back to cluster

2019-06-10 Thread Artem Tambovskiy
Hello,

May I ask you for some advice?
I'm running a small oVirt cluster; a couple of months ago I decided to do
an upgrade from oVirt 4.2.8 to 4.3 and have had issues since then. I
can only guess what I did wrong - probably one of the problems is that I
didn't switch the cluster from iptables to firewalld. But this is just
my guess.

The problem is that I upgraded the engine and one host, but when I
did the upgrade of the second host I couldn't bring it to an active state.
It looks like VDSM can't detect the network and fails to start. I even tried
to reinstall the host from the UI (I saw the packages being installed) but
again, VDSM doesn't start up at the end and the reinstallation fails.

Looking at the host's process list I see the script *wait_for_ipv4s* hanging
forever.

vdsm   8603     1  6 16:26 ?  00:00:00 /usr/bin/python /usr/share/ovirt-hosted-engine-ha/ovirt-ha-agent
root   8630     1  0 16:26 ?  00:00:00 /bin/sh /usr/libexec/vdsm/vdsmd_init_common.sh --pre-start
root   8645  8630  6 16:26 ?  00:00:00 /usr/bin/python2 /usr/libexec/vdsm/wait_for_ipv4s
root   8688     1 30 16:27 ?  00:00:00 /usr/bin/python2 /usr/share/vdsm/supervdsmd --sockfile /var/run/vdsm/svdsm.sock
vdsm   8715     1  0 16:27 ?  00:00:00 /usr/bin/python /usr/share/ovirt-hosted-engine-ha/ovirt-ha-broker

All the hosts in the cluster are reachable from each other... so what could
be the issue?
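Since wait_for_ipv4s blocks until the networks vdsm knows about have IPv4 addresses, a quick sanity check is whether each expected interface actually holds one. This is my own diagnostic sketch, not part of vdsm, and `ovirtmgmt` is only assumed to be the management bridge name.

```shell
# Report which interfaces currently hold an IPv4 address.
has_ipv4() {
  ip -4 -o addr show dev "$1" 2>/dev/null | grep -q 'inet '
}
for dev in lo ovirtmgmt; do          # ovirtmgmt: assumed management bridge
  if has_ipv4 "$dev"; then echo "$dev: has IPv4"; else echo "$dev: NO IPv4"; fi
done
```

An interface reported as "NO IPv4" (often the management bridge after a partial network rollback) would explain vdsm waiting forever at startup.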

Thank you in advance!
-- 
Regards,
Artem
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TQX3LN2TEM4DECKKUMMRCWXTRM6BGIAB/


[ovirt-users] Re: Failed to activate Storage Domain --- ovirt 4.2

2019-06-10 Thread Eyal Shenitzky
Nir, can you please have a look?

On Mon, Jun 10, 2019 at 2:29 PM Aminur Rahman 
wrote:

> Hi Eyal
>
>
>
> We’re using:
>
>
>
> ovirt-engine-4.2.8.2-1.el7.noarch
>
> vdsm-client-4.20.46-1.el7.noarch
>
>
>
> Thanks
>
> *Aminur Rahman*
>
> aminur.rah...@iongroup.com
>
> *t*
>
> +44 20 7398 0243 <+44%2020%207398%200243>
>
> *m*
>
> +44 7825 780697 <+44%207825%20780697%3c>
>
> iongroup.com 
>
>
>
> *From:* Eyal Shenitzky 
> *Sent:* 10 June 2019 07:20
> *To:* Aminur Rahman ; Nir Soffer <
> nsof...@redhat.com>
> *Cc:* users 
> *Subject:* Re: [ovirt-users] Failed to activate Storage Domain --- ovirt
> 4.2
>
>
>
> Hi Aminur,
>
>
>
> Can you please send the engine and vdsm versions?
>
>
>
>
>
> On Fri, Jun 7, 2019 at 5:03 PM  wrote:
>
> Hi
> Has anyone experienced the following issue with a Storage Domain?
>
> Failed to activate Storage Domain cLUN-R940-DC2-dstore01 --
> VDSM command ActivateStorageDomainVDS failed: Storage domain does not
> exist: (u'1b0ef853-fd71-45ea-8165-cc6047a267bc',)
>
> Currently, the storage domain is Inactive and, strangely, the VMs are
> running as normal. We can't manage or extend the volume size of this
> storage domain. The pvscan output shows:
> [root@uk1-ion-ovm-18 ~]# pvscan
>   /dev/mapper/36000d31005697814: Checksum error at offset
> 4397954425856
>   Couldn't read volume group metadata from
> /dev/mapper/36000d31005697814.
>   Metadata location on /dev/mapper/36000d31005697814 at
> 4397954425856 has invalid summary for VG.
>   Failed to read metadata summary from
> /dev/mapper/36000d31005697814
>   Failed to scan VG from /dev/mapper/36000d31005697814
>
> I have tried the following steps:
> 1. Restarted ovirt-engine.service
> 2. tried to restore the metadata using vgcfgrestore but it failed with the
> following error:
>
> [root@uk1-ion-ovm-19 backup]# vgcfgrestore 36000d31005697814
>   Volume group 36000d31005697814 has active volume: .
>   WARNING: Found 1 active volume(s) in volume group
> "36000d31005697814".
>   Restoring VG with active LVs, may cause mismatch with its metadata.
> Do you really want to proceed with restore of volume group
> "36000d31005697814", while 1 volume(s) are active? [y/n]: y
>   /dev/mapper/36000d31005697814: Checksum error at offset
> 4397954425856
>   Couldn't read volume group metadata from
> /dev/mapper/36000d31005697814.
>   Metadata location on /dev/mapper/36000d31005697814 at
> 4397954425856 has invalid summary for VG.
>   Failed to read metadata summary from
> /dev/mapper/36000d31005697814
>   Failed to scan VG from /dev/mapper/36000d31005697814
>   /etc/lvm/backup/36000d31005697814: stat failed: No such
> file or directory
>   Couldn't read volume group metadata from file.
>   Failed to read VG 36000d31005697814 from
> /etc/lvm/backup/36000d31005697814
>   Restore failed.
>
> Please let me know if anyone knows any possible resolution.
>
> -AMinur
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> 
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> 
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/W2JP7ZO5XMV66ATT3N33IKCZHKM6XPWJ/
> 
>
>
>
>
> --
>
> Regards,
>
> Eyal Shenitzky
>


-- 
Regards,
Eyal Shenitzky
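On the vgcfgrestore failure quoted above: the "stat failed: No such file or directory" suggests there is no saved copy under /etc/lvm/backup at all. LVM also keeps automatic metadata archives under /etc/lvm/archive, so a read-only first step (a sketch of mine, not a fix; the VG name is taken from the thread) is to see what saved metadata actually exists before attempting any restore:

```shell
# Read-only: list any saved LVM metadata for the VG from the thread.
VG="36000d31005697814"
found=0
for d in /etc/lvm/backup /etc/lvm/archive; do
  if ls -1 "$d" 2>/dev/null | grep -F "$VG"; then found=1; fi
done
[ "$found" -eq 1 ] || echo "no saved metadata for $VG on this node"
```

If nothing turns up on this host, it is worth checking the other hosts that see the same LUN; only restore from a file whose contents you have reviewed.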
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 

[ovirt-users] Re: oVirt survey - May 2019

2019-06-10 Thread Sandro Bonazzola
On Mon, May 20, 2019 at 13:30 Sandro Bonazzola <sbona...@redhat.com> wrote:

> As we continue to develop oVirt 4.3 and future releases, the Development
> and Integration teams at Red Hat would value insights on how you are
> deploying the oVirt environment.
> Please help us to hit the mark by completing this short survey. Survey
> will close on June 7th.
> If you're managing multiple oVirt deployments with very different use
> cases or very different deployments you can consider answering this survey
> multiple times.
> Please note the answers to this survey will be publicly accessible. This
> survey is under oVirt Privacy Policy available at
> https://www.ovirt.org/site/privacy-policy.html
>
> The survey is available here: https://forms.gle/8uzuVNmDWtoKruhm8
>
>
The survey is now closed.
Results are available here:
https://docs.google.com/forms/d/e/1FAIpQLSdBoN2otjkxzJdsdmldpjg5tRl9aqg2vOFUJdQ5p1XnuUMuXw/viewanalytics
Thanks to everyone who participated in the survey!

-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/34YWZF37MMHIN3WNUK2HNQYGTRSZWB3Z/


[ovirt-users] Re: The CPU type of the cluster is unknown. Its possible to change the cluster cpu or set a different one per VM.

2019-06-10 Thread Juhani Rautiainen
Hi!

EPYC is not available in oVirt 4.2. If you can recover the system and
upgrade it to oVirt 4.3, then you can switch to EPYC (I have systems
with EPYCs running in 4.3). For recovery I would try to roll back the
changes you made to the database.
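Rolling back manual DB edits is cleanest from an engine-backup taken before the change. A hedged outline follows (printed rather than executed; the backup file path is hypothetical, and the exact flags should be double-checked with `engine-backup --help` on your engine):

```shell
# Print the rollback plan rather than executing it here.
plan='systemctl stop ovirt-engine
engine-backup --mode=restore --file=/root/engine-pre-edit.bck \
  --log=/root/engine-restore.log --restore-permissions
systemctl start ovirt-engine'
printf '%s\n' "$plan"
```

Without such a backup, the remaining option is reverting the cpu_name rows by hand, which carries the same "unsupported" caveat as the original edit.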

-Juhani


On Mon, Jun 10, 2019 at 11:46 AM  wrote:
>
> Hi,
>
> I am having trouble fixing the CPU type of my oVirt cluster.
>
> I have AMD EPYC 7551P 32-Core Processor hosts(x3) in a gluster cluster. The 
> HE and cluster by default had the: AMD Opteron 23xx (Gen 3 Class Opteron).
>
> I tried changing it with this
> method (https://lists.ovirt.org/archives/list/users@ovirt.org/thread/XYY2WAGWXBL5YA6KAQY3ZEBVFOKELAKE/);
> it didn't work, as I can't move the host to another cluster: it complains
> (Error while executing action: ***: Cannot edit Host. Server having
> Gluster volume.)
>
> Then I tried updating the cpu_name manually in DB by following: 
> https://www.mail-archive.com/users@ovirt.org/msg33177.html
> After the update, the web UI shows it changed to AMD EPYC, but if I click on
> Edit Cluster it shows the "Intel Conroe Family" and again doesn't allow me
> to modify it.
>
> I am now stuck with unusable oVirt setup, my existing VMs won't start and 
> throw: Error while executing action: **: The CPU type of the cluster is 
> unknown. Its possible to change the cluster cpu or set a different one per 
> VM.)
>
> Please help or suggest to fix this issue.
>
> HE:
> cat /proc/cpuinfo |grep "model name"
> model name  : AMD Opteron 23xx (Gen 3 Class Opteron)
> model name  : AMD Opteron 23xx (Gen 3 Class Opteron)
> model name  : AMD Opteron 23xx (Gen 3 Class Opteron)
> model name  : AMD Opteron 23xx (Gen 3 Class Opteron)
>
> rpm -qa |grep -i ovirt
> ovirt-ansible-image-template-1.1.9-1.el7.noarch
> ovirt-engine-dwh-setup-4.2.4.3-1.el7.noarch
> ovirt-engine-backend-4.2.8.2-1.el7.noarch
> ovirt-engine-extension-aaa-ldap-1.3.8-1.el7.noarch
> ovirt-engine-extension-aaa-jdbc-1.1.7-1.el7.centos.noarch
> ovirt-engine-wildfly-overlay-14.0.1-3.el7.noarch
> ovirt-ansible-hosted-engine-setup-1.0.2-1.el7.noarch
> ovirt-host-deploy-java-1.7.4-1.el7.noarch
> ovirt-engine-setup-plugin-websocket-proxy-4.2.8.2-1.el7.noarch
> ovirt-engine-setup-plugin-vmconsole-proxy-helper-4.2.8.2-1.el7.noarch
> ovirt-ansible-engine-setup-1.1.6-1.el7.noarch
> ovirt-ansible-cluster-upgrade-1.1.10-1.el7.noarch
> ovirt-ansible-roles-1.1.6-1.el7.noarch
> ovirt-release42-4.2.8-1.el7.noarch
> ovirt-engine-setup-plugin-ovirt-engine-common-4.2.8.2-1.el7.noarch
> ovirt-engine-tools-backup-4.2.8.2-1.el7.noarch
> ovirt-provider-ovn-1.2.18-1.el7.noarch
> ovirt-imageio-common-1.4.6-1.el7.x86_64
> ovirt-js-dependencies-1.2.0-3.1.el7.centos.noarch
> ovirt-cockpit-sso-0.0.4-1.el7.noarch
> ovirt-engine-restapi-4.2.8.2-1.el7.noarch
> ovirt-engine-vmconsole-proxy-helper-4.2.8.2-1.el7.noarch
> ovirt-engine-api-explorer-0.0.2-1.el7.centos.noarch
> ovirt-engine-lib-4.2.8.2-1.el7.noarch
> ovirt-engine-setup-base-4.2.8.2-1.el7.noarch
> ovirt-engine-metrics-1.1.8.1-1.el7.noarch
> ovirt-ansible-repositories-1.1.3-1.el7.noarch
> ovirt-ansible-disaster-recovery-1.1.4-1.el7.noarch
> ovirt-ansible-infra-1.1.10-1.el7.noarch
> ovirt-ansible-shutdown-env-1.0.0-1.el7.noarch
> ovirt-engine-dwh-4.2.4.3-1.el7.noarch
> ovirt-iso-uploader-4.2.0-1.el7.centos.noarch
> ovirt-engine-webadmin-portal-4.2.8.2-1.el7.noarch
> ovirt-engine-dbscripts-4.2.8.2-1.el7.noarch
> ovirt-engine-setup-plugin-ovirt-engine-4.2.8.2-1.el7.noarch
> ovirt-engine-extension-aaa-ldap-setup-1.3.8-1.el7.noarch
> ovirt-engine-extensions-api-impl-4.2.8.2-1.el7.noarch
> ovirt-host-deploy-1.7.4-1.el7.noarch
> ovirt-vmconsole-1.0.6-2.el7.noarch
> python-ovirt-engine-sdk4-4.2.9-2.el7.x86_64
> ovirt-engine-wildfly-14.0.1-3.el7.x86_64
> ovirt-guest-agent-common-1.0.16-1.el7.noarch
> ovirt-ansible-v2v-conversion-host-1.9.0-1.el7.noarch
> ovirt-ansible-manageiq-1.1.13-1.el7.noarch
> ovirt-setup-lib-1.1.5-1.el7.noarch
> ovirt-engine-websocket-proxy-4.2.8.2-1.el7.noarch
> ovirt-engine-dashboard-1.2.4-1.el7.noarch
> ovirt-engine-setup-4.2.8.2-1.el7.noarch
> ovirt-engine-sdk-python-3.6.9.1-1.el7.noarch
> ovirt-vmconsole-proxy-1.0.6-2.el7.noarch
> ovirt-web-ui-1.4.5-1.el7.noarch
> ovirt-ansible-vm-infra-1.1.12-1.el7.noarch
> ovirt-imageio-proxy-setup-1.4.6-1.el7.noarch
> ovirt-imageio-proxy-1.4.6-1.el7.noarch
> ovirt-engine-tools-4.2.8.2-1.el7.noarch
> ovirt-engine-4.2.8.2-1.el7.noarch
> ovirt-engine-cli-3.6.9.2-1.el7.centos.noarch
>
> 3xHosts:
> cat /proc/cpuinfo |grep "model name"
> model name  : AMD EPYC 7551P 32-Core Processor
> model name  : AMD EPYC 7551P 32-Core Processor
> model name  : AMD EPYC 7551P 32-Core Processor
>
> rpm -qa |grep -i ovirt
> ovirt-release42-4.2.8-1.el7.noarch
> cockpit-ovirt-dashboard-0.11.38-1.el7.noarch
> ovirt-vmconsole-1.0.6-2.el7.noarch
> python-ovirt-engine-sdk4-4.2.9-2.el7.x86_64
> cockpit-machines-ovirt-193-2.el7.noarch
> ovirt-imageio-daemon-1.4.6-1.el7.noarch
> 

[ovirt-users] The CPU type of the cluster is unknown. Its possible to change the cluster cpu or set a different one per VM.

2019-06-10 Thread sandeepkumar86k
Hi,

I am having trouble fixing the CPU type of my oVirt cluster.

I have AMD EPYC 7551P 32-Core Processor hosts(x3) in a gluster cluster. The HE 
and cluster by default had the: AMD Opteron 23xx (Gen 3 Class Opteron).

I tried changing it with this
method (https://lists.ovirt.org/archives/list/users@ovirt.org/thread/XYY2WAGWXBL5YA6KAQY3ZEBVFOKELAKE/);
it didn't work, as I can't move the host to another cluster: it complains (Error
while executing action: ***: Cannot edit Host. Server having Gluster
volume.)

Then I tried updating the cpu_name manually in DB by following: 
https://www.mail-archive.com/users@ovirt.org/msg33177.html
After the update, the web UI shows it changed to AMD EPYC, but if I click on
Edit Cluster it shows the "Intel Conroe Family" and again doesn't allow me to
modify it.

I am now stuck with unusable oVirt setup, my existing VMs won't start and 
throw: Error while executing action: **: The CPU type of the cluster is 
unknown. Its possible to change the cluster cpu or set a different one per VM.)

Please help or suggest to fix this issue.

HE: 
cat /proc/cpuinfo |grep "model name"
model name  : AMD Opteron 23xx (Gen 3 Class Opteron)
model name  : AMD Opteron 23xx (Gen 3 Class Opteron)
model name  : AMD Opteron 23xx (Gen 3 Class Opteron)
model name  : AMD Opteron 23xx (Gen 3 Class Opteron)

rpm -qa |grep -i ovirt
ovirt-ansible-image-template-1.1.9-1.el7.noarch
ovirt-engine-dwh-setup-4.2.4.3-1.el7.noarch
ovirt-engine-backend-4.2.8.2-1.el7.noarch
ovirt-engine-extension-aaa-ldap-1.3.8-1.el7.noarch
ovirt-engine-extension-aaa-jdbc-1.1.7-1.el7.centos.noarch
ovirt-engine-wildfly-overlay-14.0.1-3.el7.noarch
ovirt-ansible-hosted-engine-setup-1.0.2-1.el7.noarch
ovirt-host-deploy-java-1.7.4-1.el7.noarch
ovirt-engine-setup-plugin-websocket-proxy-4.2.8.2-1.el7.noarch
ovirt-engine-setup-plugin-vmconsole-proxy-helper-4.2.8.2-1.el7.noarch
ovirt-ansible-engine-setup-1.1.6-1.el7.noarch
ovirt-ansible-cluster-upgrade-1.1.10-1.el7.noarch
ovirt-ansible-roles-1.1.6-1.el7.noarch
ovirt-release42-4.2.8-1.el7.noarch
ovirt-engine-setup-plugin-ovirt-engine-common-4.2.8.2-1.el7.noarch
ovirt-engine-tools-backup-4.2.8.2-1.el7.noarch
ovirt-provider-ovn-1.2.18-1.el7.noarch
ovirt-imageio-common-1.4.6-1.el7.x86_64
ovirt-js-dependencies-1.2.0-3.1.el7.centos.noarch
ovirt-cockpit-sso-0.0.4-1.el7.noarch
ovirt-engine-restapi-4.2.8.2-1.el7.noarch
ovirt-engine-vmconsole-proxy-helper-4.2.8.2-1.el7.noarch
ovirt-engine-api-explorer-0.0.2-1.el7.centos.noarch
ovirt-engine-lib-4.2.8.2-1.el7.noarch
ovirt-engine-setup-base-4.2.8.2-1.el7.noarch
ovirt-engine-metrics-1.1.8.1-1.el7.noarch
ovirt-ansible-repositories-1.1.3-1.el7.noarch
ovirt-ansible-disaster-recovery-1.1.4-1.el7.noarch
ovirt-ansible-infra-1.1.10-1.el7.noarch
ovirt-ansible-shutdown-env-1.0.0-1.el7.noarch
ovirt-engine-dwh-4.2.4.3-1.el7.noarch
ovirt-iso-uploader-4.2.0-1.el7.centos.noarch
ovirt-engine-webadmin-portal-4.2.8.2-1.el7.noarch
ovirt-engine-dbscripts-4.2.8.2-1.el7.noarch
ovirt-engine-setup-plugin-ovirt-engine-4.2.8.2-1.el7.noarch
ovirt-engine-extension-aaa-ldap-setup-1.3.8-1.el7.noarch
ovirt-engine-extensions-api-impl-4.2.8.2-1.el7.noarch
ovirt-host-deploy-1.7.4-1.el7.noarch
ovirt-vmconsole-1.0.6-2.el7.noarch
python-ovirt-engine-sdk4-4.2.9-2.el7.x86_64
ovirt-engine-wildfly-14.0.1-3.el7.x86_64
ovirt-guest-agent-common-1.0.16-1.el7.noarch
ovirt-ansible-v2v-conversion-host-1.9.0-1.el7.noarch
ovirt-ansible-manageiq-1.1.13-1.el7.noarch
ovirt-setup-lib-1.1.5-1.el7.noarch
ovirt-engine-websocket-proxy-4.2.8.2-1.el7.noarch
ovirt-engine-dashboard-1.2.4-1.el7.noarch
ovirt-engine-setup-4.2.8.2-1.el7.noarch
ovirt-engine-sdk-python-3.6.9.1-1.el7.noarch
ovirt-vmconsole-proxy-1.0.6-2.el7.noarch
ovirt-web-ui-1.4.5-1.el7.noarch
ovirt-ansible-vm-infra-1.1.12-1.el7.noarch
ovirt-imageio-proxy-setup-1.4.6-1.el7.noarch
ovirt-imageio-proxy-1.4.6-1.el7.noarch
ovirt-engine-tools-4.2.8.2-1.el7.noarch
ovirt-engine-4.2.8.2-1.el7.noarch
ovirt-engine-cli-3.6.9.2-1.el7.centos.noarch

3xHosts: 
cat /proc/cpuinfo |grep "model name"
model name  : AMD EPYC 7551P 32-Core Processor
model name  : AMD EPYC 7551P 32-Core Processor
model name  : AMD EPYC 7551P 32-Core Processor

rpm -qa |grep -i ovirt
ovirt-release42-4.2.8-1.el7.noarch
cockpit-ovirt-dashboard-0.11.38-1.el7.noarch
ovirt-vmconsole-1.0.6-2.el7.noarch
python-ovirt-engine-sdk4-4.2.9-2.el7.x86_64
cockpit-machines-ovirt-193-2.el7.noarch
ovirt-imageio-daemon-1.4.6-1.el7.noarch
ovirt-hosted-engine-setup-2.2.33-1.el7.noarch
ovirt-engine-appliance-4.2-20190121.1.el7.noarch
ovirt-vmconsole-host-1.0.6-2.el7.noarch
ovirt-engine-sdk-python-3.6.9.1-1.el7.noarch
ovirt-setup-lib-1.1.5-1.el7.noarch
ovirt-host-deploy-1.7.4-1.el7.noarch
ovirt-host-dependencies-4.2.3-1.el7.x86_64
ovirt-hosted-engine-ha-2.2.19-1.el7.noarch
ovirt-provider-ovn-driver-1.2.18-1.el7.noarch
ovirt-imageio-common-1.4.6-1.el7.x86_64
ovirt-host-4.2.3-1.el7.x86_64



Thanks,
___
Users mailing list -- 

[ovirt-users] Re: [ovirt-devel] cannot add host (oVirt 4.3.3.7)

2019-06-10 Thread Sandro Bonazzola
On Mon, Jun 10, 2019 at 09:33 Hetz Ben Hamo wrote:

> Perhaps we're talking about two different things?
>
> I want to add a node to an existing oVirt installation.
>
> Previously, the only thing that was required to add a new node was a
> simple install of CentOS on the new machine.
>

No. Using CentOS, you need to install the ovirt-release43 RPM in order to
provide the needed repositories.
We have Ansible roles for this if needed.
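
For a plain CentOS 7 host, the preparation described above can be sketched roughly as follows (the release RPM URL follows the usual resources.ovirt.org layout; verify it against the download page for your exact version):

```shell
# Install the oVirt 4.3 release package, which configures the needed repos
yum install -y https://resources.ovirt.org/pub/yum-repo/ovirt-release43.rpm

# Refresh metadata and confirm the oVirt repos are now enabled
yum clean all
yum repolist enabled | grep -i ovirt
```

After this, the host can be added from the engine UI as usual.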


>
> Thanks,
> Hetz Ben Hamo
>
> On Mon, Jun 10, 2019, 00:31 Nir Soffer  wrote:
>
>> On Mon, Jun 10, 2019 at 12:23 AM Hetz Ben Hamo  wrote:
>>
>>> I'm using version 4.3.3.7-1.el7
>>>
>>> No, I didn't install the RPM. From past experience (and according to the
>>> docs here )
>>> you don't need to, as the setup takes care of it. A few months ago when I
>>> tested it, it worked well on a fresh CentOS install.
>>>
>>
>> This is not required if you use oVirt Node:
>> https://ovirt.org/download/#download-ovirt-node
>>
>> Otherwise you need to install that rpm:
>> https://ovirt.org/download/#or-setup-a-host
>>
>>> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZOLQ5FIZOBOWV4VATVVDLUF4B7WETPDZ/
>


-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZAIHWOXMSRTDCXHAHCBHZO2O5CLJ6AU3/


[ovirt-users] Re: [ovirt-devel] cannot add host (oVirt 4.3.3.7)

2019-06-10 Thread Hetz Ben Hamo
Perhaps we're talking about two different things?

I want to add a node to an existing oVirt installation.

Previously, the only thing that was required to add a new node was a
simple install of CentOS on the new machine.

Thanks,
Hetz Ben Hamo

On Mon, Jun 10, 2019, 00:31 Nir Soffer  wrote:

> On Mon, Jun 10, 2019 at 12:23 AM Hetz Ben Hamo  wrote:
>
>> I'm using version 4.3.3.7-1.el7
>>
>> No, I didn't install the RPM. From past experience (and according to the docs
>> here )
>> you don't need to, as the setup takes care of it. A few months ago when I
>> tested it, it worked well on a fresh CentOS install.
>>
>
> This is not required if you use oVirt Node:
> https://ovirt.org/download/#download-ovirt-node
>
> Otherwise you need to install that rpm:
> https://ovirt.org/download/#or-setup-a-host
>
>>


[ovirt-users] Re: [ovirt-devel] cannot add host (oVirt 4.3.3.7)

2019-06-10 Thread Hetz Ben Hamo
I'm using version 4.3.3.7-1.el7

No, I didn't install the RPM. From past experience (and according to the docs
here ) you
don't need to, as the setup takes care of it. A few months ago when I tested
it, it worked well on a fresh CentOS install.

Thanks


On Mon, Jun 10, 2019 at 12:19 AM Nir Soffer  wrote:

> On Sun, Jun 9, 2019 at 9:15 PM Hetz Ben Hamo  wrote:
>
> Moving to users list,  devel list is for discussion about oVirt
> development, not troubleshooting.
>
> I'm trying to add a new host to oVirt. The main host is Xeon E5 while the
>> new host is AMD Ryzen 5.
>>
>> The main host is running oVirt 4.3.3 and the new node is a minimal
>> install of CentOS 7.6 (1810) with all the latest updates.
>>
>
> Which engine version? on which OS?
>
> I'm enclosing the log files. It complains that it cannot get the oVirt
>> packages, perhaps a wrong channel(?). Looking at the log, it's trying to use
>> minidnf. I don't think CentOS 7 supports DNF.
>>
>
> Did you install ovirt-release43 rpm before adding the host?
> https://ovirt.org/download/
>
> Nir
>


[ovirt-users] Re: Failed to activate Storage Domain --- ovirt 4.2

2019-06-10 Thread Eyal Shenitzky
Hi Aminur,

Can you please send the engine and vdsm versions?
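
For reference, a minimal sketch of how those versions are usually collected on rpm-based installs (assuming the standard ovirt-engine and vdsm package names):

```shell
# On the engine machine:
rpm -q ovirt-engine

# On each hypervisor host:
rpm -q vdsm
```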


On Fri, Jun 7, 2019 at 5:03 PM  wrote:

> Hi
> Has anyone experiencing the following issue with Storage Domain -
>
> Failed to activate Storage Domain cLUN-R940-DC2-dstore01 --
> VDSM command ActivateStorageDomainVDS failed: Storage domain does not
> exist: (u'1b0ef853-fd71-45ea-8165-cc6047a267bc',)
>
> Currently, the storage domain is Inactive and, strangely, the VMs are
> running as normal. We can't manage or extend the volume size of this
> storage domain. The pvscan output shows:
> [root@uk1-ion-ovm-18 ~]# pvscan
>   /dev/mapper/36000d31005697814: Checksum error at offset
> 4397954425856
>   Couldn't read volume group metadata from
> /dev/mapper/36000d31005697814.
>   Metadata location on /dev/mapper/36000d31005697814 at
> 4397954425856 has invalid summary for VG.
>   Failed to read metadata summary from
> /dev/mapper/36000d31005697814
>   Failed to scan VG from /dev/mapper/36000d31005697814
>
> I have tried the following steps:
> 1. Restarted ovirt-engine.service
> 2. tried to restore the metadata using vgcfgrestore but it failed with the
> following error:
>
> [root@uk1-ion-ovm-19 backup]# vgcfgrestore
> 36000d31005697814
>   Volume group 36000d31005697814 has active volume: .
>   WARNING: Found 1 active volume(s) in volume group
> "36000d31005697814".
>   Restoring VG with active LVs, may cause mismatch with its metadata.
> Do you really want to proceed with restore of volume group
> "36000d31005697814", while 1 volume(s) are active? [y/n]: y
>   /dev/mapper/36000d31005697814: Checksum error at offset
> 4397954425856
>   Couldn't read volume group metadata from
> /dev/mapper/36000d31005697814.
>   Metadata location on /dev/mapper/36000d31005697814 at
> 4397954425856 has invalid summary for VG.
>   Failed to read metadata summary from
> /dev/mapper/36000d31005697814
>   Failed to scan VG from /dev/mapper/36000d31005697814
>   /etc/lvm/backup/36000d31005697814: stat failed: No such
> file or directory
>   Couldn't read volume group metadata from file.
>   Failed to read VG 36000d31005697814 from
> /etc/lvm/backup/36000d31005697814
>   Restore failed.
>
> Please let me know if anyone knows any possible resolution.
>
> -Aminur
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/W2JP7ZO5XMV66ATT3N33IKCZHKM6XPWJ/
>
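
Since the quoted vgcfgrestore failed for lack of a file under /etc/lvm/backup, one hedged avenue (standard LVM default paths assumed; back up the raw device before touching metadata) is to look for an automatic metadata archive to restore from:

```shell
# LVM keeps automatic metadata copies here by default
ls -l /etc/lvm/backup/ /etc/lvm/archive/

# List the restorable metadata versions LVM has recorded for this VG
vgcfgrestore --list 36000d31005697814

# If a good archive exists, restore from that explicit file
vgcfgrestore -f /etc/lvm/archive/36000d31005697814_00001-*.vg 36000d31005697814
```

The archive filename above is illustrative; pick the newest entry that predates the corruption.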


-- 
Regards,
Eyal Shenitzky