[ovirt-users] Re: Hosted Engine I/O scheduler

2019-03-20 Thread Strahil
> Unfortunately, the ideal scheduler really depends on storage configuration.
> Gluster, ZFS, iSCSI, FC, and NFS don't align on a single "best" configuration
> (to say nothing of direct LUNs on guests), then there's workload considerations.
>
> The scale team is aiming for a balanced "default" policy rather than one 
> which is best for a specific environment.
>
> That said, I'm optimistic that the results will let us give better 
> recommendations if your workload/storage benefits from a different scheduler

I completely disagree!
If you use anything other than noop/none (depending on whether multiqueue is
enabled), the scheduler inside the VM will reorder and delay your I/O.
The I/O is then received by the host, where the same thing happens again.
I can point to SuSE and Red Hat knowledge base articles where both vendors
strongly recommend noop/none as the scheduler for VMs.
It has nothing to do with the backend - that is under the control of the host's
I/O scheduler.
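
A quick way to see and change what a guest is actually using (a rough sketch,
assuming a Linux guest with the usual sysfs layout; the change takes effect
immediately but is not persistent across reboots):

#!/usr/bin/python2
# Print the active I/O scheduler of every block device in this guest and,
# if WANTED is offered by the kernel, switch to it. Run as root.
import glob

WANTED = "none"   # use "noop" on kernels without blk-mq

for path in glob.glob("/sys/block/*/queue/scheduler"):
    with open(path) as f:
        current = f.read().strip()        # e.g. "[mq-deadline] kyber none"
    print("%s: %s" % (path, current))
    available = current.replace("[", "").replace("]", "").split()
    if WANTED in available:
        with open(path, "w") as f:        # takes effect immediately
            f.write(WANTED)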

Can someone tell me under which section I should open a bug? Bugzilla is not
newbie-friendly, and I must admit that opening bugs for RHEL/CentOS is far
easier.

The best bug section might be the one related to the oVirt appliance, as this
is only valid for VMs and not for a bare-metal engine.

Best Regards,
Strahil Nikolov


[ovirt-users] Re: Hosted Engine I/O scheduler

2019-03-20 Thread Darrell Budic
> On Mar 20, 2019, at 12:42 PM, Ryan Barry  wrote:
> 
> On Wed, Mar 20, 2019, 1:16 PM Darrell Budic wrote:
> Inline:
> 
>> On Mar 20, 2019, at 4:25 AM, Roy Golan wrote:
>> 
>> On Mon, 18 Mar 2019 at 22:14, Darrell Budic wrote:
>> I agree, been checking some of my more disk intensive VMs this morning, 
>> switching them to noop definitely improved responsiveness. All the virtio 
>> ones I’ve found were using deadline (with RHEL/Centos guests), but some of 
>> the virt-scsi were using deadline and some were noop, so I’m not sure of a 
>> definitive answer on that level yet. 
>> 
>> For the hosts, it depends on what your backend is running. With a separate 
>> storage server on my main cluster, it doesn’t matter what the hosts set for 
>> me. You mentioned you run hyper converged, so I’d say it depends on what 
>> your disks are. If you’re using SSDs, go none/noop as they don’t benefit 
>> from the queuing. If they are HDDs, I’d test cfq or deadline and see which 
>> gave better latency and throughput to your vms. I’d guess you’ll find 
>> deadline to offer better performance, but cfq to share better amongst 
>> multiple VMs. Unless you use ZFS underneath, then go noop and let ZFS take 
>> care of it.
>> 
>>> On Mar 18, 2019, at 2:05 PM, Strahil wrote:
>>> 
>>> Hi Darrel,
>>> 
>>> Still, based on my experience we shouldn't queue our I/O in the VM, just to 
>>> do the same in the Host.
>>> 
>>> I'm still considering if I should keep deadline  in my hosts or to switch 
>>> to 'cfq'.
>>> After all, I'm using Hyper-converged oVirt and this needs testing.
>>> What I/O scheduler  are  you using on the  host?
>>> 
>> 
>> 
>> Our internal scale team is testing now 'throughput-performance' tuned 
>> profile and it gives
>> promising results, I suggest you try it as well.
>> We will go over the results of a comparison against the virtual-guest profile
>> , if there will be evidence for improvements we will set it as the default 
>> (if it won't degrade small,medium scale envs). 
> 
> I don’t think that will make a difference in this case. Both virtual-host and 
> virtual-guest include the throughput-performance profile, just with “better” 
> virtual memory tunings for guest and hosts. None of those 3 modify the disk 
> queue schedulers, by default, at least not on my Centos 7.6 systems.
> 
> Re my testing, I have virtual-host on my hosts and virtual-guest on my guests 
> already.
> 
> Unfortunately, the ideal scheduler really depends on storage configuration. 
> Gluster, ZFS, iSCSI, FC, and NFS don't align on a single "best" configuration 
> (to say nothing of direct LUNs on guests), then there's workload 
> considerations.
> 
> The scale team is aiming for a balanced "default" policy rather than one 
> which is best for a specific environment.
> 
> That said, I'm optimistic that the results will let us give better 
> recommendations if your workload/storage benefits from a different scheduler

Agreed, but that wasn't my point; I was commenting that those tuned profiles do
not set schedulers, so that won't make a difference, disk-scheduler-wise. Or
are they testing changes to the default policy config? Good point on direct
LUNs too.

And a question: why not virtual-guest, if you're talking about in-guest/engine
defaults? Or are they testing host profiles, in which case the question becomes
why not virtual-host? Or am I missing where they are testing the scheduler?

I’m already using virtual-host on my hosts, which appears to have been set by 
the ovirt node setup process, and virtual-guest in my RHEL based guests, which 
I’ve been setting with puppet for a long time now.




[ovirt-users] Re: Backup Ovirt VMs with Python not working

2019-03-20 Thread Jayme
I've recently tested out vProtect; it's expensive, but free for up to 10
VMs. It works great with 4.3 and supports incremental backups, which is very
handy. I'd recommend checking it out if you can. I'm really happy with it
thus far.
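
Regarding the lock error quoted below: that message usually means something
still has the image open (most often the VM is still running, or a leftover
qemu-img/backup process). A rough, untested way to check, using the path from
the error message and assuming qemu-img >= 2.10 for --force-share:

#!/usr/bin/python2
# Inspect the image without taking the qemu write lock, then list any
# process that still holds the file open.
import subprocess

IMG = "/home/DC01/2019-03-20-20/DC01_Disk1.qcow2"   # path from the error

subprocess.call(["qemu-img", "info", "--force-share", IMG])
subprocess.call(["lsof", IMG])   # prints nothing and exits non-zero if unused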

On Wed, Mar 20, 2019 at 4:32 PM  wrote:

> Hello,
>
> I have set up oVirt Backup as explained in this link:
> http://blog.infratic.com/create-ovirtrhevs-vm-backup/
>
> I also have changed the settings on the default.conf.
>
> When I start the backup, it works until it begins to create the qcow2 file.
>
> It writes:
> qemu-img: Could not open '/home/DC01/2019-03-20-20/DC01_Disk1.qcow2':
> Failed to get shared "write" lock
> Is another process using the image?
>
> Does anyone have any ideas and can help me?
>
> Thanks


[ovirt-users] Backup Ovirt VMs with Python not working

2019-03-20 Thread daniel94 . oeller
Hello,

I have set up oVirt Backup as explained in this link:
http://blog.infratic.com/create-ovirtrhevs-vm-backup/

I have also changed the settings in default.conf.

When I start the backup, it works until it begins to create the qcow2 file.

It then reports:
qemu-img: Could not open '/home/DC01/2019-03-20-20/DC01_Disk1.qcow2': Failed to
get shared "write" lock
Is another process using the image?

Does anyone have any ideas and can help me?

Thanks


[ovirt-users] Re: Hosted Engine I/O scheduler

2019-03-20 Thread Ryan Barry
On Wed, Mar 20, 2019, 1:16 PM Darrell Budic  wrote:

> Inline:
>
> On Mar 20, 2019, at 4:25 AM, Roy Golan  wrote:
>
> On Mon, 18 Mar 2019 at 22:14, Darrell Budic 
> wrote:
>
>> I agree, been checking some of my more disk intensive VMs this morning,
>> switching them to noop definitely improved responsiveness. All the virtio
>> ones I’ve found were using deadline (with RHEL/Centos guests), but some of
>> the virt-scsi were using deadline and some were noop, so I’m not sure of a
>> definitive answer on that level yet.
>>
>> For the hosts, it depends on what your backend is running. With a
>> separate storage server on my main cluster, it doesn’t matter what the
>> hosts set for me. You mentioned you run hyper converged, so I’d say it
>> depends on what your disks are. If you’re using SSDs, go none/noop as they
>> don’t benefit from the queuing. If they are HDDs, I’d test cfq or deadline
>> and see which gave better latency and throughput to your vms. I’d guess
>> you’ll find deadline to offer better performance, but cfq to share better
>> amongst multiple VMs. Unless you use ZFS underneath, then go noop and let
>> ZFS take care of it.
>>
>> On Mar 18, 2019, at 2:05 PM, Strahil  wrote:
>>
>> Hi Darrel,
>>
>> Still, based on my experience we shouldn't queue our I/O in the VM, just
>> to do the same in the Host.
>>
>> I'm still considering if I should keep deadline  in my hosts or to switch
>> to 'cfq'.
>> After all, I'm using Hyper-converged oVirt and this needs testing.
>> What I/O scheduler  are  you using on the  host?
>>
>>
> Our internal scale team is testing now 'throughput-performance' tuned
> profile and it gives
> promising results, I suggest you try it as well.
> We will go over the results of a comparison against the virtual-guest
> profile
> , if there will be evidence for improvements we will set it as the default
> (if it won't degrade small,medium scale envs).
>
>
> I don’t think that will make a difference in this case. Both virtual-host
> and virtual-guest include the throughput-performance profile, just with
> “better” virtual memory tunings for guest and hosts. None of those 3 modify
> the disk queue schedulers, by default, at least not on my Centos 7.6
> systems.
>
> Re my testing, I have virtual-host on my hosts and virtual-guest on my
> guests already.
>

Unfortunately, the ideal scheduler really depends on storage configuration.
Gluster, ZFS, iSCSI, FC, and NFS don't align on a single "best"
configuration (to say nothing of direct LUNs on guests), then there's
workload considerations.

The scale team is aiming for a balanced "default" policy rather than one
which is best for a specific environment.

That said, I'm optimistic that the results will let us give better
recommendations if your workload/storage benefits from a different scheduler.


>


[ovirt-users] Re: Hosted Engine I/O scheduler

2019-03-20 Thread Darrell Budic
Inline:

> On Mar 20, 2019, at 4:25 AM, Roy Golan  wrote:
> 
> On Mon, 18 Mar 2019 at 22:14, Darrell Budic wrote:
> I agree, been checking some of my more disk intensive VMs this morning, 
> switching them to noop definitely improved responsiveness. All the virtio 
> ones I’ve found were using deadline (with RHEL/Centos guests), but some of 
> the virt-scsi were using deadline and some were noop, so I’m not sure of a 
> definitive answer on that level yet. 
> 
> For the hosts, it depends on what your backend is running. With a separate 
> storage server on my main cluster, it doesn’t matter what the hosts set for 
> me. You mentioned you run hyper converged, so I’d say it depends on what your 
> disks are. If you’re using SSDs, go none/noop as they don’t benefit from the 
> queuing. If they are HDDs, I’d test cfq or deadline and see which gave better 
> latency and throughput to your vms. I’d guess you’ll find deadline to offer 
> better performance, but cfq to share better amongst multiple VMs. Unless you 
> use ZFS underneath, then go noop and let ZFS take care of it.
> 
>> On Mar 18, 2019, at 2:05 PM, Strahil wrote:
>> 
>> Hi Darrel,
>> 
>> Still, based on my experience we shouldn't queue our I/O in the VM, just to 
>> do the same in the Host.
>> 
>> I'm still considering if I should keep deadline  in my hosts or to switch to 
>> 'cfq'.
>> After all, I'm using Hyper-converged oVirt and this needs testing.
>> What I/O scheduler  are  you using on the  host?
>> 
> 
> 
> Our internal scale team is testing now 'throughput-performance' tuned profile 
> and it gives
> promising results, I suggest you try it as well.
> We will go over the results of a comparison against the virtual-guest profile
> , if there will be evidence for improvements we will set it as the default 
> (if it won't degrade small,medium scale envs). 

I don’t think that will make a difference in this case. Both virtual-host and 
virtual-guest include the throughput-performance profile, just with “better” 
virtual memory tunings for guest and hosts. None of those 3 modify the disk 
queue schedulers, by default, at least not on my Centos 7.6 systems.
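
A quick way to confirm that, sketched under the assumption that the stock
profiles live in /usr/lib/tuned on CentOS 7, is to scan their tuned.conf files
for an elevator setting:

#!/usr/bin/python2
# List any stock tuned profile that sets a disk "elevator" option.
# On a default CentOS 7.6 install this is expected to print nothing for
# virtual-host, virtual-guest and throughput-performance.
import glob

for conf in sorted(glob.glob("/usr/lib/tuned/*/tuned.conf")):
    with open(conf) as f:
        hits = [line.strip() for line in f if "elevator" in line]
    if hits:
        print("%s: %s" % (conf, hits))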

Re my testing, I have virtual-host on my hosts and virtual-guest on my guests 
already.




[ovirt-users] Re: Save Virtual Machine in NFS partition

2019-03-20 Thread Strahil
Hi ,

Without the engine this will be hard. The engine knows the configuration and
the disks of each VM, so the hypervisor can start it.
In your case I would focus on the engine, as that may (or may not) be the
easier approach.

Otherwise you need to run qemu-img against each image (the files named with
just a UUID, without the .meta or .lease suffix), find your image, and copy it
to a standalone KVM host.
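
A rough way to match the volume UUIDs to VMs (an untested sketch; it assumes
the NFS domain is still mounted under /rhev/data-center/mnt/... and that each
volume's .meta file carries a DESCRIPTION= line, as you already noticed):

#!/usr/bin/python2
# Walk an NFS storage domain, print each volume's path and description,
# and let qemu-img report its format and virtual size.
import glob, os, subprocess

DOMAIN = "/rhev/data-center/mnt/<server>_<export>/<domain-uuid>"  # adjust

for meta in glob.glob(os.path.join(DOMAIN, "images", "*", "*.meta")):
    desc = ""
    with open(meta) as f:
        for line in f:
            if line.startswith("DESCRIPTION="):
                desc = line.strip()
    volume = meta[:-len(".meta")]       # the data file sits next to its .meta
    print("%s\n  %s" % (volume, desc))
    subprocess.call(["qemu-img", "info", volume])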

Once you have the disk, you should try to recall what the host CPU type was
(it should be the same across the whole cluster) and create a new VM using a
COPY of your image.

If KVM manages to start it and the machine boots, you are done.

As you can see, fixing the engine might be easier, especially with the help of
the community.

Best Regards,
Strahil Nikolov

On Mar 20, 2019 18:01, siove...@gmail.com wrote:
>
> Hi, I had problems with my ovirt-engine and it does not come up. I have some
> virtual machines in an NFS domain that is located on one of the nodes, and I
> want to recover one of them. The problem is that what appears is a series of
> identifiers (IDs) and I can't tell which virtual machine is which, because I
> cannot find them by name. There are some .meta files where I can see the
> Description of some virtual machines. Please, I need help.


[ovirt-users] Save Virtual Machine in NFS partition

2019-03-20 Thread siovelrm
Hi, I had problems with my ovirt-engine and it does not come up. I have some
virtual machines in an NFS domain that is located on one of the nodes, and I
want to recover one of them. The problem is that what appears is a series of
identifiers (IDs) and I can't tell which virtual machine is which, because I
cannot find them by name. There are some .meta files where I can see the
Description of some virtual machines. Please, I need help.


[ovirt-users] SR-IOV and linux bridges

2019-03-20 Thread opiekelly
Hello, I believe I have set up my environment to support SR-IOV. I am not sure
why the system creates a Linux bridge when you set up and bind a VM network to
a virtual function on the NIC.

I have 2 VLANs, 102 and 112; however, there are now Linux bridges set up.

I would assume with SR-IOV the VF would be connected only to the VM.


[root@rhv1 ~]# brctl show
bridge name bridge id   STP enabled interfaces
;vdsmdummy; 8000.   no
ovirtmgmt   8000.64122536772a   no  enp2s0f0
vnet0
vlan-102    8000.0201   no  enp130s16f4.102
vlan-112    8000.163d99a43b4a   no  enp130s16f2.112

[root@rhv1 ~]# ip a | grep enp130s16f
28: enp130s16f2:  mtu 1500 qdisc mq state UP 
group default qlen 1000
29: enp130s16f4:  mtu 1500 qdisc mq state UP 
group default qlen 1000
30: enp130s16f6:  mtu 1500 qdisc mq state UP 
group default qlen 1000
35: enp130s16f2.112@enp130s16f2:  mtu 1500 
qdisc noqueue master vlan-112 state UP group default qlen 1000
45: enp130s16f4.102@enp130s16f4:  mtu 1500 
qdisc noqueue master vlan-102 state UP group default qlen 1000

[root@rhv1 ~]# virsh dumpxml nsg-v-west
Please enter your authentication name: vuser
Please enter your password:

[The domain XML that followed is mangled in the archive: the element tags were
stripped, leaving only scattered text values such as the VM name (nsg-v-west),
its UUID (27e8e6b0-62a9-4acd-8d88-0c777adb6dc1), the oVirt metadata, the disk
and lease paths under /rhev/data-center/mnt/192.168.0.15:_volume1_rhv/..., the
SandyBridge CPU model, and the qemu process labels (+107:+107). The interface
and hostdev definitions that would show how the VF is wired are not
recoverable.]
  



[ovirt-users] CLUSTER_CANNOT_DISABLE_GLUSTER_WHEN_CLUSTER_CONTAINS_VOLUMES

2019-03-20 Thread Arsène Gschwind
Hi,

I updated our oVirt cluster to 4.3.2, and when I try to update the cluster
version to 4.3 I get the following error:
CLUSTER_CANNOT_DISABLE_GLUSTER_WHEN_CLUSTER_CONTAINS_VOLUMES

As far as I remember, I could update the cluster version to 4.2 without having
to stop everything.
I've searched around for this error but couldn't find anything.

The engine log says:

2019-03-20 14:33:28,125+01 INFO  
[org.ovirt.engine.core.bll.UpdateClusterCommand] (default task-59) 
[c93c7f4f-b9a3-4e10-82bf-f8bbda46cc87] Lock Acquired to object 
'EngineLock:{exclusiveLocks='[25682477-0bd2-4303-a5fd-0ae9adfd276c=TEMPLATE]', 
sharedLocks='[119fad69-b4a2-441f-9056-354cd1b8a7aa=VM, 
1f696eca-c0a0-48ff-8aa9-b977345b5618=VM, 
95ee485d-7992-45b8-b1be-256223b5a89f=VM, 
432ed647-c150-425e-aac7-3cb5410f4bc8=VM, 
7c2646b1-8a4c-4618-b3c9-dfe563f29e00=VM, 
629e40c0-e83b-47e0-82bc-df42ec310ca4=VM, 
2e0741bd-33e2-4a8e-9624-a9b7bb70b664=VM, 
136d0ca0-478c-48d9-9af4-530f98ac30fd=VM, 
dafaa8d2-c60c-44c8-b9a3-2b5f80f5aee3=VM, 
2112902c-ed42-4fbb-a187-167a5f5a446c=VM, 
5330b948-f0cd-4b2f-b722-28918f59c5ca=VM, 
3f06be8c-8af9-45e2-91bc-9946315192bf=VM, 
8cf338bf-8c94-4db4-b271-a85dbc5d6996=VM, 
c0be6ae6-3d25-4a99-b93d-81b4ecd7c9d7=VM, 
8f4acfc6-3bb1-4863-bf97-d7924641b394=VM]'}'

2019-03-20 14:33:28,214+01 WARN  
[org.ovirt.engine.core.bll.UpdateClusterCommand] (default task-59) 
[c93c7f4f-b9a3-4e10-82bf-f8bbda46cc87] Validation of action 'UpdateCluster' 
failed for user @yyy. Reasons: 
VAR__TYPE__CLUSTER,VAR__ACTION__UPDATE,CLUSTER_CANNOT_DISABLE_GLUSTER_WHEN_CLUSTER_CONTAINS_VOLUMES

2019-03-20 14:33:28,215+01 INFO  
[org.ovirt.engine.core.bll.UpdateClusterCommand] (default task-59) 
[c93c7f4f-b9a3-4e10-82bf-f8bbda46cc87] Lock freed to object 
'EngineLock:{exclusiveLocks='[25682477-0bd2-4303-a5fd-0ae9adfd276c=TEMPLATE]', 
sharedLocks='[119fad69-b4a2-441f-9056-354cd1b8a7aa=VM, 
1f696eca-c0a0-48ff-8aa9-b977345b5618=VM, 
95ee485d-7992-45b8-b1be-256223b5a89f=VM, 
432ed647-c150-425e-aac7-3cb5410f4bc8=VM, 
7c2646b1-8a4c-4618-b3c9-dfe563f29e00=VM, 
629e40c0-e83b-47e0-82bc-df42ec310ca4=VM, 
2e0741bd-33e2-4a8e-9624-a9b7bb70b664=VM, 
136d0ca0-478c-48d9-9af4-530f98ac30fd=VM, 
dafaa8d2-c60c-44c8-b9a3-2b5f80f5aee3=VM, 
2112902c-ed42-4fbb-a187-167a5f5a446c=VM, 
5330b948-f0cd-4b2f-b722-28918f59c5ca=VM, 
3f06be8c-8af9-45e2-91bc-9946315192bf=VM, 
8cf338bf-8c94-4db4-b271-a85dbc5d6996=VM, 
c0be6ae6-3d25-4a99-b93d-81b4ecd7c9d7=VM, 
8f4acfc6-3bb1-4863-bf97-d7924641b394=VM]'}'


Any idea what the reason for this error might be?

Thanks


[ovirt-users] 4.2.7 Importing qcow2 linux as disk

2019-03-20 Thread Arnaud DEBEC
Hi,
I want to import a CentOS 7 qcow2 VM on oVirt 4.2.7.

0. The CentOS 7 VM was previously running under a libvirt/KVM hypervisor.
1. On oVirt 4.2.7, I imported the VM's qcow2 via Storage > Disks > Upload >
Start, etc.
2. I created a VM, attached the disk, and marked the disk as bootable.
3. When the machine boots, I get "Warning: /dev/disk/by-uuid/XX does
not exist"

Before going into deeper detail in the investigation, I would like to make sure
the previous steps make sense for importing a VM in qcow2 format. Most of the
documentation I found for importing from libvirt/KVM uses Compute > VM >
Import > KVM (via Libvirt), which is not what I am looking for.

Thank you.


[ovirt-users] Re: How to fix ovn apparent inconsistency?

2019-03-20 Thread Gianluca Cecchi
On Wed, Mar 20, 2019 at 1:26 PM Marcin Mirecki  wrote:

> Looking at the original state we had:
> switch 32367d8a-460f-4447-b35a-abe9ea5187e0 (ovn192)
> switch 6110649a-db2b-4de7-8fbc-601095cfe510 (ovn192)
> switch 64c4c17f-cd67-4e29-939e-2b952495159f (ovn172)
> switch 04501f6b-3977-4ba1-9ead-7096768d796d (ovn172)
>
> In the output of GET, 6110649a-db2b-4de7-8fbc-601095cfe510 is no longer
> there, so it has been deleted.
> Did you maybe try to submit the request twice?
>

With that switch, as it had no ports attached, I tried the command line
option with:
 ovn-nbctl destroy logical_switch 6110649a-db2b-4de7-8fbc-601095cfe510




>
> About  8fd63a10-a2ba-4c56-a8e0-0bc8d70be8b5. There was never a network
> with that id, so this is correct.
>

Yes, but that was the ID provided by the web admin GUI for the network:
- ovn192
Id: 8fd63a10-a2ba-4c56-a8e0-0bc8d70be8b5
External ID: 32367d8a-460f-4447-b35a-abe9ea5187e0

or did I misunderstand?



>
> Also note that to delete a network you will first have to delete its ports.
>

OK.
Is there a command to clean everything up so that I can restart with a fresh
OVN setup on this infra?
I think I have messed up too many things on it.

Thanks,
Gianluca


[ovirt-users] Fwd: Re: Change cluster cpu type with hosted engine

2019-03-20 Thread Fabrice SOLER

Hi,

As you can see in the screenshot below, my servers have Nehalem-type CPUs, so
the hosted engine detected the correct CPU type.



You can read at this link
https://www.intel.com/content/www/us/en/support/articles/06105/processors.html
that Windows 10 x64 does not support the Nehalem CPU type. Bill is cutting off
functionality for old CPUs; I could comment on that, but I will not.


But I need to use Windows 10 x64 with these servers for my students!
I am at a dead end.

Sincerely,
Fabrice SOLER


 Forwarded Message 
Subject: Re: [ovirt-users] Change cluster cpu type with hosted engine
Date:   Tue, 12 Mar 2019 15:49:14 -0400
From:   Fabrice SOLER 
To:     Simone Tiraboschi 



On 12/03/2019 at 11:21, Simone Tiraboschi wrote:



On Tue, Mar 12, 2019 at 1:03 PM Fabrice SOLER wrote:


Hello,

I need to create a Windows 10 virtual machine, but I have an error:

I have a fresh oVirt installation (version 4.2.8) with a hosted
engine. During the hosted-engine installation there was no question
about the cluster CPU type; it would be great if a future version
could ask for it.


It's by design: we simply let the engine choose the best CPU according 
to the characteristics of the first host.
Can you please detail which CPU you are using on the first host and
how the engine detected it when adding the first host to the first cluster?

Hello,
The first host is a DELL R610 (an old server). At installation time, the
cluster was created with a CPU architecture of x86_64 and a CPU type of Intel
Nehalem Family.


To move a host to another cluster, the host needs to be in
maintenance mode, and the hosted engine will be powered off.

I have created another cluster with a SandyBridge Family CPU
type, but to move the hosted engine to this new cluster, the hosted
engine would have to be powered off.

Is there someone who can help?

Sincerely,






[ovirt-users] Re: How to fix ovn apparent inconsistency?

2019-03-20 Thread Marcin Mirecki
Looking at the original state we had:
switch 32367d8a-460f-4447-b35a-abe9ea5187e0 (ovn192)
switch 6110649a-db2b-4de7-8fbc-601095cfe510 (ovn192)
switch 64c4c17f-cd67-4e29-939e-2b952495159f (ovn172)
switch 04501f6b-3977-4ba1-9ead-7096768d796d (ovn172)

In the output of GET, 6110649a-db2b-4de7-8fbc-601095cfe510 is no longer
there, so it has been deleted.
Did you maybe try to submit the request twice?

About 8fd63a10-a2ba-4c56-a8e0-0bc8d70be8b5: there was never a network with
that ID, so this is correct.

Also note that to delete a network you will first have to delete its ports.
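
For reference, a rough way to script this against the provider API (an untested
sketch: it assumes ovirt-provider-ovn also exposes /v2/ports in the usual
Neutron style, and it reuses a token obtained the same way as in your curl
examples):

#!/usr/bin/python2
# List every OVN network known to the provider, then delete the ones that
# have no ports attached - the safe candidates for cleanup.
import requests

BASE = "https://localhost:9696/v2"
TOKEN = "...X-Auth-Token obtained from the SSO endpoint..."   # placeholder
HEADERS = {"X-Auth-Token": TOKEN}

nets = requests.get(BASE + "/networks", headers=HEADERS, verify=False).json()["networks"]
ports = requests.get(BASE + "/ports", headers=HEADERS, verify=False).json()["ports"]

used = set(p["network_id"] for p in ports)
for net in nets:
    if net["id"] not in used:
        print("deleting empty network %s (%s)" % (net["name"], net["id"]))
        requests.delete(BASE + "/networks/" + net["id"], headers=HEADERS, verify=False)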



On Tue, Mar 19, 2019 at 4:58 PM Gianluca Cecchi 
wrote:

>
>
> On Tue, Mar 19, 2019 at 4:44 PM Gianluca Cecchi 
> wrote:
>
>> On Tue, Mar 19, 2019 at 4:31 PM Miguel Duarte de Mora Barroso <
>> mdbarr...@redhat.com> wrote:
>>
>> [snip]
>>
>>
>>> >> >> >> @Gianluca Cecchi , I notice that one of your duplicate networks
>>> -
>>> >> >> >> 'ovn192'  - has no ports attached. That makes it the perfect
>>> candidate
>>> >> >> >> to be deleted, and see if it becomes 'listable' on engine. That
>>> would
>>> >> >> >> help rule out the 'duplicate name' theory.
>>> >> >> >
>>> >> >> >
>>> >> >> >  I can try. Can you give me the command to be run?
>>> >> >> > It is a test oVirt so It would be not a big problem in case of
>>> failures in this respect.
>>> >> >>
>>> >> >> You can delete it via the UI; just be sure to delete the one
>>> without
>>> >> >> ports - it's external ID is 6110649a-db2b-4de7-8fbc-601095cfe510.
>>> >> >>
>>> >> >> It will ask you if you also want to delete it from the external
>>> >> >> provider, say yes.
>>> >> >
>>> >> >
>>> >> >
>>> >> > Inside the GUI I see only one ovn192 network and one ovn172 network
>>> and their external ids don't match the ones without ports...
>>> >> >
>>> >> > - ovn192
>>> >> > Id: 8fd63a10-a2ba-4c56-a8e0-0bc8d70be8b5
>>> >> > External ID: 32367d8a-460f-4447-b35a-abe9ea5187e0
>>> >> >
>>> >> > - ovn172
>>> >> > Id: 7546d5d3-a0e3-40d5-9d22-cf355da47d3a
>>> >> > External ID: 64c4c17f-cd67-4e29-939e-2b952495159f
>>> >> >
>>> >> > So I think I have to delete from command line
>>> >>
>>> >> Check pastebin [0],  with it you can safely delete those 2 networks.
>>> >> Last course of action would be to delete via ovn-nbctl - e.g.
>>> >> ovn-nbctl destroy logical_switch  - but hopefully it won't
>>> >> come to that.
>>> >>
>>> >> [0] - https://paste.fedoraproject.org/paste/mxVUEJZWxG-QHX0mJO1VhA
>>> >>
>>>
>>>
>> I get "not found" for both:
>>
>>  [root@ovmgr1 ~]# curl -k -X DELETE   '
>> https://localhost:9696/v2/networks/6110649a-db2b-4de7-8fbc-601095cfe510'
>>  -H 'X-Auth-Token:
>> WyutJuakjpSzJ4nj7drptpDfbAb3sKcZWvhF3NqRVXRyUpIHz9QGG_ZeeLi7u7trv7Er2D3vAcSX9LIFpXzz7w'
>> {
>>   "error": {
>> "message": "Cannot find Logical_Switch with
>> name=6110649a-db2b-4de7-8fbc-601095cfe510",
>> "code": 404,
>> "title": "Not Found"
>>   }
>> }
>> [root@ovmgr1 ~]# curl -k -X DELETE   '
>> https://localhost:9696/v2/networks/8fd63a10-a2ba-4c56-a8e0-0bc8d70be8b5'
>>  -H 'X-Auth-Token:
>> WyutJuakjpSzJ4nj7drptpDfbAb3sKcZWvhF3NqRVXRyUpIHz9QGG_ZeeLi7u7trv7Er2D3vAcSX9LIFpXzz7w'
>> {
>>   "error": {
>> "message": "Cannot find Logical_Switch with
>> name=8fd63a10-a2ba-4c56-a8e0-0bc8d70be8b5",
>> "code": 404,
>> "title": "Not Found"
>>   }
>> }
>> [root@ovmgr1 ~]#
>>
>> Is there a command to get the supposed list?
>>
>> Thanks for your help.
>> I'm also available to completely reset the OVN config if there is a way
>> for it...
>>
>> Gianluca
>>
>
>
> A GET call outputs this information :
>  [root@ovmgr1 ~]# curl -k -X GET 'https://localhost:9696/v2/networks' -H
> 'X-Auth-Token:
> WyutJuakjpSzJ4nj7drptpDfbAb3sKcZWvhF3NqRVXRyUpIHz9QGG_ZeeLi7u7trv7Er2D3vAcSX9LIFpXzz7w'
> {"networks": [{"status": "ACTIVE", "name": "ovn172", "tenant_id":
> "0001", "mtu": 1442, "port_security_enabled":
> false, "id": "64c4c17f-cd67-4e29-939e-2b952495159f"}, {"status": "ACTIVE",
> "name": "ovn172", "tenant_id": "0001", "mtu":
> 1442, "port_security_enabled": false, "id":
> "04501f6b-3977-4ba1-9ead-7096768d796d"}, {"status": "ACTIVE", "name":
> "ovn192", "tenant_id": "0001", "mtu": 1442,
> "port_security_enabled": false, "id":
> "32367d8a-460f-4447-b35a-abe9ea5187e0"}]}[root@ovmgr1 ~]#
> [root@ovmgr1 ~]#
>
>


[ovirt-users] Re: [ANN] oVirt 4.3.2 is now generally available

2019-03-20 Thread Sandro Bonazzola
On Wed, 20 Mar 2019 at 11:50, Stefano Danzi wrote:

> Hi! The documentation says to run "yum install
> https://resources.ovirt.org/pub/ovirt-4.3/rpm/el7/noarch/ovirt-node-ng-image-update-4.3.2-1.el7.noarch.rpm"
> to update from Node NG 4.2, but this rpm is missing.
>

Thanks for reporting, this should be fixed now.



>
> On 19/03/2019 at 15:53, Sandro Bonazzola wrote:
>
>
>
> On Tue, 19 Mar 2019 at 10:59, Sandro Bonazzola <sbona...@redhat.com> wrote:
>
>> The oVirt Project is pleased to announce the general availability of
>> oVirt 4.3.2, as of March 19th, 2019.
>>
>> This update is the second in a series of stabilization updates to the 4.3
>> series.
>>
>> This release is available now on x86_64 architecture for:
>> * Red Hat Enterprise Linux 7.6 or later
>> * CentOS Linux (or similar) 7.6 or later
>>
>> This release supports Hypervisor Hosts on x86_64 and ppc64le
>> architectures for:
>> * Red Hat Enterprise Linux 7.6 or later
>> * CentOS Linux (or similar) 7.6 or later
>> * oVirt Node 4.3 (available for x86_64 only)
>>
>> Experimental tech preview for x86_64 and s390x architectures for Fedora
>> 28 is also included.
>>
>> See the release notes [1] for installation / upgrade instructions and
>> a list of new features and bugs fixed.
>>
>> Notes:
>> - oVirt Appliance is already available
>> - oVirt Node is already available[2]
>>
>> oVirt Node has been updated including:
>> - oVirt 4.3.2: http://www.ovirt.org/release/4.3.2/
>> - Latest CentOS updates (no relevant errata available up to now on
>> https://lists.centos.org/pipermail/centos-announce )
>>
>
> Relevant errata have been published:
> CESA-2019:0512 Important CentOS 7 kernel Security Update
> 
> CESA-2019:0483 Moderate CentOS 7 openssl Security Update
> 
>
>
>
>
>
>
>>
>> Additional Resources:
>> * Read more about the oVirt 4.3.2 release highlights:
>> http://www.ovirt.org/release/4.3.2/
>> * Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
>> * Check out the latest project news on the oVirt blog:
>> http://www.ovirt.org/blog/
>>
>> [1] http://www.ovirt.org/release/4.3.2/
>> [2] http://resources.ovirt.org/pub/ovirt-4.3/iso/
>>
>>
>> --
>>
>> SANDRO BONAZZOLA
>>
>> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>>
>> Red Hat EMEA 
>>
>> sbona...@redhat.com
>> 
>>
>

-- 

SANDRO BONAZZOLA

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com



[ovirt-users] Re: [ANN] oVirt 4.3.2 is now generally available

2019-03-20 Thread Stefano Danzi
Hi! The documentation says to run "yum install
https://resources.ovirt.org/pub/ovirt-4.3/rpm/el7/noarch/ovirt-node-ng-image-update-4.3.2-1.el7.noarch.rpm"
to update from Node NG 4.2, but this rpm is missing.

On 19/03/2019 at 15:53, Sandro Bonazzola wrote:



On Tue, 19 Mar 2019 at 10:59, Sandro Bonazzola <sbona...@redhat.com> wrote:


The oVirt Project is pleased to announce the general availability
of oVirt 4.3.2, as of March 19th, 2019.
This update is the second in a series of stabilization updates to
the 4.3 series.
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 7.6 or later
* CentOS Linux (or similar) 7.6 or later

This release supports Hypervisor Hosts on x86_64 and ppc64le
architectures for:
* Red Hat Enterprise Linux 7.6 or later
* CentOS Linux (or similar) 7.6 or later
* oVirt Node 4.3 (available for x86_64 only)

Experimental tech preview for x86_64 and s390x architectures for
Fedora 28 is also included.
See the release notes [1] for installation / upgrade instructions and
a list of new features and bugs fixed.
Notes:
- oVirt Appliance is already available
- oVirt Node is already available[2]

oVirt Node has been updated including:
- oVirt 4.3.2: http://www.ovirt.org/release/4.3.2/
- Latest CentOS updates (no relevant errata available up to now on
https://lists.centos.org/pipermail/centos-announce )


Relevant errata have been published:
CESA-2019:0512 Important CentOS 7 kernel Security Update 

CESA-2019:0483 Moderate CentOS 7 openssl Security Update 





Additional Resources:
* Read more about the oVirt 4.3.2 release
highlights:http://www.ovirt.org/release/4.3.2/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt
blog:http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.3.2/
[2] http://resources.ovirt.org/pub/ovirt-4.3/iso/


-- 


SANDRO BONAZZOLA

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com 





[ovirt-users] Re: Can't connect "https://FQDN/ovirt-engine" after reinstall ovirt-engine

2019-03-20 Thread Yedidyah Bar David
Hi,

Are you the same person sending an email with identical subject, but
from a different (similar) email address? Please see my answer to that
one. Thanks.

On Wed, Mar 20, 2019 at 12:25 PM Александр Егоров  wrote:
>
> How do I repair the ovirt-engine?
>
> engine.log in attachment



-- 
Didi


[ovirt-users] Re: oVirt Node on CentOS 7.5 and AMD EPYC Support

2019-03-20 Thread Paul Martin
I've got 4.3.2 and am still not seeing Threadripper/Ryzen in the cluster
CPU list. Yes, it's probably identical to EPYC, but the hypervisor needs
to know this.



How do we add this?

--
Paul Martin | Senior Solutions Architect
PureWeb Inc.




[ovirt-users] Re: vender_id syntax UserDefinedVMProperties

2019-03-20 Thread Darin Schmidt
I was hoping someone would know how to, because I don't know Python.
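
For anyone who does: a minimal, untested sketch of the check Strahil suggests
below would be to touch the domain XML only when it already contains a hyperv
element (which oVirt normally emits only for Windows guests) and to pass Linux
guests through unchanged:

#!/usr/bin/python2
# Guarded variant of the 99_mask_kvm hook: only add vendor_id and the
# kvm/hidden flag when the domain already has a <hyperv> section.
import hooking

domxml = hooking.read_domxml()

hyperv_nodes = domxml.getElementsByTagName('hyperv')
if hyperv_nodes:                           # Windows guest
    vendor = domxml.createElement('vendor_id')
    vendor.setAttribute('state', 'on')
    vendor.setAttribute('value', '1234567890ab')
    hyperv_nodes[0].appendChild(vendor)

    features = domxml.getElementsByTagName('features')[0]
    kvm = domxml.createElement('kvm')
    hidden = domxml.createElement('hidden')
    hidden.setAttribute('state', 'on')
    kvm.appendChild(hidden)
    features.appendChild(kvm)

hooking.write_domxml(domxml)               # Linux guests pass through untouched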

On Wed, Mar 20, 2019, 12:52 AM Strahil  wrote:

> Can't you make the script check whether it is Windows or Linux and skip it if
> it's Linux?
>
> Best Regards,
> Strahil Nikolov
> On Mar 19, 2019 23:02, Darin Schmidt  wrote:
>
> You also need to have this code hooked in:
> cd /usr/libexec/vdsm/hooks/before_vm_start/
> vi 99_mask_kvm
>
> #!/usr/bin/python2
>
> import hooking
> domxml = hooking.read_domxml()
>
> hyperv = domxml.getElementsByTagName('hyperv')[0]
> smm = domxml.createElement('vendor_id')
> smm.setAttribute('state', 'on')
> smm.setAttribute('value', '1234567890ab')
> hyperv.appendChild(smm)
>
> features = domxml.getElementsByTagName('features')[0]
> kvm = domxml.createElement('kvm')
> hidden = domxml.createElement('hidden')
> hidden.setAttribute('state', 'on')
> kvm.appendChild(hidden)
> features.appendChild(kvm)
>
> hooking.write_domxml(domxml)
>
>
> The only problem now is that I can't boot a Linux VM with the vendor_id
> portion there...
>
> On Mon, Mar 18, 2019 at 3:30 PM Darin Schmidt 
> wrote:
>
> Seems that the system has to be running with bios Q35 UEFI. Standard bios
> does not work. System is operational now.
>
> On Mon, Mar 18, 2019, 6:30 AM Darin Schmidt 
> wrote:
>
> Still no luck getting the GTX 1080 to enable inside the VM. I see the code
> is being generated in the XML with the hook, but I still get error code 43.
> Someone mentioned doing it with a UEFI BIOS and that worked for them. So when
> I get back from work today, perhaps I'll give that a try.
>
> On Mon, Mar 18, 2019, 6:10 AM Darin Schmidt 
> wrote:
>
> I have gotten the system to see the card; it's in Device Manager. The
> problem seems to be that I cannot use it in the VM because, from what I have
> found out, it gets error code 43. Nvidia drivers disable
> the card if they detect that it's being used in a VM. I have found some code
> to hook it into the XML in before_vm_start.
>
> 99_mask_kvm
> #!/usr/bin/python2
>
> import hooking
> domxml = hooking.read_domxml()
>
> hyperv = domxml.getElementsByTagName('hyperv')[0]
> smm = domxml.createElement('vendor_id')
> smm.setAttribute('state', 'on')
> smm.setAttribute('value', '1234567890ab')
> hyperv.appendChild(smm)
>
> features = domxml.getElementsByTagName('features')[0]
> kvm = domxml.createElement('kvm')
> hidden = domxml.createElement('hidden')
> hidden.setAttribute('state', 'on')
> kvm.appendChild(hidden)
> features.appendChild(kvm)
>
> hooking.write_domxml(domxml)
>
>
> I am currently reinstalling the drivers to see if this helps.
>
> kvm off and vendor_id are now in the XML that gets generated when the
> VM is started. I'm going off of examples I'm finding online. Perhaps I just
> need to add the 10de to it instead of the generic value others are using.
>
> On Mon, Mar 18, 2019 at 6:02 AM Nisim Simsolo  wrote:
>
> Hi
>
> Vendor ID of Nvidia is usually 10de.
> You can locate 'vendor ID:
>
>


[ovirt-users] Re: vm_network not sync

2019-03-20 Thread fz
Any help??


[ovirt-users] Re: Hosted Engine I/O scheduler

2019-03-20 Thread Roy Golan
On Mon, 18 Mar 2019 at 22:14, Darrell Budic  wrote:

> I agree, been checking some of my more disk intensive VMs this morning,
> switching them to noop definitely improved responsiveness. All the virtio
> ones I’ve found were using deadline (with RHEL/Centos guests), but some of
> the virt-scsi were using deadline and some were noop, so I’m not sure of a
> definitive answer on that level yet.
>
> For the hosts, it depends on what your backend is running. With a separate
> storage server on my main cluster, it doesn’t matter what the hosts set for
> me. You mentioned you run hyper converged, so I’d say it depends on what
> your disks are. If you’re using SSDs, go none/noop as they don’t benefit
> from the queuing. If they are HDDs, I’d test cfq or deadline and see which
> gave better latency and throughput to your vms. I’d guess you’ll find
> deadline to offer better performance, but cfq to share better amongst
> multiple VMs. Unless you use ZFS underneath, then go noop and let ZFS take
> care of it.
>
> On Mar 18, 2019, at 2:05 PM, Strahil  wrote:
>
> Hi Darrel,
>
> Still, based on my experience we shouldn't queue our I/O in the VM, just
> to do the same in the Host.
>
> I'm still considering if I should keep deadline  in my hosts or to switch
> to 'cfq'.
> After all, I'm using Hyper-converged oVirt and this needs testing.
> What I/O scheduler  are  you using on the  host?
>
> Best Regards,
> Strahil Nikolov
> On Mar 18, 2019 19:15, Darrell Budic  wrote:
>
> Checked this on mine, see the same thing. Switching the engine to noop
> definitely feels more responsive.
>
> I checked on some VMs as well, it looks like virtio drives (vda, vdb….)
> get mq-deadline by default, but virtscsi gets noop. I used to think the
> tuned profile for virtual-guest would set noop, but apparently not…
>
>   -Darrell
>
>
Our internal scale team is now testing the 'throughput-performance' tuned
profile and it gives promising results; I suggest you try it as well.
We will go over the results of a comparison against the virtual-guest
profile, and if there is evidence of improvement we will set it as the default
(provided it doesn't degrade small and medium scale environments).


On Mar 18, 2019, at 1:58 AM, Strahil Nikolov  wrote:
>
> Hi All,
>
> I have changed my I/O scheduler to none and here are the results so far:
>
> Before (mq-deadline):
> Adding a disk to VM (initial creation) START: 2019-03-17 16:34:46.709
> Adding a disk to VM (initial creation) COMPLETED: 2019-03-17 16:45:17.996
>
> After (none):
> Adding a disk to VM (initial creation) START: 2019-03-18 08:52:02.xxx
> Adding a disk to VM (initial creation) COMPLETED: 2019-03-18 08:52:20.xxx
>
> Of course the results are inconclusive, as I have tested only once - but I
> feel the engine more responsive.
>
> Best Regards,
> Strahil Nikolov
>
> On Sunday, 17 March 2019 at 18:30:23 GMT+2, Strahil <hunter86...@yahoo.com> wrote:
>
>
> Dear All,
>
> I have just noticed that my Hosted Engine has  a strange I/O scheduler:
>
> Last login: Sun Mar 17 18:14:26 2019 from 192.168.1.43
> [root@engine ~]# cat /sys/block/vda/queue/scheduler
> [mq-deadline] kyber none
> [root@engine ~]#
>
> Based on my experience, anything other than noop/none is useless and
> performance-degrading for a VM.
>
> Is there any reason that we have this scheduler?
> It is quite pointless to process (and delay) the I/O in the VM and then
> process (and again delay) it at the host level.
>
> If there is no reason to keep the deadline, I will open a bug about it.
>
> Best Regards,
> Strahil Nikolov
>
>
>


[ovirt-users] Re: Undeploy oVirt Metrics Store

2019-03-20 Thread Shirly Radco
It is mentioned in the help. You should run the following from your
engine machine:

/usr/share/ovirt-engine-metrics/configure_ovirt_machines_for_metrics.sh
 --playbook=manage-ovirt-metrics-services.yml -e "service_enabled='no'
service_state='stopped'"

--

SHIRLY RADCO

BI SENIOR SOFTWARE ENGINEER

Red Hat Israel 



On Wed, Mar 13, 2019 at 7:34 PM  wrote:

> Deployed MetricStore on the engine and hosts as per the instructions. But
> using some time I realized that for me the functionality is redundant,
> enough data collected by DWH.
>
> https://www.ovirt.org/develop/release-management/features/metrics/metrics-store-installation.html
> Is there an instruction on how undeploy oVirt MetricStore?


[ovirt-users] Re: Can't connect "https://FQDN/ovirt-engine" after reinstall ovirt-engine

2019-03-20 Thread Yedidyah Bar David
On Wed, Mar 20, 2019 at 7:35 AM  wrote:
>
> On 19.03.2019, on the server with the ovirt-engine, I reinstalled the GUI and
> it seems to have removed the ovirt-engine; in any case, the ovirt-engine
> packages were no longer installed. I reinstalled the ovirt-engine package, but
> could not connect to the web interface. There are errors like these in
> /var/log/ovirt-engine/engine.log:
>
> 2019-03-20 11:37:12,660+09 INFO  
> [org.ovirt.engine.core.extensions.mgr.ExtensionsManager] (ServerService 
> Thread Pool -- 58) [] Initializing extension 'internal-authz'
> 2019-03-20 11:37:12,661+09 ERROR 
> [org.ovirt.engine.extension.aaa.jdbc.binding.api.AuthzExtension] 
> (ServerService Thread Pool -- 58) [] Unexpected Exception invoking: 
> EXTENSION_INITIALIZE[e5ae1b7f-9104-4f23-a444-7b9175ff68d2]
> 2019-03-20 11:37:12,662+09 ERROR 
> [org.ovirt.engine.core.extensions.mgr.ExtensionsManager] (ServerService 
> Thread Pool -- 58) [] Error in activating extension 'internal-authz': 
> Connection refused. Check that the hostname and port are correct and that the 
> postmaster is accepting TCP/IP connections.
> 2019-03-20 11:37:12,662+09 ERROR 
> [org.ovirt.engine.core.sso.utils.SsoExtensionsManager] (ServerService Thread 
> Pool -- 58) [] Could not initialize extension 'internal-authz'. Exception 
> message is: Class: class 
> org.ovirt.engine.core.extensions.mgr.ExtensionInvokeCommandFailedException
> ...
> 2019-03-20 11:37:13,046+09 ERROR 
> [org.ovirt.engine.ui.frontend.server.dashboard.DashboardDataServlet] 
> (ServerService Thread Pool -- 44) [] Could not access engine's DWH 
> configuration table: java.sql.SQLException: javax.resource.ResourceException: 
> IJ000453: Unable to get managed connection for java:/ENGINEDataSource
> ...
> 2019-03-20 11:37:13,049+09 WARN  
> [org.ovirt.engine.ui.frontend.server.dashboard.DashboardDataServlet] 
> (ServerService Thread Pool -- 44) [] No valid DWH configurations were found, 
> assuming DWH database isn't setup.
> 2019-03-20 11:37:13,049+09 INFO  
> [org.ovirt.engine.ui.frontend.server.dashboard.DashboardDataServlet] 
> (ServerService Thread Pool -- 44) [] Dashboard DB query cache has been 
> disabled.
>
> ovirt-engine-4.2.8.2-1
> CentOS 7,  3.10.0-693.2.2.el7.x86_64
>
> I have a backup file, but version 4.1. Recovering from it using engine-backup 
> is impossible.
> I also have a backup /etc before deleting the ovirt-engine.
>
> How do I restore the ovirt-engine?

Please provide more details. What exactly did you do?

If it's something like:

yum install ovirt-engine
engine-setup
yum remove ovirt-engine
yum install ovirt-engine

then:
1. Recovering from that is not supported nor tested, so take the
following with a grain of salt.
2. The biggest damage from 'yum remove' is removal of pki. You should
find a backup, though, in /etc/pki/ovirt-engine-backups/something. You
can try to restore that to /etc/pki/ovirt-engine and see what happens.
3. You can also simply try running 'engine-setup'. It should recreate
pki. This will also then require reinstalling all hosts (or at least
"Enroll Certificate").
4. Alternatively, you can try using your 4.1 engine-backup. Best way
is by installing 4.1, restoring the backup, then upgrading.
Alternatively, you can patch engine-backup to not refuse restoring the
older version, see e.g.
https://bugzilla.redhat.com/show_bug.cgi?id=1425788 .
5. Alternatively, you can compare your current /etc with the backup
you took, and carefully decide which files should be restored. Then
you can try running 'engine-setup' and see what happens. Obviously,
this is the best option in terms of chances for success, but is also
the most time-consuming.

Good luck and best regards,
-- 
Didi


[ovirt-users] Re: Daily reboots of Hosted Engine?

2019-03-20 Thread Juhani Rautiainen
On Tue, Mar 19, 2019 at 3:40 PM Juhani Rautiainen
 wrote:
>

> > while true;
> >do ping -c 1 -W 2 10.168.8.1 > /dev/null; echo $?; sleep 0.5;
> > done
>
> I'll try this tomorrow during the expected failure time.

And I found the reason. Nothing is wrong with oVirt. There is a big
file transfer going through the firewall every fifteen minutes, and its ping
response goes beyond horrible. And it's an enterprise-level firewall.

Sorry for wasting your time, and thanks for the help,
  Juhani