Re: [ovirt-users] Q: Optimal settings for DB hosting

2018-01-20 Thread Yaniv Kaul
On Jan 19, 2018 12:31 PM, "Gianluca Cecchi" 
wrote:

On Fri, Jan 19, 2018 at 11:15 AM, Yaniv Kaul  wrote:

>
>
> On Jan 19, 2018 10:52 AM, "andreil1"  wrote:
>
>
>
> Migration disabled.
>
>
Why enforce this? If the VM is so important, not being able to move it when
needed seems like a limitation.


It's related to CPU pinning and NUMA.



> Pass-through host CPU enabled
>
>
I don't know whether this is so important.
I tested with Oracle RDBMS and did not use it in my case.


In the specific case of Oracle, I actually suspect you must use CPU pinning
for licensing reasons. I suggest you check.

As for CPU passthrough, it might depend on which features you use.



>
> Any idea of NUMA settings?
>
>
> Indeed. + Huge pages, in both host and guest.
>

Do you think NUMA is so essential? It implies a non-migratable VM...
In my tests I didn't set NUMA.


It depends on the workload, really. NUMA and CPU pinning are often critical
for I/O-bound workloads. It also depends on how much optimization you are
after.
Y.
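
(For reference, a minimal huge-pages setup on the host side is sketched below;
the 2 MiB page size and the page count are assumptions and should be sized to
the database's shared memory, and the guest database also has to be configured
to use large pages, e.g. Oracle's use_large_pages parameter.)

# on each host: reserve 2 MiB huge pages persistently (example count)
echo "vm.nr_hugepages = 4096" > /etc/sysctl.d/90-hugepages.conf
sysctl -p /etc/sysctl.d/90-hugepages.conf
grep -i huge /proc/meminfo
# on the VM: request huge-page backing via the "hugepages" custom property
# (if I read the feature page right, the value is the page size in KiB,
# e.g. 2048), as described on the high performance VM feature page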



>
> In short, use a high performance VM. See the ovirt.org feature page.
> Y.
>
>
>
In my opinion the main limitation of a "High Performance VM" is that it is not
migratable (probably implied because NUMA is set?).
In that case, could NUMA be made an optional choice, so that you can decide
whether you want a migratable or non-migratable high performance VM?
The same goes for CPU passthrough; I don't remember whether it is an
included/fixed option in high performance VMs...

Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VirtIO-SCSI and viodiskcache custom property

2018-01-20 Thread Yaniv Kaul
On Jan 19, 2018 3:29 PM, "Matthias Leopold" <
matthias.leop...@meduniwien.ac.at> wrote:

Hi,

Is there a reason why the viodiskcache custom property isn't honored when
using VirtIO-SCSI?

On a Cinder (Ceph) disk "viodiskcache=writeback" is ignored with
VirtIO-SCSI and honored when using VirtIO.

On an iSCSI disk "viodiskcache=writeback" is ignored with VirtIO-SCSI and
the VM can't be started when using VirtIO with "unsupported configuration:
native I/O needs either no disk cache or directsync cache mode, QEMU will
fallback to aio=threads"

We actually want to use "viodiskcache=writeback" with Cinder (Ceph) disks.


That's because on block storage we use native I/O and not threads. I assume
the hook needs to change the I/O mode in this case.
Y.
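
(For context, the constraint behind that QEMU message is roughly the
following; this is an illustrative sketch of the resulting -drive cache/aio
combinations, not output taken from this setup.)

# aio=native requires the host page cache to be bypassed:
#   cache=none or cache=directsync  with aio=native   -> accepted
#   cache=writeback                 with aio=native   -> QEMU warns and falls back to aio=threads
# so honoring viodiskcache=writeback on block storage would mean requesting:
#   cache=writeback                 with aio=threads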


oVirt version: 4.1.8

Thanks
Matthias

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] All in one adds host failure

2018-01-20 Thread Pym
Hi:


I'm doing an oVirt all-in-one setup. I first used the Node ISO to set up the
base environment, and then compiled and installed oVirt from source on top of
that Node environment. When I add a new host through the web interface
(Host -> New), the log shows "Ansible playbook command has exited with value:
1" and the host installation fails.


The logs are as follows:


engine.log:
"2018-01-05 14:49:32,445+08 INFO  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(VdsDeploy) [78c6a9cb-ecc5-4e44-9366-13960ef04559] EVENT_ID: 
VDS_INSTALL_IN_PROGRESS(509), Installing Host tchyp-test.ecr.com. Stage: 
Termination.
2018-01-05 14:49:32,506+08 INFO  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(EE-ManagedThreadFactory-engine-Thread-12) 
[78c6a9cb-ecc5-4e44-9366-13960ef04559] EVENT_ID: 
VDS_ANSIBLE_INSTALL_STARTED(560), Ansible host-deploy playbook execution has 
started on host tchyp-test.ecr.com.
2018-01-05 14:49:32,507+08 INFO  
[org.ovirt.engine.core.common.utils.ansible.AnsibleExecutor] 
(EE-ManagedThreadFactory-engine-Thread-12) 
[78c6a9cb-ecc5-4e44-9366-13960ef04559] Executing Ansible command: 
/usr/bin/ansible-playbook 
--private-key=/home/pan/ovirt-engine/etc/pki/ovirt-engine/keys/engine_id_rsa 
--inventory=/tmp/ansible-inventory4777891582855267855 
--extra-vars=host_deploy_cluster_version=4.2 
--extra-vars=host_deploy_gluster_enabled=false 
--extra-vars=host_deploy_virt_enabled=true 
--extra-vars=host_deploy_vdsm_port=54321 
--extra-vars=host_deploy_override_firewall=true 
--extra-vars=host_deploy_firewall_type=FIREWALLD --extra-vars=ansible_port=22 
--extra-vars=host_deploy_post_tasks=/home/pan/ovirt-engine/etc/ovirt-engine/ansible/ovirt-host-deploy-post-tasks.yml
 --extra-vars=host_deploy_ovn_tunneling_interface=127.0.0.1 
--extra-vars=host_deploy_ovn_central=null 
/home/pan/ovirt-engine/share/ovirt-engine/playbooks/ovirt-host-deploy.yml
2018-01-05 14:49:32,788+08 INFO  
[org.ovirt.engine.core.common.utils.ansible.AnsibleExecutor] 
(EE-ManagedThreadFactory-engine-Thread-12) 
[78c6a9cb-ecc5-4e44-9366-13960ef04559] Ansible playbook command has exited with 
value: 1
2018-01-05 14:49:32,792+08 ERROR 
[org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand] 
(EE-ManagedThreadFactory-engine-Thread-12) 
[78c6a9cb-ecc5-4e44-9366-13960ef04559] Host installation failed for host 
'dbefa58b-3ee2-439b-8e9f-7ece571abe32', 'tchyp-test.ecr.com': 250
2018-01-05 14:49:32,796+08 INFO  
[org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] 
(EE-ManagedThreadFactory-engine-Thread-12) 
[78c6a9cb-ecc5-4e44-9366-13960ef04559] START, SetVdsStatusVDSCommand(HostName = 
tchyp-test.ecr.com, 
SetVdsStatusVDSCommandParameters:{hostId='dbefa58b-3ee2-439b-8e9f-7ece571abe32',
 status='InstallFailed', nonOperationalReason='NONE', 
stopSpmFailureLogged='false', maintenanceReason='null'}), log id: af0b8d1
2018-01-05 14:49:32,803+08 INFO  
[org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] 
(EE-ManagedThreadFactory-engine-Thread-12) 
[78c6a9cb-ecc5-4e44-9366-13960ef04559] FINISH, SetVdsStatusVDSCommand, log id: 
af0b8d1
2018-01-05 14:49:32,809+08 ERROR 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(EE-ManagedThreadFactory-engine-Thread-12) 
[78c6a9cb-ecc5-4e44-9366-13960ef04559] EVENT_ID: VDS_INSTALL_FAILED(505), Host 
tchyp-test.ecr.com installation failed. 250."


ovirt-host-deploy-ansible-20180105144932-127.0.0.1-78c6a9cb-ecc5-4e44-9366-13960ef04559.log:
"ERROR! Unexpected Exception: [Errno 13] Permission denied
to see the full traceback, use -vvv"
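
(A hedged note: one way to get that full traceback is to re-run the exact
ansible-playbook command from engine.log above with -vvv added, as the same
user the engine runs as, and to check that this user can read the private key
and traverse the directories; the "..." below stands for the remaining
--extra-vars flags already shown in the log.)

# re-run the playbook with verbose output
/usr/bin/ansible-playbook -vvv --private-key=/home/pan/ovirt-engine/etc/pki/ovirt-engine/keys/engine_id_rsa ... /home/pan/ovirt-engine/share/ovirt-engine/playbooks/ovirt-host-deploy.yml
# verify the engine user can read the key and traverse the directories
ls -l /home/pan/ovirt-engine/etc/pki/ovirt-engine/keys/engine_id_rsa
ls -ld /home/pan /home/pan/ovirt-engine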


What should I do now to add the host successfully?


Thanks.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Configuration of FCoE in oVirt 4.2 on HP BladeSystem c7000

2018-01-20 Thread Gunder Johansen
Thanks, Fred.
I have been looking at the FCoE VDSM hook, to no avail. Looking at it again, I
am still not able to see any FCoE:

[root@ovirtengine ~]# engine-config -g UserDefinedNetworkCustomProperties
UserDefinedNetworkCustomProperties:  version: 3.6
UserDefinedNetworkCustomProperties:  version: 4.0
UserDefinedNetworkCustomProperties: fcoe=^((enable|dcb|auto_vlan)=(yes|no),?)*$ version: 4.1
UserDefinedNetworkCustomProperties: fcoe=^((enable|dcb|auto_vlan)=(yes|no),?)*$ version: 4.2
I finally managed to find the custom property where I could set
"enable=yes,dcb=no" for fcoe, but when applying the change I get an unexpected
error. Yes, the host was in local maintenance mode when I applied the change.
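
(For reference, the usual wiring with the vdsm FCoE hook is sketched below,
per its documentation; the package name and steps are assumed rather than
verified on this setup.)

# on each hypervisor carrying the FCoE-enabled NIC
yum install -y vdsm-hook-fcoe
systemctl restart vdsmd
# in the engine: edit the logical network used for FCoE, open Custom
# Properties, pick "fcoe" and set:  enable=yes,dcb=no
# then attach that network to the relevant host NIC via Setup Host Networks
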
I am afraid I do not understand all the steps needed, from the Virtual Connect
configuration in the blade enclosure to the network interfaces inside oVirt.
Should I add a new FCoE-only network interface, and should it have a VLAN/IP
address in a special range compared to the internal network in the rack?
Thanks again.




 
 
From: Fred Rolland [mailto:froll...@redhat.com]
Sent: 13 January 2018 14:21
To: Luca 'remix_tj' Lorenzetto
Cc: Gunder Johansen; users
Subject: Re: [ovirt-users] Configuration of FCoE in oVirt 4.2 on HP BladeSystem c7000


 
Take a look also at the FCoE Vdsm hook:

oVirt/vdsm
vdsm - This is a mirror for http://gerrit.ovirt.org, for issues use http://bugzilla.redhat.com

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

