[ovirt-users] Re: Future of the oVirt CSI Driver
On Tue, Jul 11, 2023 at 6:32 PM Mike Rochefort wrote:
> But not having anyone develop the CSI driver means it will likely end up
> with bit rot. With OpenShift 4.14 the oVirt platform becomes less
> attractive to use, though IPI/UPI installations should be possible. A
> different storage backend would probably be recommended, however.

OKD can still use the oVirt CSI driver as an optional component (installable via an operator). The CSI driver in the payload will probably be removed soon anyway.

--
Cheers,
Vadim

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/NOU72LMLYDBQP26S7CP7M7JY7ERSZHMF/
[ovirt-users] Re: oVirt tools
On Mon, 2019-08-05 at 16:09 +0300, Alexey Ivanov wrote:
> Greetings,
> Thank you for your reply. I know there is such an option in vSphere
> (you need to modify the VM config file for it). Is this option on the
> oVirt roadmap?

Hi Alexey. Honestly, I don't recall if we were ever asked to implement it. In any case, it should be doable. You are welcome to file a bug report at https://bugzilla.redhat.com (Classification: Community, Product: Virtualization Tools, Component: virtio-win) if you want to escalate this issue.

Best,
Vadim.

> Best regards,
> Alexey Ivanov.
>
> On Mon, Aug 5, 2019 at 5:31 AM Vadim Rozenfeld wrote:
> > On Sun, 2019-08-04 at 11:55 +0300, Gal Zaidman wrote:
> > > Hi,
> > > I don't think it is related to the wgt installer but to the
> > > virtio-win drivers, adding @Vadim Rozenfeld and @Gal Hammer
> >
> > Unfortunately there is no easy way to make a virtio device
> > non-ejectable at the system level. However, you can try a couple of
> > simple solutions, like removing "Safely Remove Hardware and Eject
> > Media" from the Task Bar, or making a simple batch file that runs at
> > system startup and zeroes the CM_DEVCAP_REMOVABLE bit (bit 4) in the
> > device's Capabilities registry parameter.
> > Best,
> > Vadim.
> >
> > > On Fri, Aug 2, 2019 at 4:32 PM wrote:
> > > > Greetings,
> > > >
> > > > After installing oVirt tools on my W2016 virtual server I've
> > > > noticed that all devices (NIC, disks, etc.) are connected as
> > > > USB devices and I have the option to eject those devices.
> > > > Is there any way to prevent those devices from being ejected by
> > > > a VM user? (e.g. make sure no one inside the VM can actually
> > > > eject a NIC, disk or anything else)
[ovirt-users] Re: oVirt tools
On Sun, 2019-08-04 at 11:55 +0300, Gal Zaidman wrote:
> Hi,
> I don't think it is related to the wgt installer but to the virtio-win
> drivers, adding @Vadim Rozenfeld and @Gal Hammer

Unfortunately there is no easy way to make a virtio device non-ejectable at the system level. However, you can try a couple of simple solutions, like removing "Safely Remove Hardware and Eject Media" from the Task Bar, or making a simple batch file that runs at system startup and zeroes the CM_DEVCAP_REMOVABLE bit (bit 4) in the device's Capabilities registry parameter.

Best,
Vadim.

> On Fri, Aug 2, 2019 at 4:32 PM wrote:
> > Greetings,
> >
> > After installing oVirt tools on my W2016 virtual server I've noticed
> > that all devices (NIC, disks, etc.) are connected as USB devices and
> > I have the option to eject those devices. Is there any way to prevent
> > those devices from being ejected by a VM user? (e.g. make sure no one
> > inside the VM can actually eject a NIC, disk or anything else)
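[Editor's note] The startup script Vadim describes can be sketched as follows. This is a hedged illustration, not an official virtio-win tool: it assumes Python is available in the guest, the device instance path below is a placeholder you must look up yourself (Device Manager -> device -> Details -> "Device instance path"), and the Capabilities value may be rewritten by the bus driver on re-enumeration, which is exactly why the thread suggests re-running it at every startup.

```python
# Sketch: clear the CM_DEVCAP_REMOVABLE bit in a device's Capabilities
# registry value so Windows stops offering "Eject" for it.
# Windows-only; DEVICE_KEY below is a hypothetical placeholder.
import sys

CM_DEVCAP_REMOVABLE = 0x4  # the bit the thread refers to


def clear_removable(capabilities: int) -> int:
    """Return the Capabilities DWORD with the removable bit cleared."""
    return capabilities & ~CM_DEVCAP_REMOVABLE


if sys.platform == "win32":
    import winreg

    # Replace with the real "Device instance path" from Device Manager.
    DEVICE_KEY = r"SYSTEM\CurrentControlSet\Enum\PCI\<device-instance-path>"
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, DEVICE_KEY, 0,
                        winreg.KEY_READ | winreg.KEY_SET_VALUE) as key:
        caps, _ = winreg.QueryValueEx(key, "Capabilities")
        winreg.SetValueEx(key, "Capabilities", 0, winreg.REG_DWORD,
                          clear_removable(caps))
```

Scheduling it via Task Scheduler at boot keeps the bit cleared even if Plug and Play re-reports the device capabilities.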
[ovirt-users] Re: Windows Server 2019: Driver Signature Enforcement
+Amnon & Martin

On Fri, 2019-06-14 at 17:52 +1000, Vadim Rozenfeld wrote:
> On Thu, 2019-06-13 at 15:24 -0300, Vinícius Ferrão wrote:
> > Lev, thanks for the reply.
> > So basically Windows with Secure Boot UEFI is simply "broken" within
> > oVirt? Will Red Hat reconsider this? Since one of the "selling
> > points" of oVirt 4.3 was UEFI support. Can the RH WHQL drivers be
> > shipped with oVirt?
>
> Technically, WHQL signing is not required to satisfy Secure Boot
> requirements. UEFI signing should be enough, but there might be some
> license issues.
> https://techcommunity.microsoft.com/t5/Windows-Hardware-Certification/Microsoft-UEFI-CA-Signing-policy-updates/ba-p/364828
> Thanks,

On 13 Jun 2019, at 07:25, Lev Veyde wrote:
> Hi,
> I think that it's expected behaviour.
> In secure mode only WHQL'd drivers are allowed to be loaded into the OS
> kernel, and while RHEV/RHV provides WHQL'd drivers, oVirt users receive
> RH-signed ones, which from the OS standpoint are basically not
> certified.
> Thanks in advance,
>
> On Thu, Jun 13, 2019 at 1:12 PM Yedidyah Bar David wrote:
> > On Mon, Jun 10, 2019 at 10:54 PM Vinícius Ferrão wrote:
> > > RHV drivers work. oVirt drivers do not. Checked this now.
> > > I'm not sure if this is intended or not, but oVirt drivers aren't
> > > signed for Windows.
> >
> > oVirt's drivers are simply copied from virtio-win repositories.
> > Adding Vadim from the virtio-win team.
> > Best regards,
> >
> > > On 29 May 2019, at 21:41, mich...@wanderingmad.com wrote:
> > > > I'm running Server 2012R2, 2016, and 2019 with no issue using the
> > > > Red Hat signed drivers from RHEV.
> >
> > --
> > Didi
>
> --
> Lev Veyde
> Software Engineer, RHCE | RHCVA | MCITP
> Red Hat Israel
> l...@redhat.com | lve...@redhat.com
> TRIED. TESTED. TRUSTED.
[ovirt-users] Re: Windows Server 2019: Driver Signature Enforcement
On Fri, 2019-06-14 at 12:01 -0300, Vinícius Ferrão wrote:
> Hello Vadim,
> I got the working drivers from RHEL8:
> eecdff62b5d148f02dc92d7115631175  virtio-win-1.9.7-rhel8.iso
>
> Non-working ones directly from the Hosted Engine:
> c55e2815bc7090f077cf82aed5c90423  virtio-win-0.1.171.iso
>
> Thanks,

Hello Vinícius,
Do you use OVMF or SeaBIOS? If it is OVMF, then Lev already nailed the problem. Only WHQL- or UEFI-signed drivers will work with Secure Boot.

Best regards,
Vadim.

> > On 13 Jun 2019, at 23:33, Vadim Rozenfeld wrote:
> > On Thu, 2019-06-13 at 11:36 +0300, Yedidyah Bar David wrote:
> > > On Mon, Jun 10, 2019 at 10:54 PM Vinícius Ferrão wrote:
> > > > RHV drivers work. oVirt drivers do not. Checked this now.
> > > > I'm not sure if this is intended or not, but oVirt drivers
> > > > aren't signed for Windows.
> > >
> > > oVirt's drivers are simply copied from virtio-win repositories.
> > > Adding Vadim from the virtio-win team.
> > > Best regards,
> >
> > Hi guys,
> > Can you please help us to reproduce the problem? It would be great
> > if you could provide me with the following information:
> > - qemu and host kernel versions,
> > - qemu command line,
> > - RHV and oVirt driver versions.
> > Cheers,
> > Vadim.
> >
> > > > > On 29 May 2019, at 21:41, mich...@wanderingmad.com wrote:
> > > > > I'm running Server 2012R2, 2016, and 2019 with no issue using
> > > > > the Red Hat signed drivers from RHEV.
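[Editor's note] Vadim's "OVMF or SeaBIOS?" question can be answered from the host without opening the guest. A hedged sketch (not an official oVirt interface): scan qemu processes for an OVMF firmware image on their command line. The "OVMF" substring match is a heuristic, since firmware paths differ between distributions.

```python
# Sketch: detect from the host which running qemu VMs were started
# with OVMF (UEFI) rather than SeaBIOS, by inspecting /proc cmdlines.
import glob


def is_ovmf_cmdline(argv):
    """True if a qemu argv references an OVMF firmware image."""
    return bool(argv) and b"qemu" in argv[0] and any(b"OVMF" in a for a in argv)


def ovmf_qemu_pids():
    """Return PIDs of qemu processes whose command line mentions OVMF."""
    pids = []
    for path in glob.glob("/proc/[0-9]*/cmdline"):
        try:
            with open(path, "rb") as f:
                argv = f.read().split(b"\0")  # args are NUL-separated
        except OSError:
            continue  # process exited while we scanned
        if is_ovmf_cmdline(argv):
            pids.append(path.split("/")[2])
    return pids
```

If a VM's qemu command line references an OVMF_CODE image (especially a `.secboot` build), Secure Boot driver-signing rules apply to it, which matches Lev's diagnosis.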
[ovirt-users] Re: Windows Server 2019: Driver Signature Enforcement
On Thu, 2019-06-13 at 15:24 -0300, Vinícius Ferrão wrote:
> Lev, thanks for the reply.
> So basically Windows with Secure Boot UEFI is simply "broken" within
> oVirt?
>
> Will Red Hat reconsider this? Since one of the "selling points" of
> oVirt 4.3 was UEFI support. Can the RH WHQL drivers be shipped with
> oVirt?

Technically, WHQL signing is not required to satisfy Secure Boot requirements. UEFI signing should be enough, but there might be some license issues.
https://techcommunity.microsoft.com/t5/Windows-Hardware-Certification/Microsoft-UEFI-CA-Signing-policy-updates/ba-p/364828

> Thanks,
>
> > On 13 Jun 2019, at 07:25, Lev Veyde wrote:
> > Hi,
> > I think that it's expected behaviour.
> > In secure mode only WHQL'd drivers are allowed to be loaded into the
> > OS kernel, and while RHEV/RHV provides WHQL'd drivers, oVirt users
> > receive RH-signed ones, which from the OS standpoint are basically
> > not certified.
> > Thanks in advance,
> >
> > On Thu, Jun 13, 2019 at 1:12 PM Yedidyah Bar David wrote:
> > > On Mon, Jun 10, 2019 at 10:54 PM Vinícius Ferrão wrote:
> > > > RHV drivers work. oVirt drivers do not. Checked this now.
> > > > I'm not sure if this is intended or not, but oVirt drivers
> > > > aren't signed for Windows.
> > >
> > > oVirt's drivers are simply copied from virtio-win repositories.
> > > Adding Vadim from the virtio-win team.
> > > Best regards,
> > >
> > > > > On 29 May 2019, at 21:41, mich...@wanderingmad.com wrote:
> > > > > I'm running Server 2012R2, 2016, and 2019 with no issue using
> > > > > the Red Hat signed drivers from RHEV.
> > >
> > > --
> > > Didi
> >
> > --
> > Lev Veyde
> > Software Engineer, RHCE | RHCVA | MCITP
> > Red Hat Israel
> > l...@redhat.com | lve...@redhat.com
> > TRIED. TESTED. TRUSTED.
[ovirt-users] Re: Windows Server 2019: Driver Signature Enforcement
On Thu, 2019-06-13 at 11:36 +0300, Yedidyah Bar David wrote:
> On Mon, Jun 10, 2019 at 10:54 PM Vinícius Ferrão wrote:
> > RHV drivers work. oVirt drivers do not. Checked this now.
> >
> > I'm not sure if this is intended or not, but oVirt drivers aren't
> > signed for Windows.
>
> oVirt's drivers are simply copied from virtio-win repositories.
> Adding Vadim from the virtio-win team.
> Best regards,

Hi guys,

Can you please help us to reproduce the problem? It would be great if you could provide me with the following information:

- qemu and host kernel versions,
- qemu command line,
- RHV and oVirt driver versions.

Cheers,
Vadim.

> > > On 29 May 2019, at 21:41, mich...@wanderingmad.com wrote:
> > > I'm running Server 2012R2, 2016, and 2019 with no issue using the
> > > Red Hat signed drivers from RHEV.
[ovirt-users] Can't get hosted-engine console in cli
Hi, users

I installed ovirt-4.2.2.6-1 with a hosted engine on an iSCSI domain, on a clean CentOS 7.4 using Cockpit, and found that:

1. I can't get a console in the CLI:

# hosted-engine --console
The engine VM is running on this host
Connected to domain HostedEngine
Escape character is ^]
error: internal error: cannot find character device

The GUI console works fine. Maybe this is because vm.conf in 4.2.2 has display=qxl?

2. I can't import a VM from an OVA. Errors from engine.log:

2018-05-03 12:19:29,112+03 ERROR [org.ovirt.engine.core.bll.exportimport.ImportVmFromOvaCommand] (EE-ManagedThreadFactory-engine-Thread-11299) [8cc80ae5-cccf-4554-b52d-4892548eea8c] Command 'org.ovirt.engine.core.bll.exportimport.ImportVmFromOvaCommand' failed: For input string: ""
2018-05-03 12:19:29,112+03 ERROR [org.ovirt.engine.core.bll.exportimport.ImportVmFromOvaCommand] (EE-ManagedThreadFactory-engine-Thread-11299) [8cc80ae5-cccf-4554-b52d-4892548eea8c] Exception: java.lang.NumberFormatException: For input string: ""
2018-05-03 12:19:29,119+03 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-11299) [8cc80ae5-cccf-4554-b52d-4892548eea8c] EVENT_ID: IMPORTEXPORT_IMPORT_VM_FAILED(1,153), Failed to import Vm VM7.2.0-x86_64 to Data Center Default, Cluster Default

On ovirt-4.1.9.1-1 the import works fine.

Can you help me solve these problems?

Thanks,
Vadim

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] ovirt-ha-agent service errors
Hi, All

After the upgrade to 4.1.5 there are no more error messages ;-) Thanks to the TEAM.

On Mon, 21 Aug 2017 14:29:49 +0300, Vadim wrote:
> Hi, All
>
> ovirt 4.1.4, fresh install on two hosts with hosted-engine on both.
> The gluster volume is replica 3, with two vdsm hosts and one VM under ESXi.
> Only one VM, the HE, is running.
>
> Sometimes I have such errors in ha-agent:
>
> # service ovirt-ha-agent status
> Redirecting to /bin/systemctl status ovirt-ha-agent.service
> ● ovirt-ha-agent.service - oVirt Hosted Engine High Availability Monitoring Agent
>    Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-agent.service; enabled; vendor preset: disabled)
>    Active: active (running) since Thu 2017-08-17 11:43:44 MSK; 3 days ago
>  Main PID: 2534 (ovirt-ha-agent)
>    CGroup: /system.slice/ovirt-ha-agent.service
>            └─2534 /usr/bin/python /usr/share/ovirt-hosted-engine-ha/ovirt-ha-agent --no-daemon
>
> Aug 21 00:29:11 kvm03 ovirt-ha-agent[2534]: ovirt-ha-agent ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERROR Engine VM has bad health status, timeout in 300 seconds
> Aug 21 00:48:32 kvm03 ovirt-ha-agent[2534]: ovirt-ha-agent ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERROR Engine VM has bad health status, timeout in 300 seconds
> Aug 21 01:12:05 kvm03 ovirt-ha-agent[2534]: ovirt-ha-agent ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERROR Engine VM has bad health status, timeout in 300 seconds
> Aug 21 02:12:09 kvm03 ovirt-ha-agent[2534]: ovirt-ha-agent ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERROR Engine VM has bad health status, timeout in 300 seconds
> Aug 21 03:55:08 kvm03 ovirt-ha-agent[2534]: ovirt-ha-agent ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERROR Engine VM has bad health status, timeout in 300 seconds
> Aug 21 08:14:05 kvm03 ovirt-ha-agent[2534]: ovirt-ha-agent ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERROR Engine VM has bad health status, timeout in 300 seconds
> Aug 21 08:25:06 kvm03 ovirt-ha-agent[2534]: ovirt-ha-agent ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERROR Engine VM has bad health status, timeout in 300 seconds
> Aug 21 08:46:05 kvm03 ovirt-ha-agent[2534]: ovirt-ha-agent ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERROR Engine VM has bad health status, timeout in 300 seconds
> Aug 21 09:20:06 kvm03 ovirt-ha-agent[2534]: ovirt-ha-agent ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERROR Engine VM has bad health status, timeout in 300 seconds
> Aug 21 09:21:40 kvm03 ovirt-ha-agent[2534]: ovirt-ha-agent ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERROR Engine VM has bad health status, timeout in 300 seconds
>
> In the agent log:
>
> MainThread::INFO::2017-08-21 09:21:40,314::ovf_store::103::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(scan) Found OVF_STORE: imgUUID:c7b33006-e7c7-4e39-8d80-2301149ac8f9, volUUID:184f9e45-ab1b-44b8-8a68-238042dba1a7
> MainThread::INFO::2017-08-21 09:21:40,594::ovf_store::103::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(scan) Found OVF_STORE: imgUUID:e7a3c173-9f87-4f6c-a807-63118b9b7cb2, volUUID:92317b81-1bb0-43e6-b029-8931aa5d0af0
> MainThread::INFO::2017-08-21 09:21:40,716::ovf_store::112::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF) Extracting Engine VM OVF from the OVF_STORE
> MainThread::INFO::2017-08-21 09:21:40,749::ovf_store::119::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF) OVF_STORE volume path: /rhev/data-center/mnt/glusterSD/localhost:_ovha/7b6badfb-4986-4983-9f62-ae55da33d15e/images/e7a3c173-9f87-4f6c-a807-63118b9b7cb2/92317b81-1bb0-43e6-b029-8931aa5d0af0
> MainThread::INFO::2017-08-21 09:21:40,787::config::431::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(_get_vm_conf_content_from_ovf_store) Found an OVF for HE VM, trying to convert
> MainThread::INFO::2017-08-21 09:21:40,792::config::436::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(_get_vm_conf_content_from_ovf_store) Got vm.conf from OVF_STORE
> MainThread::ERROR::2017-08-21 09:21:40,792::states::606::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) Engine VM has bad health status, timeout in 300 seconds
> MainThread::INFO::2017-08-21 09:21:40,792::states::430::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) Engine vm running on localhost
> MainThread::INFO::2017-08-21 09:21:40,796::state_decorators::88::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check) Timeout cleared while transitioning 'ovirt_hosted_engine_ha.agent.states.Eng
[ovirt-users] ovirt-ha-agent service errors
ializing VDSM

There are no errors on the web dashboard. How is it possible to find the reason for the error?

--
Thanks,
Vadim
Re: [ovirt-users] glusterfs Error message constantly being reported
Hi, Kasturi

Thanks for the reply. After a restart of supervdsm there are no messages in the log.

On Fri, 18 Aug 2017 10:37:53 +0300, Kasturi Narra wrote:
> Hi,
>
> Can you please check if you have the vdsm-gluster package installed on
> the system?
>
> Thanks,
> kasturi
>
> On Wed, Aug 16, 2017 at 6:12 PM, Vadim wrote:
> > Hi, All
> >
> > ovirt 4.1.4 fresh install.
> > Constantly seeing this message in the logs; how do I fix this?
> >
> > VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]' object has no attribute 'glusterTasksList'
> > VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]' object has no attribute 'glusterTasksList'
> > VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]' object has no attribute 'glusterTasksList'
> > VDSM kvm04 command GetGlusterVolumeAdvancedDetailsVDS failed: 'AutoProxy[instance]' object has no attribute 'glusterVolumeStatvfs'
> > VDSM kvm04 command GetGlusterVolumeAdvancedDetailsVDS failed: 'AutoProxy[instance]' object has no attribute 'glusterVolumeStatvfs'
> > VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]' object has no attribute 'glusterTasksList'
> > VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]' object has no attribute 'glusterTasksList'
> > VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]' object has no attribute 'glusterTasksList'
> >
> > --
> > Thanks,
> > Vadim

--
Thanks,
Vadim
Re: [ovirt-users] glusterfs Error message constantly being reported
Hi, Sahina

On all hosts:

# rpm -qa | grep vdsm-gluster
vdsm-gluster-4.19.24-1.el7.centos.noarch

On Wed, 16 Aug 2017 17:49:14 +0300, Sahina Bose wrote:
> Can you check if you have the vdsm-gluster rpm installed on the hosts?
>
> On Wed, Aug 16, 2017 at 7:08 PM, Vadim wrote:
> > In vdsm.log:
> >
> > 2017-08-16 16:34:15,314+0300 ERROR (jsonrpc/5) [jsonrpc.JsonRpcServer] Internal server error (__init__:577)
> > Traceback (most recent call last):
> >   File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 572, in _handle_request
> >     res = method(**params)
> >   File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 198, in _dynamicMethod
> >     result = fn(*methodArgs)
> >   File "/usr/lib/python2.7/site-packages/vdsm/gluster/apiwrapper.py", line 117, in status
> >     return self._gluster.volumeStatus(volumeName, brick, statusOption)
> >   File "/usr/lib/python2.7/site-packages/vdsm/gluster/api.py", line 89, in wrapper
> >     rv = func(*args, **kwargs)
> >   File "/usr/lib/python2.7/site-packages/vdsm/gluster/api.py", line 411, in volumeStatus
> >     data = self.svdsmProxy.glusterVolumeStatvfs(volumeName)
> >   File "/usr/lib/python2.7/site-packages/vdsm/supervdsm.py", line 53, in __call__
> >     return callMethod()
> >   File "/usr/lib/python2.7/site-packages/vdsm/supervdsm.py", line 50, in
> >     getattr(self._supervdsmProxy._svdsm, self._funcName)(*args,
> > AttributeError: 'AutoProxy[instance]' object has no attribute 'glusterVolumeStatvfs'
> >
> > 2017-08-16 16:37:39,566+0300 ERROR (jsonrpc/3) [jsonrpc.JsonRpcServer] Internal server error (__init__:577)
> > Traceback (most recent call last):
> >   File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 572, in _handle_request
> >     res = method(**params)
> >   File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 198, in _dynamicMethod
> >     result = fn(*methodArgs)
> >   File "/usr/lib/python2.7/site-packages/vdsm/gluster/apiwrapper.py", line 109, in list
> >     return self._gluster.tasksList(taskIds)
> >   File "/usr/lib/python2.7/site-packages/vdsm/gluster/api.py", line 89, in wrapper
> >     rv = func(*args, **kwargs)
> >   File "/usr/lib/python2.7/site-packages/vdsm/gluster/api.py", line 507, in tasksList
> >     status = self.svdsmProxy.glusterTasksList(taskIds)
> >   File "/usr/lib/python2.7/site-packages/vdsm/supervdsm.py", line 53, in __call__
> >     return callMethod()
> >   File "/usr/lib/python2.7/site-packages/vdsm/supervdsm.py", line 50, in
> >     getattr(self._supervdsmProxy._svdsm, self._funcName)(*args,
> > AttributeError: 'AutoProxy[instance]' object has no attribute 'glusterTasksList'
> >
> > On Wed, 16 Aug 2017 16:08:24 +0300, Vadim wrote:
> > > Hi, All
> > >
> > > ovirt 4.1.4 fresh install.
> > > Constantly seeing this message in the logs; how do I fix this?
> > >
> > > VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]' object has no attribute 'glusterTasksList'
> > > VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]' object has no attribute 'glusterTasksList'
> > > VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]' object has no attribute 'glusterTasksList'
> > > VDSM kvm04 command GetGlusterVolumeAdvancedDetailsVDS failed: 'AutoProxy[instance]' object has no attribute 'glusterVolumeStatvfs'
> > > VDSM kvm04 command GetGlusterVolumeAdvancedDetailsVDS failed: 'AutoProxy[instance]' object has no attribute 'glusterVolumeStatvfs'
> > > VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]' object has no attribute 'glusterTasksList'
> > > VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]' object has no attribute 'glusterTasksList'
> > > VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]' object has no attribute 'glusterTasksList'
> > >
> > > --
> > > Thanks,
> > > Vadim

--
Thanks,
Vadim
Re: [ovirt-users] glusterfs Error message constantly being reported
In vdsm.log:

2017-08-16 16:34:15,314+0300 ERROR (jsonrpc/5) [jsonrpc.JsonRpcServer] Internal server error (__init__:577)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 572, in _handle_request
    res = method(**params)
  File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 198, in _dynamicMethod
    result = fn(*methodArgs)
  File "/usr/lib/python2.7/site-packages/vdsm/gluster/apiwrapper.py", line 117, in status
    return self._gluster.volumeStatus(volumeName, brick, statusOption)
  File "/usr/lib/python2.7/site-packages/vdsm/gluster/api.py", line 89, in wrapper
    rv = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/gluster/api.py", line 411, in volumeStatus
    data = self.svdsmProxy.glusterVolumeStatvfs(volumeName)
  File "/usr/lib/python2.7/site-packages/vdsm/supervdsm.py", line 53, in __call__
    return callMethod()
  File "/usr/lib/python2.7/site-packages/vdsm/supervdsm.py", line 50, in
    getattr(self._supervdsmProxy._svdsm, self._funcName)(*args,
AttributeError: 'AutoProxy[instance]' object has no attribute 'glusterVolumeStatvfs'

2017-08-16 16:37:39,566+0300 ERROR (jsonrpc/3) [jsonrpc.JsonRpcServer] Internal server error (__init__:577)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 572, in _handle_request
    res = method(**params)
  File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 198, in _dynamicMethod
    result = fn(*methodArgs)
  File "/usr/lib/python2.7/site-packages/vdsm/gluster/apiwrapper.py", line 109, in list
    return self._gluster.tasksList(taskIds)
  File "/usr/lib/python2.7/site-packages/vdsm/gluster/api.py", line 89, in wrapper
    rv = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/gluster/api.py", line 507, in tasksList
    status = self.svdsmProxy.glusterTasksList(taskIds)
  File "/usr/lib/python2.7/site-packages/vdsm/supervdsm.py", line 53, in __call__
    return callMethod()
  File "/usr/lib/python2.7/site-packages/vdsm/supervdsm.py", line 50, in
    getattr(self._supervdsmProxy._svdsm, self._funcName)(*args,
AttributeError: 'AutoProxy[instance]' object has no attribute 'glusterTasksList'

On Wed, 16 Aug 2017 16:08:24 +0300, Vadim wrote:
> Hi, All
>
> ovirt 4.1.4 fresh install.
> Constantly seeing this message in the logs; how do I fix this?
>
> VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]' object has no attribute 'glusterTasksList'
> VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]' object has no attribute 'glusterTasksList'
> VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]' object has no attribute 'glusterTasksList'
> VDSM kvm04 command GetGlusterVolumeAdvancedDetailsVDS failed: 'AutoProxy[instance]' object has no attribute 'glusterVolumeStatvfs'
> VDSM kvm04 command GetGlusterVolumeAdvancedDetailsVDS failed: 'AutoProxy[instance]' object has no attribute 'glusterVolumeStatvfs'
> VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]' object has no attribute 'glusterTasksList'
> VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]' object has no attribute 'glusterTasksList'
> VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]' object has no attribute 'glusterTasksList'
>
> --
> Thanks,
> Vadim

--
Thanks,
Vadim
[ovirt-users] glusterfs Error message constantly being reported
Hi, All

ovirt 4.1.4 fresh install
I'm constantly seeing these messages in the logs; how can I fix this?

VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]' object has no attribute 'glusterTasksList'
VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]' object has no attribute 'glusterTasksList'
VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]' object has no attribute 'glusterTasksList'
VDSM kvm04 command GetGlusterVolumeAdvancedDetailsVDS failed: 'AutoProxy[instance]' object has no attribute 'glusterVolumeStatvfs'
VDSM kvm04 command GetGlusterVolumeAdvancedDetailsVDS failed: 'AutoProxy[instance]' object has no attribute 'glusterVolumeStatvfs'
VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]' object has no attribute 'glusterTasksList'
VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]' object has no attribute 'glusterTasksList'
VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]' object has no attribute 'glusterTasksList'

--
Thanks,
Vadim
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
[ovirt-users] Live migration error in 4.1.2 (next attempt)
Can anybody help me solve this? I'm having trouble with live migration: it always finishes with an error. Possibly relevant: on the dashboard, the cluster status is always N/A. The VM can run on both hosts. After turning on debug logging for libvirt, I got these errors:

2017-06-06 09:41:04.842+0000: 1302: error : qemuDomainObjBeginJobInternal:3107 : Timed out during operation: cannot acquire state change lock (held by remoteDispatchDomainMigratePrepare3Params)
2017-06-06 09:42:04.847+0000: 1305: error : qemuDomainObjBeginJobInternal:3107 : Timed out during operation: cannot acquire state change lock (held by remoteDispatchDomainMigratePrepare3Params)
2017-06-06 09:43:04.850+0000: 1304: error : qemuDomainObjBeginJobInternal:3107 : Timed out during operation: cannot acquire state change lock (held by remoteDispatchDomainMigratePrepare3Params)
2017-06-06 09:44:04.841+0000: 1301: error : qemuDomainObjBeginJobInternal:3107 : Timed out during operation: cannot acquire state change lock (held by remoteDispatchDomainMigratePrepare3Params)
2017-06-06 09:44:25.373+0000: 10320: error : qemuDomainObjBeginJobInternal:3107 : Timed out during operation: cannot acquire state change lock (held by remoteDispatchDomainMigratePrepare3Params)
2017-06-06 09:44:55.373+0000: 10320: error : qemuDomainObjBeginJobInternal:3107 : Timed out during operation: cannot acquire state change lock (held by remoteDispatchDomainMigratePrepare3Params)
2017-06-06 09:45:04.851+0000: 1303: error : qemuDomainObjBeginJobInternal:3107 : Timed out during operation: cannot acquire state change lock (held by remoteDispatchDomainMigratePrepare3Params)
2017-06-06 09:45:25.373+0000: 10320: error : qemuDomainObjBeginJobInternal:3107 : Timed out during operation: cannot acquire state change lock (held by remoteDispatchDomainMigratePrepare3Params)
2017-06-06 09:46:04.852+0000: 1302: error : qemuDomainObjBeginJobInternal:3107 : Timed out during operation: cannot acquire state change lock (held by remoteDispatchDomainMigratePrepare3Params)
2017-06-06 09:47:04.858+0000: 1305: error : qemuDomainObjBeginJobInternal:3107 : Timed out during operation: cannot acquire state change lock (held by remoteDispatchDomainMigratePrepare3Params)
2017-06-06 09:47:19.950+0000: 1263: error : qemuMonitorIO:695 : internal error: End of file from monitor
2017-06-06 09:47:19.951+0000: 1263: error : qemuProcessReportLogError:1810 : internal error: qemu unexpectedly closed the monitor: 2017-06-06T09:40:26.681446Z qemu-kvm: warning: CPU(s) not present in any NUMA nodes: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15

oVirt 4.1.1 clean install, upgraded to 4.1.2. I tried different migration policies, but all of them ended with an error. A libvirt debug log is attached.

# rpm -qa | grep -e libvirt -e qemu | sort
centos-release-qemu-ev-1.0-1.el7.noarch
ipxe-roms-qemu-20160127-5.git6366fa7a.el7.noarch
libvirt-client-2.0.0-10.el7_3.9.x86_64
libvirt-daemon-2.0.0-10.el7_3.9.x86_64
libvirt-daemon-config-nwfilter-2.0.0-10.el7_3.9.x86_64
libvirt-daemon-driver-interface-2.0.0-10.el7_3.9.x86_64
libvirt-daemon-driver-network-2.0.0-10.el7_3.9.x86_64
libvirt-daemon-driver-nodedev-2.0.0-10.el7_3.9.x86_64
libvirt-daemon-driver-nwfilter-2.0.0-10.el7_3.9.x86_64
libvirt-daemon-driver-qemu-2.0.0-10.el7_3.9.x86_64
libvirt-daemon-driver-secret-2.0.0-10.el7_3.9.x86_64
libvirt-daemon-driver-storage-2.0.0-10.el7_3.9.x86_64
libvirt-daemon-kvm-2.0.0-10.el7_3.9.x86_64
libvirt-lock-sanlock-2.0.0-10.el7_3.9.x86_64
libvirt-python-2.0.0-2.el7.x86_64
qemu-guest-agent-2.5.0-3.el7.x86_64
qemu-img-ev-2.6.0-28.el7_3.9.1.x86_64
qemu-kvm-common-ev-2.6.0-28.el7_3.9.1.x86_64
qemu-kvm-ev-2.6.0-28.el7_3.9.1.x86_64
qemu-kvm-tools-ev-2.6.0-28.el7_3.9.1.x86_64

--
Thanks,
Vadim

migration.tar.bz2
Description: application/bzip
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
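As an aside, the qemu warning at the end of the log ("CPU(s) not present in any NUMA nodes: 1 2 3 ... 15") typically appears when the guest's maximum vCPU count (here 16, for CPU hot-plug) exceeds the vCPUs covered by its NUMA cells; the warning itself is usually harmless and is separate from the state-change-lock timeout. A sketch of libvirt domain XML that declares a single NUMA cell covering all 16 possible vCPUs -- the memory size is a placeholder, not a value from this thread:

```xml
<cpu>
  <topology sockets='16' cores='1' threads='1'/>
  <numa>
    <!-- one cell spanning every hot-pluggable vCPU; memory is in KiB -->
    <cell id='0' cpus='0-15' memory='4194304'/>
  </numa>
</cpu>
```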
Re: [ovirt-users] Question about for oVirt AAA LDAP plugin - configuration file format
Good morning, Ondra. Thank you for the fast reply; this example perfectly answers my question!

Respectfully yours,
Vadim Reutskiy.
E-mail: v.reut...@innopolis.ru
Skype: vadim.reutskiy
Phone: +7 (927) 289-48-19

From: Ondra Machacek
Sent: Tuesday, July 19, 2016 9:09 PM
To: Vadim Reutsky; users@ovirt.org
Subject: Re: [ovirt-users] Question about for oVirt AAA LDAP plugin - configuration file format

On 07/19/2016 02:57 PM, Vadim Reutsky wrote:
> Good afternoon!
>
> First of all, thank you for writing very useful software, which we are
> applying in our studio project at Innopolis University. It makes our
> lives much easier.
>
> I have one little question about the oVirt AAA LDAP plugin. Can you provide
> the structure of the configuration file that can be passed to the
> 'ovirt-engine-extension-aaa-ldap-setup' tool via the '--config=filename'
> parameter? (Or please give me a clue as to where I can find this
> information.)

Here is one of the examples:

[environment:default]
# Profile IPA
OVAAALDAP_LDAP/profile=str:ipa
# don't use DNS
OVAAALDAP_LDAP/useDNS=bool:False
# profile name is ipa
OVAAALDAP_LDAP/aaaProfileName=str:ipa
# use a round-robin server set
OVAAALDAP_LDAP/serverset=str:round-robin
# plain connection
OVAAALDAP_LDAP/protocol=str:plain
# IPA hosts
OVAAALDAP_LDAP/hosts=str:ipa1.example.com ipa2.example.com
# search user
OVAAALDAP_LDAP/user=str:uid=user1,cn=users,cn=accounts,dc=example,dc=com
# search user password
OVAAALDAP_LDAP/password=str:password
# don't use the tester tool
OVAAALDAP_LDAP/toolEnable=bool:False
OVAAALDAP_LDAP/configOverwrite=bool:True

Unfortunately, the only way to find out these constants is to check the source of 'ovirt-engine-extension-aaa-ldap-setup'.

> Thank you in advance!
>
> Respectfully yours,
> Vadim Reutskiy.
> E-mail: v.reut...@innopolis.ru > Skype: vadim.reutskiy > Phone: +7 (927) 289-48-19 > > > ___ > Users mailing list > Users@ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
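The examples in this thread all follow the same otopi-style answer-file convention: each line is `key=TYPE:value`, where TYPE is `str`, `bool`, and so on. A minimal illustrative parser for that convention (this is a sketch for clarity, not the actual otopi implementation):

```python
def parse_answer_line(line):
    """Parse one otopi-style answer-file line, e.g.
    'OVAAALDAP_LDAP/useDNS=bool:False' -> ('OVAAALDAP_LDAP/useDNS', False).
    Illustrative only; the real parsing lives in otopi."""
    key, _, typed = line.partition("=")
    typ, _, value = typed.partition(":")
    if typ == "str":
        parsed = value
    elif typ == "bool":
        parsed = value == "True"
    elif typ == "int":
        parsed = int(value)
    else:
        raise ValueError("unsupported value type: %r" % typ)
    return key.strip(), parsed


print(parse_answer_line("OVAAALDAP_LDAP/profile=str:ipa"))
print(parse_answer_line("OVAAALDAP_LDAP/useDNS=bool:False"))
```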
Re: [ovirt-users] Question about for oVirt AAA LDAP plugin - configuration file format
Alexey, thanks for the reply. Perhaps I didn't phrase my question precisely enough. Unfortunately, the README you pointed to says only that the 'ovirt-engine-extension-aaa-ldap-setup' utility can be used as an alternative to manual configuration; it says nothing about the format of the configuration file that should be passed to it. I have since figured this out myself. I think it would be useful to add this information, in one form or another, to the README. The approximate structure of the file is:

[environment:default]
OVAAALDAP_LDAP/aaaProfileName=str:saml_auth
OVAAALDAP_LDAP/profile=str:ad
OVAAALDAP_LDAP/domain=str:local.ad.domain
OVAAALDAP_LDAP/protocol=str:plain
OVAAALDAP_LDAP/user=str:user@ad.domain
OVAAALDAP_LDAP/password=str:password

Respectfully yours,
Vadim Reutskiy.
E-mail: v.reut...@innopolis.ru
Skype: vadim.reutskiy
Phone: +7 (927) 289-48-19

From: Николаев Алексей
Sent: Tuesday, July 19, 2016 8:28 PM
To: Vadim Reutsky; users@ovirt.org
Subject: Re: [ovirt-users] Question about for oVirt AAA LDAP plugin - configuration file format

How about https://github.com/oVirt/ovirt-engine-extension-aaa-ldap/blob/master/README ? In particular, there are examples on the engine host in /usr/share/ovirt-engine-extension-aaa-ldap/examples.

19.07.2016, 18:40, "Vadim Reutsky":
Good afternoon!

First of all, thank you for writing very useful software, which we are applying in our studio project at Innopolis University. It makes our lives much easier.

I have one little question about the oVirt AAA LDAP plugin. Can you provide the structure of the configuration file that can be passed to the 'ovirt-engine-extension-aaa-ldap-setup' tool via the '--config=filename' parameter? (Or please give me a clue as to where I can find this information.)

Thank you in advance!

Respectfully yours,
Vadim Reutskiy.
E-mail: v.reut...@innopolis.ru<mailto:v.reut...@innopolis.ru> Skype: vadim.reutskiy Phone: +7 (927) 289-48-19 , ___ Users mailing list Users@ovirt.org<mailto:Users@ovirt.org> http://lists.ovirt.org/mailman/listinfo/users ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
[ovirt-users] Question about for oVirt AAA LDAP plugin - configuration file format
Good afternoon!

First of all, thank you for writing very useful software, which we are applying in our studio project at Innopolis University. It makes our lives much easier.

I have one little question about the oVirt AAA LDAP plugin. Can you provide the structure of the configuration file that can be passed to the 'ovirt-engine-extension-aaa-ldap-setup' tool via the '--config=filename' parameter? (Or please give me a clue as to where I can find this information.)

Thank you in advance!

Respectfully yours,
Vadim Reutskiy.
E-mail: v.reut...@innopolis.ru
Skype: vadim.reutskiy
Phone: +7 (927) 289-48-19
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] Bad performance with Windows 2012 guests
On Mon, 2015-05-18 at 10:41 -0500, Sven Achtelik wrote:
> Hi Vadim,
>
> could you reproduce my issue with your system? Do you have any advice on
> getting more performance with Win2012?
>
> Thank you,
>
> Sven

Hi Sven,

Sorry for the delay in my response. I got pretty much mixed results comparing WS2008R2 vs. WS2012R2 running on RHEL 6.5, as well as WS2012R2 running on RHEL 6.5 vs. WS2012R2 on top of RHEL 7.0. As you can see, no matter what, on my setup WS2012R2 always performs better than WS2008R2. However, read performance is really poor for WS2012R2 running on top of a RHEL 6.5 host. I'm still running some tests to get a better understanding of this issue.

Best regards,
Vadim.

Write throughput (MBps):

Queue depth   RHEL 6.5 / 2012R2   RHEL 6.5 / 2008R2   RHEL 7 / 2012R2
  1           37.216108           21.226565            35.023852
  2           49.235277           25.07046             49.721528
  4           74.126032           26.803523            70.124819
  8           85.755117           33.16154             84.640384
 16           85.082841           41.244631            88.212587
 32           98.192543           39.191664           101.577272
 64           94.408442           39.071777            98.469119
128           -                   -                    -

Read throughput (MBps):

Queue depth   RHEL 6.5 / 2012R2   RHEL 6.5 / 2008R2   RHEL 7 / 2012R2
  1            3.901276            6.144556            10.166953
  2            2.439766            5.48212              6.111812
  4            2.42859             5.695189             6.873459
  8            2.454445            5.573197             6.503373
 16            2.501744            5.860625             6.628163
 32            3.077649            7.360166            10.912386
 64            6.61292            10.185036            13.890179
124            -                   -                    -

> -Ursprüngliche Nachricht-
> Von: Vadim Rozenfeld [mailto:vroze...@redhat.com]
> Gesendet: Dienstag, 5.
Mai 2015 02:57 > An: Sven Achtelik > Cc: Doron Fediuck; Martijn Grendelman; Karen Noel; users@ovirt.org > Betreff: Re: AW: AW: AW: Bad performance with Windows 2012 guests > > On Mon, 2015-05-04 at 03:32 -0500, Sven Achtelik wrote: > > Hi Vadim, > > > > the command line: > > > > /usr/libexec/qemu-kvm -name wc_db01 -S -machine rhel6.5.0,accel=kvm,usb=off > > -cpu Westmere -m 12288 -realtime mlock=off -smp > > 2,maxcpus=16,sockets=16,cores=1,threads=1 -uuid > > fbbdc0a0-23a4-4d32-a526-a35c59eb790d -smbios > > type=1,manufacturer=oVirt,product=oVirt > > Node,version=7-1.1503.el7.centos.2.8,serial=4C4C4544-0035-4E10-8034-B4C04F4B4E31,uuid=fbbdc0a0-23a4-4d32-a526-a35c59eb790d > > -no-user-config -nodefaults -chardev > > socket,id=charmonitor,path=/var/lib/libvirt/qemu/wc_db01.monitor,server,nowait > > -mon chardev=charmonitor,id=monitor,mode=control -rtc > > base=2015-05-04T03:26:39,driftfix=slew -global > > kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -boot strict=on > > -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device > > virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device > > virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5 -drive > > file=/rhev/data-center/mnt/ovirt-engine.mgmt.asl.local:_var_lib_exports_iso/d1559536-71da-4b7a-ad71-171b0b528d7f/images/----/SVR2012EVAL.iso,if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= > > -device > > ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=2 -drive > > file=/rhev/data-center/0002-0002-0002-0002-03e2/a7d4ddb9-4486-4e37-b524-29625d6a7e61/images/23672c7f-ec3c-4686-bc29-89a0f95eae1c/9741917b-9134-4e14-892d-d16abf13e406,if=none,id=drive-virtio-disk0,format=raw,serial=23672c7f-ec3c-4686-bc29-89a0f95eae1c,cache=none,werror=stop,rerror=stop,aio=native > > -device > > virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 > > -drive > > 
file=/rhev/data-center/0002-0002-0002-0002-03e2/a7d4ddb9-4486-4e37-b524-29625d6a7e61/images/238e79c3-378b-4117-9b6d-18f73832f286/a8730e05-ed95-4d41-a10d-e249b601ebd3,if=none,id=drive-virtio-disk1,format=qcow2,serial=238e79c3-378b-4117-9b6d-18f73832f286,cache=none,werror=stop,rerror=stop,aio=native > > -device > > virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk1,id=virtio-disk1 > > -netdev tap,fd=29,id=hostnet0,vhost=on,vhostfd=30 -device > > virtio-net-pci,netdev=hostnet0,id=net0,mac=00:00:1a:4a:ae:02,bus=pci.0,addr=0x3 > > -chardev > > socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/fbbdc0a0-23a4-4d32-a526-a35c59eb790d.com.redhat.rhevm.vdsm,server,nowait > > -device > > virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm > >
Re: [ovirt-users] Bad performance with Windows 2012 guests
On Mon, 2015-05-04 at 03:32 -0500, Sven Achtelik wrote: > Hi Vadim, > > the command line: > > /usr/libexec/qemu-kvm -name wc_db01 -S -machine rhel6.5.0,accel=kvm,usb=off > -cpu Westmere -m 12288 -realtime mlock=off -smp > 2,maxcpus=16,sockets=16,cores=1,threads=1 -uuid > fbbdc0a0-23a4-4d32-a526-a35c59eb790d -smbios > type=1,manufacturer=oVirt,product=oVirt > Node,version=7-1.1503.el7.centos.2.8,serial=4C4C4544-0035-4E10-8034-B4C04F4B4E31,uuid=fbbdc0a0-23a4-4d32-a526-a35c59eb790d > -no-user-config -nodefaults -chardev > socket,id=charmonitor,path=/var/lib/libvirt/qemu/wc_db01.monitor,server,nowait > -mon chardev=charmonitor,id=monitor,mode=control -rtc > base=2015-05-04T03:26:39,driftfix=slew -global > kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -boot strict=on > -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device > virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device > virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5 -drive > file=/rhev/data-center/mnt/ovirt-engine.mgmt.asl.local:_var_lib_exports_iso/d1559536-71da-4b7a-ad71-171b0b528d7f/images/----/SVR2012EVAL.iso,if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= > -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=2 > -drive > file=/rhev/data-center/0002-0002-0002-0002-03e2/a7d4ddb9-4486-4e37-b524-29625d6a7e61/images/23672c7f-ec3c-4686-bc29-89a0f95eae1c/9741917b-9134-4e14-892d-d16abf13e406,if=none,id=drive-virtio-disk0,format=raw,serial=23672c7f-ec3c-4686-bc29-89a0f95eae1c,cache=none,werror=stop,rerror=stop,aio=native > -device > virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 > -drive > file=/rhev/data-center/0002-0002-0002-0002-03e2/a7d4ddb9-4486-4e37-b524-29625d6a7e61/images/238e79c3-378b-4117-9b6d-18f73832f286/a8730e05-ed95-4d41-a10d-e249b601ebd3,if=none,id=drive-virtio-disk1,format=qcow2,serial=238e79c3-378b-4117-9b6d-18f73832f286,cache=none,werror=stop,rerror=stop,aio=native > -device > 
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk1,id=virtio-disk1 > -netdev tap,fd=29,id=hostnet0,vhost=on,vhostfd=30 -device > virtio-net-pci,netdev=hostnet0,id=net0,mac=00:00:1a:4a:ae:02,bus=pci.0,addr=0x3 > -chardev > socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/fbbdc0a0-23a4-4d32-a526-a35c59eb790d.com.redhat.rhevm.vdsm,server,nowait > -device > virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm > -chardev > socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/fbbdc0a0-23a4-4d32-a526-a35c59eb790d.org.qemu.guest_agent.0,server,nowait > -device > virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 > -chardev pty,id=charconsole0 -device > virtconsole,chardev=charconsole0,id=console0 -device usb-tablet,id=input0 > -vnc 172.16.1.14:2,password -k en-us -vga cirrus -device > virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x8 -msg timestamp=on > > Sven > Thanks a lot. I will try trace this issue on my local setup. Best regards, Vadim. > -Ursprüngliche Nachricht- > Von: Vadim Rozenfeld [mailto:vroze...@redhat.com] > Gesendet: Montag, 4. Mai 2015 05:00 > An: Sven Achtelik > Cc: Doron Fediuck; Martijn Grendelman; Karen Noel; users@ovirt.org > Betreff: Re: AW: AW: Bad performance with Windows 2012 guests > > On Sun, 2015-05-03 at 07:46 -0500, Sven Achtelik wrote: > > Hi Vadim, > > > > I've tested the performance with CrystalDiskMark from inside the Windows > > guest. Using Win2k8 R2 I got expected values for my system, about 88 MB/s > > on 4k random with 32 queues and 500MB/s + sequential writes with 32 queues. > > Using a Windows 2012 VM on the same system it's only 33MB/s on 4k random > > with 32 queues and 300MB/s sequential writes. Similar tests with a linux VM > > show a bit better values than the Win2k8 R2 and respond ultra-fast. 
> > > > My hosts are connected via iSCSI using a 10 GbE link and a ZFS appliance as > > the storage system. All tests have been run several times with the same > > results. > > Sven, > Can I ask you to post the Windows 2012 VM qemu command line? > > Thanks, > Vadim. > > > > > Sven > > > > -Ursprüngliche Nachricht- > > Von: Vadim Rozenfeld [mailto:vroze...@redhat.com] > > Gesendet: Sonntag, 3. Mai 2015 14:35 > > An: Sven Achtelik > > Cc: Doron Fediuck; Martijn Grendelman; Karen Noel; users@ovirt.org > > Betreff: Re: AW: Bad performance with Windows 2012 guests > > > > On Sun, 2015-05-03 at 06:48 -0500, Sven Achtelik wrote: > >
Re: [ovirt-users] Bad performance with Windows 2012 guests
On Mon, 2015-05-04 at 08:28 -0400, Michal Skrivanek wrote: > > On 4 May 2015, at 09:35, Martijn Grendelman > wrote: > > > Hi, > > > > > >>> Ever since our first Windows Server 2012 deployment on oVirt (3.4 back > >>> then, now 3.5.1), I have noticed that working on these VMs via RDP or on > >>> the console via VNC is noticeably slower than on Windows 2008 guests on > >>> the same oVirt environment. > > [snip] > >>> > >>> Does anyone share this experience? > >>> Any idea why this could happen and how it can be fixed? > >>> Any other information I should share to get a better idea? > >> Hi Martijn, > >> Can you please provide the QEMU command line, together with kvm and qemu > >> version? > >> > >> This information will be helpful for reproducing the problem. > >> However, if the problem is not reproducible on a local setup, we will > >> probably need to ask collecting some performance information with > >> xperf tool. > > > > Sure! > > > > Command line is this: > > > > /usr/libexec/qemu-kvm -name Getafix -S -M rhel6.5.0 -cpu Penryn,hv_relaxed > > -enable-kvm -m 2048 -realtime mlock=off -smp > > 2,maxcpus=16,sockets=16,cores=1,threads=1 -uuid > > 34951c25-9a37-4712-a16a-fdfc98f4febc -smbios > > type=1,manufacturer=oVirt,product=oVirt > > Node,version=6-6.el6.centos.12.2,serial=44454C4C-3400-1058-804C-B1C04F42344A,uuid=34951c25-9a37-4712-a16a-fdfc98f4febc > > -nodefconfig -nodefaults -chardev > > socket,id=charmonitor,path=/var/lib/libvirt/qemu/Getafix.monitor,server,nowait > > -mon chardev=charmonitor,id=monitor,mode=control -rtc > > base=2015-01-12T11:14:02,clock=vm,driftfix=slew -no-kvm-pit-reinjection > > -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device > > virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x4 -drive > > if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,serial= > > -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive > > 
file=/rhev/data-center/aefd5844-6e01-4070-b3b9-c0d73cc40c78/52678e67-a202-4306-b7ed-5fed8df10edf/images/28cc9a6c-6f2e-4b09-b361-f2a09f27dbc5/4c7b571e-4b29-47b9-ab4b-5799d64f28f9,if=none,id=drive-virtio-disk0,format=raw,serial=28cc9a6c-6f2e-4b09-b361-f2a09f27dbc5,cache=none,werror=stop,rerror=stop,aio=threads -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=41,id=hostnet0,vhost=on,vhostfd=43 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:74:59:a2,bus=pci.0,addr=0x3 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/34951c25-9a37-4712-a16a-fdfc98f4febc.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/34951c25-9a37-4712-a16a-fdfc98f4febc.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -device usb-tablet,id=input0 -vnc 172.17.6.14:7,password -k en-us -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 -msg timestamp=on
> >
> > Qemu version:
> >
> > qemu-kvm-rhev-0.12.1.2-2.415.el6_5.14.x86_64
> >
> > Please let me know if I can do more to help!
>
> How about trying virtio-scsi? Same difference?
>
> We'll be supporting virtio-blk dataplane in 3.6; that may affect the
> performance significantly.
> Also an EL7 hypervisor could change results a lot. Do you have any at hand
> to give it a try?
> Well, also Hyper-V enlightenment? Not sure, but worth a try. It's currently
> disabled in the osinfo entry for Win8/2012; can you try that? (on a
> non-production VM ;)

It's worth trying, especially the hv_time flag.

Vadim.

> Thanks,
> michal
>
> > Best regards,
> > Martijn.
> > ___ > > Users mailing list > > Users@ovirt.org > > http://lists.ovirt.org/mailman/listinfo/users ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
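For reference, the Hyper-V enlightenments discussed in this thread map to libvirt domain XML roughly as below. This is a sketch using standard libvirt feature names; which of them are actually available depends on the libvirt/qemu versions in use (on this VM's command line, only hv_relaxed is enabled):

```xml
<features>
  <hyperv>
    <relaxed state='on'/>                   <!-- hv_relaxed, already enabled here -->
    <vapic state='on'/>                     <!-- hv_vapic -->
    <spinlocks state='on' retries='8191'/>  <!-- hv_spinlocks -->
  </hyperv>
</features>
<clock offset='utc'>
  <timer name='hypervclock' present='yes'/> <!-- hv_time, the flag singled out above -->
</clock>
```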
Re: [ovirt-users] Bad performance with Windows 2012 guests
On Mon, 2015-05-04 at 09:34 +0200, Martijn Grendelman wrote: > Hi, > > > >> Ever since our first Windows Server 2012 deployment on oVirt (3.4 back > >> then, now 3.5.1), I have noticed that working on these VMs via RDP or on > >> the console via VNC is noticeably slower than on Windows 2008 guests on > >> the same oVirt environment. > [snip] > >> > >> Does anyone share this experience? > >> Any idea why this could happen and how it can be fixed? > >> Any other information I should share to get a better idea? > >> > > Hi Martijn, > > Can you please provide the QEMU command line, together with kvm and qemu > > version? > > > > This information will be helpful for reproducing the problem. > > However, if the problem is not reproducible on a local setup, we will > > probably need to ask collecting some performance information with > > xperf tool. > > Sure! > > Command line is this: > > /usr/libexec/qemu-kvm -name Getafix -S -M rhel6.5.0 -cpu > Penryn,hv_relaxed -enable-kvm -m 2048 -realtime mlock=off -smp > 2,maxcpus=16,sockets=16,cores=1,threads=1 -uuid > 34951c25-9a37-4712-a16a-fdfc98f4febc -smbios > type=1,manufacturer=oVirt,product=oVirt > Node,version=6-6.el6.centos.12.2,serial=44454C4C-3400-1058-804C-B1C04F42344A,uuid=34951c25-9a37-4712-a16a-fdfc98f4febc > > -nodefconfig -nodefaults -chardev > socket,id=charmonitor,path=/var/lib/libvirt/qemu/Getafix.monitor,server,nowait > > -mon chardev=charmonitor,id=monitor,mode=control -rtc > base=2015-01-12T11:14:02,clock=vm,driftfix=slew -no-kvm-pit-reinjection > -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 > -device > virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x4 > -drive > if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,serial= > -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 > -drive > 
file=/rhev/data-center/aefd5844-6e01-4070-b3b9-c0d73cc40c78/52678e67-a202-4306-b7ed-5fed8df10edf/images/28cc9a6c-6f2e-4b09-b361-f2a09f27dbc5/4c7b571e-4b29-47b9-ab4b-5799d64f28f9,if=none,id=drive-virtio-disk0,format=raw,serial=28cc9a6c-6f2e-4b09-b361-f2a09f27dbc5,cache=none,werror=stop,rerror=stop,aio=threads > > -device > virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 > > -netdev tap,fd=41,id=hostnet0,vhost=on,vhostfd=43 -device > virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:74:59:a2,bus=pci.0,addr=0x3 > > -chardev > socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/34951c25-9a37-4712-a16a-fdfc98f4febc.com.redhat.rhevm.vdsm,server,nowait > > -device > virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm > > -chardev > socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/34951c25-9a37-4712-a16a-fdfc98f4febc.org.qemu.guest_agent.0,server,nowait > > -device > virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 > > -device usb-tablet,id=input0 -vnc 172.17.6.14:7,password -k en-us -vga > cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 -msg > timestamp=on > > Qemu version: > > qemu-kvm-rhev-0.12.1.2-2.415.el6_5.14.x86_64 > > Please let me know if I can do more to help! > Thank you Martijn, Just curious, when opening Device Manager dialog, do you see "High precision event timer" device under "System devices" category? Best regards, Vadim. > Best regards, > Martijn. ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] Bad performance with Windows 2012 guests
On Sun, 2015-05-03 at 07:46 -0500, Sven Achtelik wrote: > Hi Vadim, > > I've tested the performance with CrystalDiskMark from inside the Windows > guest. Using Win2k8 R2 I got expected values for my system, about 88 MB/s on > 4k random with 32 queues and 500MB/s + sequential writes with 32 queues. > Using a Windows 2012 VM on the same system it's only 33MB/s on 4k random with > 32 queues and 300MB/s sequential writes. Similar tests with a linux VM show a > bit better values than the Win2k8 R2 and respond ultra-fast. > > My hosts are connected via iSCSI using a 10 GbE link and a ZFS appliance as > the storage system. All tests have been run several times with the same > results. Sven, Can I ask you to post the Windows 2012 VM qemu command line? Thanks, Vadim. > > Sven > > -Ursprüngliche Nachricht- > Von: Vadim Rozenfeld [mailto:vroze...@redhat.com] > Gesendet: Sonntag, 3. Mai 2015 14:35 > An: Sven Achtelik > Cc: Doron Fediuck; Martijn Grendelman; Karen Noel; users@ovirt.org > Betreff: Re: AW: Bad performance with Windows 2012 guests > > On Sun, 2015-05-03 at 06:48 -0500, Sven Achtelik wrote: > > Hi Doron, > > > > I've also noticed that there seems to be a difference in performance > > between Win2k8 R2/Linux and Windows Server 2012. After reading Martijns > > post I've done some speed test regarding the drive speeds and was looking > > for a way to compare the VMs on a more professional way. My tests showed, > > that on the same hardware, the Win2k8 R2 was faster in response and > > throughput on the disks. I found a utility that somehow measures the > > latency on a system and that also showed a significant difference. What is > > the correct way to do a performance test on a VM running in KVM ? > > > > Sven > > > Hi Sven, > > Can you specify the type of disk on your system - ide, virtio-blk, or > virtio-scsi? > We usually use iometer for disk performance profiling. Sequential read/write, > block size from 4K up to 256K, queue depth from 1 to 64. > > Vadim. 
> > -Ursprüngliche Nachricht- > > Von: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] Im > > Auftrag von Doron Fediuck > > Gesendet: Samstag, 2. Mai 2015 10:50 > > An: Martijn Grendelman > > Cc: Karen Noel; users@ovirt.org; Vadim Rozenfeld > > Betreff: Re: [ovirt-users] Bad performance with Windows 2012 guests > > > > > > On Apr 30, 2015 14:03, Martijn Grendelman > > wrote: > > > > > > Hi, > > > > > > Ever since our first Windows Server 2012 deployment on oVirt (3.4 > > > back then, now 3.5.1), I have noticed that working on these VMs via > > > RDP or on the console via VNC is noticeably slower than on Windows > > > 2008 guests on the same oVirt environment. > > > > > > Basic things like starting an application (even the Server Manager > > > that get started automatically on login) take a very long time, > > > sometimes minutes. Everything is just... slow. > > > > > > We have recently deployed Microsoft Exchange on a Windows Server > > > 2012 guest on RHEV, and it doesn't perform well at all. > > > > > > I haven't been able to find the cause for this slowness; CPU usage > > > is not excessive and it doesn't seem I/O related. Moreover, other > > > types of guests (Linux and even Windows 2008) do not have this problem. > > > > > > We have 3 different environments: > > > - oVirt 3.5.1, on old Dell servers with Penryn Family CPUs with > > > fairly slow storage on replicated GlusterFS, running CentOS 6.6 > > > - oVirt 3.5.1, on modern 6-core SandyBridge servers with local > > > storage via NFS, running CentOS 7.0) > > > - RHEV 3.4.4 on modern 10-core SandyBridge servers with an iSCSI SAN > > > behind it, running on RHEV Hypervisor 6.5 > > > > > > All of these -very different- environments expose the same behaviour: > > > Linux, Windows 2008 fast (or as fast as can be expected given the > > > hardware), Windows 2012 painfully slow. > > > > > > All Windows 2012 servers use VirtIO disk and network. I think all > > > drivers are from the virtio-win-0.1-74 ISO. 
> > > > > > Does anyone share this experience? > > > Any idea why this could happen and how it can be fixed? > > > Any other information I should share to get a better idea? > > > > > > Btw, for the guests on the RHEV environment, we have a case with > > > RedHat support, but that doesn't seem to
Re: [ovirt-users] Bad performance with Windows 2012 guests
On Sun, 2015-05-03 at 06:48 -0500, Sven Achtelik wrote: > Hi Doron, > > I've also noticed that there seems to be a difference in performance between > Win2k8 R2/Linux and Windows Server 2012. After reading Martijns post I've > done some speed test regarding the drive speeds and was looking for a way to > compare the VMs on a more professional way. My tests showed, that on the same > hardware, the Win2k8 R2 was faster in response and throughput on the disks. I > found a utility that somehow measures the latency on a system and that also > showed a significant difference. What is the correct way to do a performance > test on a VM running in KVM ? > > Sven > Hi Sven, Can you specify the type of disk on your system - ide, virtio-blk, or virtio-scsi? We usually use iometer for disk performance profiling. Sequential read/write, block size from 4K up to 256K, queue depth from 1 to 64. Vadim. > -Ursprüngliche Nachricht- > Von: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] Im Auftrag von > Doron Fediuck > Gesendet: Samstag, 2. Mai 2015 10:50 > An: Martijn Grendelman > Cc: Karen Noel; users@ovirt.org; Vadim Rozenfeld > Betreff: Re: [ovirt-users] Bad performance with Windows 2012 guests > > > On Apr 30, 2015 14:03, Martijn Grendelman wrote: > > > > Hi, > > > > Ever since our first Windows Server 2012 deployment on oVirt (3.4 back > > then, now 3.5.1), I have noticed that working on these VMs via RDP or > > on the console via VNC is noticeably slower than on Windows 2008 > > guests on the same oVirt environment. > > > > Basic things like starting an application (even the Server Manager > > that get started automatically on login) take a very long time, > > sometimes minutes. Everything is just... slow. > > > > We have recently deployed Microsoft Exchange on a Windows Server 2012 > > guest on RHEV, and it doesn't perform well at all. > > > > I haven't been able to find the cause for this slowness; CPU usage is > > not excessive and it doesn't seem I/O related. 
Moreover, other types > > of guests (Linux and even Windows 2008) do not have this problem. > > > > We have 3 different environments: > > - oVirt 3.5.1, on old Dell servers with Penryn Family CPUs with fairly > > slow storage on replicated GlusterFS, running CentOS 6.6 > > - oVirt 3.5.1, on modern 6-core SandyBridge servers with local storage > > via NFS, running CentOS 7.0) > > - RHEV 3.4.4 on modern 10-core SandyBridge servers with an iSCSI SAN > > behind it, running on RHEV Hypervisor 6.5 > > > > All of these -very different- environments expose the same behaviour: > > Linux, Windows 2008 fast (or as fast as can be expected given the > > hardware), Windows 2012 painfully slow. > > > > All Windows 2012 servers use VirtIO disk and network. I think all > > drivers are from the virtio-win-0.1-74 ISO. > > > > Does anyone share this experience? > > Any idea why this could happen and how it can be fixed? > > Any other information I should share to get a better idea? > > > > Btw, for the guests on the RHEV environment, we have a case with > > RedHat support, but that doesn't seem to lead to a quick solution, > > hence I'm writing here, too. > > > > Thanks for any help. > > > > Regards, > > Martijn Grendelman > > Hi Martijn, > Can you please provide the QEMU command line, together with kvm and qemu > version? > > This information will be helpful for reproducing the problem. However, if the > problem is not reproducible on a local setup, we will probably need to ask > collecting some performance information with xperf tool. > > Doron > ___ > Users mailing list > Users@ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
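As a rough Linux-side counterpart to the Iometer profile Vadim describes (sequential read/write, block sizes from 4K up to 256K, queue depth from 1 to 64), the sweep can be expressed with fio. This sketch only prints the fio invocations so they can be reviewed first; the target file, size, and runtime are illustrative assumptions, not values from the thread.

```shell
# Print (do not run) fio commands approximating the Iometer profile above:
# sequential mixed read/write, block sizes 4K-256K, queue depths 1-64.
# Target file, size, and runtime are placeholders.
for bs in 4k 16k 64k 256k; do
  for qd in 1 4 16 64; do
    echo fio --name="seq-${bs}-qd${qd}" --filename=/var/tmp/fio.test \
      --size=1G --rw=rw --bs="$bs" --iodepth="$qd" \
      --ioengine=libaio --direct=1 --runtime=30 --time_based
  done
done
```

Piping the output to sh runs the whole sweep; inside a Windows guest, Iometer itself remains the natural choice.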
Re: [Users] Extremely poor disk access speeds in Windows guest
On Fri, 2014-01-31 at 11:37 -0500, Steve Dainard wrote: > I've reconfigured my setup (good success below, but need clarity on > gluster option): > > > Two nodes total, both running virt and glusterfs storage (2 node > replica, quorum). > > > I've created an NFS storage domain, pointed at the first node's IP > address. I've launched a 2008 R2 SP1 install with a virtio-scsi disk, > and the SCSI pass-through driver on the same node as the NFS domain is > pointing at. > > > Windows guest install has been running for roughly 1.5 hours, still > "Expanding Windows files (55%) ..." [VR] Does it work faster with IDE? Do you have kvm enabled? Thanks, Vadim. > > > top is showing: > PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND > > 3609 root 20 0 1380m 33m 2604 S 35.4 0.1 231:39.75 > glusterfsd > 21444 qemu 20 0 6362m 4.1g 6592 S 10.3 8.7 10:11.53 qemu-kvm > > > > This is a 2 socket, 6 core xeon machine with 48GB of RAM, and 6x > 7200rpm enterprise sata disks in RAID5 so I don't think we're hitting > hardware limitations. > > > dd on xfs (no gluster) > > > time dd if=/dev/zero of=test bs=1M count=2048 > 2048+0 records in > 2048+0 records out > 2147483648 bytes (2.1 GB) copied, 4.15787 s, 516 MB/s > > > real 0m4.351s > user 0m0.000s > sys 0m1.661s > > > > > time dd if=/dev/zero of=test bs=1k count=2000000 > 2000000+0 records in > 2000000+0 records out > 2048000000 bytes (2.0 GB) copied, 4.06949 s, 503 MB/s > > > real 0m4.260s > user 0m0.176s > sys 0m3.991s > > > > > I've enabled nfs.trusted-sync > (http://gluster.org/community/documentation/index.php/Gluster_3.2:_Setting_Volume_Options#nfs.trusted-sync) > on the gluster volume, and the speed difference is immeasurable. Can anyone > explain what this option does, and what the risks are with a 2 node gluster > replica volume with quorum enabled? > > > Thanks, ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
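A caveat on dd figures like the ones above: without a sync or direct flag, writing from /dev/zero largely measures the page cache rather than the disk, which is one way 500 MB/s can show up on 7200rpm RAID5. A minimal sketch with the flags that force data out, assuming GNU dd; the path and sizes are illustrative.

```shell
# conv=fdatasync includes the final flush in the measured time, so the
# reported rate reflects data actually reaching the disk, not the cache.
dd if=/dev/zero of=/var/tmp/ddtest bs=1M count=64 conv=fdatasync
# oflag=direct bypasses the page cache entirely (requires O_DIRECT
# support on the target filesystem):
# dd if=/dev/zero of=/var/tmp/ddtest bs=1M count=64 oflag=direct
rm -f /var/tmp/ddtest
```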
Re: [Users] Extremely poor disk access speeds in Windows guest
On Wed, 2014-01-29 at 12:35 -0500, Steve Dainard wrote: > On Wed, Jan 29, 2014 at 5:11 AM, Vadim Rozenfeld > wrote: > On Wed, 2014-01-29 at 11:30 +0200, Ronen Hod wrote: > > Adding the virtio-scsi developers. > > Anyhow, virtio-scsi is newer and less established than > viostor (the > > block device), so you might want to try it out. > > > [VR] > Was it "SCSI Controller" or "SCSI pass-through controller"? > If it's "SCSI Controller" then it will be viostor (virtio-blk) > device > driver. > > > > > "SCSI Controller" is listed in device manager. > > > Hardware ID's: > PCI\VEN_1AF4&DEV_1004&SUBSYS_00081AF4&REV_00 > PCI\VEN_1AF4&DEV_1004&SUBSYS_00081AF4 There is something strange here. Subsystem ID 0008 means it is a virtio scsi pass-through controller. And you shouldn't be able to install "SCSI Controller" device driver (viostor.sys) on top of "SCSI pass-through Controller". vioscsi.sys should be installed on top of VEN_1AF4&DEV_1004&SUBSYS_00081AF4&REV_00 viostor.sys should be installed on top of VEN_1AF4&DEV_1001&SUBSYS_00021AF4&REV_00 > PCI\VEN_1AF4&DEV_1004&CC_01 > PCI\VEN_1AF4&DEV_1004&CC_0100 > > > > > > A disclaimer: There are time and patches gaps between RHEL > and other > > versions. > > > > Ronen. > > > > On 01/28/2014 10:39 PM, Steve Dainard wrote: > > > > > I've had a bit of luck here. > > > > > > > > > Overall IO performance is very poor during Windows > updates, but a > > > contributing factor seems to be the "SCSI Controller" > device in the > > > guest. This last install I didn't install a driver for > that device, > > > [VR] > Does it mean that your system disk is IDE and the data disk > (virtio-blk) > is not accessible? > > > In Ovirt 3.3.2-1.el6 I do not have an option to add a virtio-blk > device: > Screenshot here: > https://dl.dropboxusercontent.com/u/21916057/Screenshot%20from% > 202014-01-29%2010%3A04%3A57.png my guess is that VirtIO means virtio-blk, and you should use viostor.sys for it. 
VirtIO-SCSI means virtio-scsi, and you need to install vioscsi.sys to make it work in Windows. > > > VM disk drive is "Red Hat VirtIO SCSI Disk Device", storage controller > is listed as "Red Hat VirtIO SCSI Controller" as shown in device > manager. > Screenshot here: > https://dl.dropboxusercontent.com/u/21916057/Screenshot%20from% > 202014-01-29%2009%3A57%3A24.png > > > > In Ovirt manager the disk interface is listed as "VirtIO". > Screenshot > here: https://dl.dropboxusercontent.com/u/21916057/Screenshot%20from% > 202014-01-29%2009%3A58%3A35.png > > > > > and my performance is much better. Updates still chug > along quite > > > slowly, but I seem to have more than the < 100KB/s write > speeds I > > > was seeing previously. > > > > > > > > > Does anyone know what this device is for? I have the "Red > Hat VirtIO > > > SCSI Controller" listed under storage controllers. > > > [VR] > It's a virtio-blk device. The OS cannot see this volume unless you > have the > viostor.sys driver installed on it. > > > Interesting that my VMs can see the controller, but I can't add a > disk for that controller in Ovirt. Is there a package I have missed on > install? > > > rpm -qa | grep ovirt > ovirt-host-deploy-java-1.1.3-1.el6.noarch > ovirt-engine-backend-3.3.2-1.el6.noarch > ovirt-engine-lib-3.3.2-1.el6.noarch > ovirt-engine-restapi-3.3.2-1.el6.noarch > ovirt-engine-sdk-python-3.3.0.8-1.el6.noarch > ovirt-log-collector-3.3.2-2.el6.noarch > ovirt-engine-dbscripts-3.3.2-1.el6.noarch > ovirt-engine-webadmin-portal-3.3.2-1.el6.noarch > ovirt-host-deploy-1.1.3-1.el6.noarch > ovirt-image-uploader-3.3.2-1.el6.noarch > ovirt-engine-websocket-proxy-3.3.2-1.el6.noarch > ovirt-engine-userportal-3.3.2-1.el6.noarch > ovirt-engine-setup-3.3.2-1.el6.noarch > ovirt-iso-uploader-3.3.2-1.el6.noarch > ovirt-engine-cli-3.3.0.6-1.el6.noarch > ovirt-engine-3.3.2-1.el6.noarch > ovirt-engine-tools-3.3.2-1.el6.
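The ID-to-driver mapping Vadim spells out can be condensed into a small helper; a sketch with an invented function name, using the PCI IDs quoted in the thread (vendor 1af4, device 1001 for virtio-blk, 1004 for virtio-scsi).

```shell
# Map virtio storage PCI IDs to the Windows driver that should bind to
# them, per the message above. "1af4" is the Red Hat/virtio vendor ID.
virtio_driver() {
  case "$1" in
    1af4:1001) echo 'viostor.sys (virtio-blk, "VirtIO" disk in oVirt)' ;;
    1af4:1004) echo 'vioscsi.sys (virtio-scsi, "VirtIO-SCSI" disk in oVirt)' ;;
    *)         echo 'not a virtio storage device' ;;
  esac
}
# On a Linux guest the IDs can be pulled straight from lspci -nn output:
lspci -nn 2>/dev/null | grep -o '1af4:10[0-9a-f][0-9a-f]' | \
  while read -r id; do echo "$id -> $(virtio_driver "$id")"; done
```

On Windows the same numbers appear in Device Manager under Hardware IDs, as in the VEN_1AF4&DEV_1004 strings Steve pasted.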
Re: [Users] Extremely poor disk access speeds in Windows guest
would provide). > > > > Increasing bs against NFS mount point (gluster > > backend): > > dd if=/dev/zero of=/mnt/rep2-nfs/test1 bs=128k > > count=16000 > > 16000+0 records in > > 16000+0 records out > > > > 2097152000 bytes (2.1 GB) copied, > > 19.1089 s, 110 MB/s > > > > > > > > GLUSTER host top reports: > >PID USER PR NI VIRT RES SHR S %CPU %MEM > >TIME+ COMMAND > > 2141 root 20 0 550m 183m 2844 R 88.9 2.3 > > 17:30.82 glusterfs > > 2126 root 20 0 1414m 31m 2408 S 46.1 0.4 > > 14:18.18 glusterfsd > > > > So roughly the same performance as 4k writes > > remotely. I'm guessing if I > > could randomize these writes we'd see a large > > difference. > > > > > > Check this thread out, > > > > > > http://raobharata.wordpress.com/2012/10/29/qemu-glusterfs-native-integration/ > > it's > > quite dated but I remember seeing similar > > figures. > > > > In fact when I used FIO on a libgfapi mounted VM > > I got slightly > > faster read/write speeds than on the physical > > box itself (I assume > > because of some level of caching). On NFS it was > > close to half.. > > You'll probably get a little more interesting > > results using FIO > > opposed to dd > > > > ( -Andrew) > > > > > > Sorry Andrew, I meant to reply to your other message > > - it looks like > > CentOS 6.5 can't use libgfapi right now, I stumbled > > across this info in > > a couple threads. Something about how the CentOS > > build has different > > flags set on build for RHEV snapshot support then > > RHEL, so native > > gluster storage domains are disabled because > > snapshot support is assumed > > and would break otherwise. I'm assuming this is > > still valid as I cannot > > get a storage lock when I attempt a gluster storage > > domain. > > > > > > > > > > > > I've setup a NFS storage domain on my desktops SSD. > > I've re-installed > > win 2008 r2 and initially it was running smoother. > > > > Disk performance peaks at 100MB/s. 
> > > > If I copy a 250MB file from a share into the Windows > > VM, it writes out [VR] Do you copy it with Explorer or any other copy program? Do you have HPET enabled? How does it work if you copy from/to local (non-NFS) storage? What is your virtio-win drivers package origin and version? Thanks, Vadim. > > quickly, less than 5 seconds. > > > > If I copy 20 files, ranging in file sizes from 4k to > > 200MB, totaling > > 650MB from the share - windows becomes unresponsive, > > in top the > > desktop's nfs daemon is barely being touched at all, > > and then eventually > > is not hit. I can still interact with the VM's > > windows through the spice > > console. Eventually the file transfer will start and > > rocket through the > > transfer. > > > > I'
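Steve's guess earlier in the thread that randomized writes would show a larger difference than the 128k sequential dd runs can be probed with a small fio job file; a sketch under the assumption that fio is installed, with the path and sizes purely illustrative.

```shell
# Write a fio job describing 4k random writes against the NFS-mounted
# gluster volume, the access pattern most likely to diverge from the
# sequential dd numbers quoted above.
cat > randwrite.fio <<'EOF'
[randw]
filename=/mnt/rep2-nfs/fiotest
size=1G
rw=randwrite
bs=4k
iodepth=16
ioengine=libaio
direct=1
runtime=30
time_based=1
EOF
# run with: fio randwrite.fio
```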
Re: [Users] best disk type for Win XP guests
On Wednesday, January 23, 2013 06:17:16 PM Gianluca Cecchi wrote: > Hello, > I have a Win XP guest configured with one IDE disk. > I would like to move to virtio. Is it supported/usable for Win XP as a > disk type on oVirt? > What are others using in this case, apart from IDE? > My attempt is to add a second 1Gb disk configured as virtio and then, > if successful, change the disk type for the first disk too. > But when powering up the guest it finds new hardware for the second > disk; I point it to the directory > WXP\X86 of the iso using virtio-win-1.1.16.vfd > > It finds the viostor.xxx files but at the end it fails installing the > driver (see > https://docs.google.com/file/d/0BwoPbcrMv8mvMUQ2SWxYZWhSV0E/edit > ) > > Any help/suggestion is welcome. Error code 39 means that the OS cannot load the device driver. On 32-bit platforms it usually happens with corrupted installation media or a platform/architecture mismatch. Vadim. > > Gianluca > ___ > Users mailing list > Users@ovirt.org > http://lists.ovirt.org/mailman/listinfo/users ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
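For the corrupted-media case Vadim mentions, the cheapest first check is a checksum of the downloaded driver image before retrying the install; a sketch, where the printed digest must be compared against the one published alongside the actual virtio-win download (not reproduced here).

```shell
# Print the SHA-256 of the driver floppy image; a mismatch with the
# published digest means re-download before debugging the guest further.
img=virtio-win-1.1.16.vfd
[ -f "$img" ] && sha256sum "$img" || echo "$img not found; fetch it first"
```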