Re: [ovirt-users] Problem starting VMs

2016-09-09 Thread Michal Privoznik
On 07.09.2016 14:02, Petr Horacek wrote:
> 2016-08-30 6:49 GMT+02:00 knarra :
>> On 08/29/2016 07:13 PM, Petr Horacek wrote:
>>>
>>> Hello,
>>>
>>> could you please attach /var/log/vdsm/vdsm.log and
>>> /var/log/vdsm/supervdsm.log here?
>>>
>>> Regards,
>>> Petr
>>
>> Hi Petr,
>>
>> I am not able to send the vdsm and supervdsm logs by mail. I have shared
>> them with you via Dropbox; I hope you received the email.
>>
>> Thanks
>> kasturi
>>
>>>
>>> 2016-08-29 11:51 GMT+02:00 knarra :

 Hi,

  I am unable to launch VMs on one of my hosts. The problem is that the VM is
 stuck at "waiting for launch" and never comes up. I see the following
 messages in /var/log/messages. Can someone help me resolve the issue?


 Aug 29 12:16:20 rhsqa-grafton3 systemd-machined: New machine qemu-20-appwinvm19.
 Aug 29 12:16:20 rhsqa-grafton3 systemd: Started Virtual Machine qemu-20-appwinvm19.
 Aug 29 12:16:20 rhsqa-grafton3 systemd: Starting Virtual Machine qemu-20-appwinvm19.
 Aug 29 12:16:20 rhsqa-grafton3 kvm: 11 guests now active
 Aug 29 12:16:21 rhsqa-grafton3 kernel: device vnet11 entered promiscuous mode
 Aug 29 12:16:21 rhsqa-grafton3 kernel: ovirtmgmt: port 13(vnet11) entered forwarding state
 Aug 29 12:16:21 rhsqa-grafton3 kernel: ovirtmgmt: port 13(vnet11) entered forwarding state
 Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -D PREROUTING -i vnet11 -j libvirt-J-vnet11' failed: Illegal target name 'libvirt-J-vnet11'.
 Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet11 -j libvirt-P-vnet11' failed: Illegal target name 'libvirt-P-vnet11'.
 Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -L libvirt-J-vnet11' failed: Chain 'libvirt-J-vnet11' doesn't exist.
 Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -L libvirt-P-vnet11' failed: Chain 'libvirt-P-vnet11' doesn't exist.
 Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -F libvirt-J-vnet11' failed: Chain 'libvirt-J-vnet11' doesn't exist.
 Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -X libvirt-J-vnet11' failed: Chain 'libvirt-J-vnet11' doesn't exist.
 Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -F libvirt-P-vnet11' failed: Chain 'libvirt-P-vnet11' doesn't exist.
 Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -X libvirt-P-vnet11' failed: Chain 'libvirt-P-vnet11' doesn't exist.
 Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -F J-vnet11-mac' failed: Chain 'J-vnet11-mac' doesn't exist.
 Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -X J-vnet11-mac' failed: Chain 'J-vnet11-mac' doesn't exist.
 Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -F J-vnet11-arp-mac' failed: Chain 'J-vnet11-arp-mac' doesn't exist.
 Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -X J-vnet11-arp-mac' failed: Chain 'J-vnet11-arp-mac' doesn't exist.

 Thanks

 kasturi

 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>>
> 
> Hello Michal,
> 
> Could you please take a look? Are you familiar with such an issue?

No, this is the first time I've seen it. What's the reproducer? Or have you
just removed some ebtables rules, and therefore libvirt complains?

Michal
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
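The firewalld COMMAND_FAILED lines above come from cleanup of per-interface ebtables chains that are already gone. A minimal sketch of how one might pull the affected chain names out of such log lines; the helper name and sample lines are illustrative, not taken from the affected hosts:

```python
import re

# The two firewalld COMMAND_FAILED variants seen in the log above.
MISSING_CHAIN = re.compile(r"Chain '([^']+)' doesn't exist")
BAD_TARGET = re.compile(r"Illegal target name '([^']+)'")

def missing_ebtables_chains(log_lines):
    """Collect ebtables chain names that firewalld reported as absent."""
    chains = set()
    for line in log_lines:
        if 'COMMAND_FAILED' not in line:
            continue
        for pattern in (MISSING_CHAIN, BAD_TARGET):
            match = pattern.search(line)
            if match:
                chains.add(match.group(1))
    return sorted(chains)

sample = [
    "Aug 29 12:16:21 host firewalld: ERROR: COMMAND_FAILED: "
    "'/sbin/ebtables --concurrent -t nat -L libvirt-J-vnet11' "
    "failed: Chain 'libvirt-J-vnet11' doesn't exist.",
    "Aug 29 12:16:21 host firewalld: ERROR: COMMAND_FAILED: "
    "'/sbin/ebtables --concurrent -t nat -D PREROUTING -i vnet11 "
    "-j libvirt-J-vnet11' failed: Illegal target name 'libvirt-J-vnet11'.",
]
print(missing_ebtables_chains(sample))  # ['libvirt-J-vnet11']
```

Feeding the host's /var/log/messages through this would show whether the failures always concern the same vnet interfaces or are spread across all of them.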


Re: [ovirt-users] Problem starting VMs

2016-09-07 Thread Petr Horacek
2016-08-30 6:49 GMT+02:00 knarra :
> On 08/29/2016 07:13 PM, Petr Horacek wrote:
>>
>> Hello,
>>
>> could you please attach /var/log/vdsm/vdsm.log and
>> /var/log/vdsm/supervdsm.log here?
>>
>> Regards,
>> Petr
>
> Hi Petr,
>
> I am not able to send the vdsm and supervdsm logs by mail. I have shared
> them with you via Dropbox; I hope you received the email.
>
> Thanks
> kasturi
>
>>
>> 2016-08-29 11:51 GMT+02:00 knarra :
>>>
>>> Hi,
>>>
>>>  I am unable to launch VMs on one of my hosts. The problem is that the VM
>>> is stuck at "waiting for launch" and never comes up. I see the following
>>> messages in /var/log/messages. Can someone help me resolve the issue?
>>>
>>>
>>> Aug 29 12:16:20 rhsqa-grafton3 systemd-machined: New machine qemu-20-appwinvm19.
>>> Aug 29 12:16:20 rhsqa-grafton3 systemd: Started Virtual Machine qemu-20-appwinvm19.
>>> Aug 29 12:16:20 rhsqa-grafton3 systemd: Starting Virtual Machine qemu-20-appwinvm19.
>>> Aug 29 12:16:20 rhsqa-grafton3 kvm: 11 guests now active
>>> Aug 29 12:16:21 rhsqa-grafton3 kernel: device vnet11 entered promiscuous mode
>>> Aug 29 12:16:21 rhsqa-grafton3 kernel: ovirtmgmt: port 13(vnet11) entered forwarding state
>>> Aug 29 12:16:21 rhsqa-grafton3 kernel: ovirtmgmt: port 13(vnet11) entered forwarding state
>>> Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -D PREROUTING -i vnet11 -j libvirt-J-vnet11' failed: Illegal target name 'libvirt-J-vnet11'.
>>> Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet11 -j libvirt-P-vnet11' failed: Illegal target name 'libvirt-P-vnet11'.
>>> Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -L libvirt-J-vnet11' failed: Chain 'libvirt-J-vnet11' doesn't exist.
>>> Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -L libvirt-P-vnet11' failed: Chain 'libvirt-P-vnet11' doesn't exist.
>>> Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -F libvirt-J-vnet11' failed: Chain 'libvirt-J-vnet11' doesn't exist.
>>> Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -X libvirt-J-vnet11' failed: Chain 'libvirt-J-vnet11' doesn't exist.
>>> Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -F libvirt-P-vnet11' failed: Chain 'libvirt-P-vnet11' doesn't exist.
>>> Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -X libvirt-P-vnet11' failed: Chain 'libvirt-P-vnet11' doesn't exist.
>>> Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -F J-vnet11-mac' failed: Chain 'J-vnet11-mac' doesn't exist.
>>> Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -X J-vnet11-mac' failed: Chain 'J-vnet11-mac' doesn't exist.
>>> Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -F J-vnet11-arp-mac' failed: Chain 'J-vnet11-arp-mac' doesn't exist.
>>> Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -X J-vnet11-arp-mac' failed: Chain 'J-vnet11-arp-mac' doesn't exist.
>>>
>>> Thanks
>>>
>>> kasturi
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>
>
>

Hello Michal,

Could you please take a look? Are you familiar with such an issue?

Thanks a lot,
Petr
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Problem starting VMs

2016-08-29 Thread knarra

On 08/29/2016 07:13 PM, Petr Horacek wrote:

Hello,

could you please attach /var/log/vdsm/vdsm.log and
/var/log/vdsm/supervdsm.log here?

Regards,
Petr

Hi Petr,

   I am not able to send the vdsm and supervdsm logs by mail. I have
shared them with you via Dropbox; I hope you received the email.


Thanks
kasturi


2016-08-29 11:51 GMT+02:00 knarra :

Hi,

 I am unable to launch VMs on one of my hosts. The problem is that the VM is
stuck at "waiting for launch" and never comes up. I see the following messages
in /var/log/messages. Can someone help me resolve the issue?


Aug 29 12:16:20 rhsqa-grafton3 systemd-machined: New machine qemu-20-appwinvm19.
Aug 29 12:16:20 rhsqa-grafton3 systemd: Started Virtual Machine qemu-20-appwinvm19.
Aug 29 12:16:20 rhsqa-grafton3 systemd: Starting Virtual Machine qemu-20-appwinvm19.
Aug 29 12:16:20 rhsqa-grafton3 kvm: 11 guests now active
Aug 29 12:16:21 rhsqa-grafton3 kernel: device vnet11 entered promiscuous mode
Aug 29 12:16:21 rhsqa-grafton3 kernel: ovirtmgmt: port 13(vnet11) entered forwarding state
Aug 29 12:16:21 rhsqa-grafton3 kernel: ovirtmgmt: port 13(vnet11) entered forwarding state
Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -D PREROUTING -i vnet11 -j libvirt-J-vnet11' failed: Illegal target name 'libvirt-J-vnet11'.
Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet11 -j libvirt-P-vnet11' failed: Illegal target name 'libvirt-P-vnet11'.
Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -L libvirt-J-vnet11' failed: Chain 'libvirt-J-vnet11' doesn't exist.
Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -L libvirt-P-vnet11' failed: Chain 'libvirt-P-vnet11' doesn't exist.
Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -F libvirt-J-vnet11' failed: Chain 'libvirt-J-vnet11' doesn't exist.
Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -X libvirt-J-vnet11' failed: Chain 'libvirt-J-vnet11' doesn't exist.
Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -F libvirt-P-vnet11' failed: Chain 'libvirt-P-vnet11' doesn't exist.
Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -X libvirt-P-vnet11' failed: Chain 'libvirt-P-vnet11' doesn't exist.
Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -F J-vnet11-mac' failed: Chain 'J-vnet11-mac' doesn't exist.
Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -X J-vnet11-mac' failed: Chain 'J-vnet11-mac' doesn't exist.
Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -F J-vnet11-arp-mac' failed: Chain 'J-vnet11-arp-mac' doesn't exist.
Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -X J-vnet11-arp-mac' failed: Chain 'J-vnet11-arp-mac' doesn't exist.

Thanks

kasturi

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Problem starting VMs

2016-08-29 Thread Petr Horacek
Hello,

could you please attach /var/log/vdsm/vdsm.log and
/var/log/vdsm/supervdsm.log here?

Regards,
Petr

2016-08-29 11:51 GMT+02:00 knarra :
> Hi,
>
> I am unable to launch VMs on one of my hosts. The problem is that the VM is
> stuck at "waiting for launch" and never comes up. I see the following
> messages in /var/log/messages. Can someone help me resolve the issue?
>
>
> Aug 29 12:16:20 rhsqa-grafton3 systemd-machined: New machine qemu-20-appwinvm19.
> Aug 29 12:16:20 rhsqa-grafton3 systemd: Started Virtual Machine qemu-20-appwinvm19.
> Aug 29 12:16:20 rhsqa-grafton3 systemd: Starting Virtual Machine qemu-20-appwinvm19.
> Aug 29 12:16:20 rhsqa-grafton3 kvm: 11 guests now active
> Aug 29 12:16:21 rhsqa-grafton3 kernel: device vnet11 entered promiscuous mode
> Aug 29 12:16:21 rhsqa-grafton3 kernel: ovirtmgmt: port 13(vnet11) entered forwarding state
> Aug 29 12:16:21 rhsqa-grafton3 kernel: ovirtmgmt: port 13(vnet11) entered forwarding state
> Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -D PREROUTING -i vnet11 -j libvirt-J-vnet11' failed: Illegal target name 'libvirt-J-vnet11'.
> Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet11 -j libvirt-P-vnet11' failed: Illegal target name 'libvirt-P-vnet11'.
> Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -L libvirt-J-vnet11' failed: Chain 'libvirt-J-vnet11' doesn't exist.
> Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -L libvirt-P-vnet11' failed: Chain 'libvirt-P-vnet11' doesn't exist.
> Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -F libvirt-J-vnet11' failed: Chain 'libvirt-J-vnet11' doesn't exist.
> Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -X libvirt-J-vnet11' failed: Chain 'libvirt-J-vnet11' doesn't exist.
> Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -F libvirt-P-vnet11' failed: Chain 'libvirt-P-vnet11' doesn't exist.
> Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -X libvirt-P-vnet11' failed: Chain 'libvirt-P-vnet11' doesn't exist.
> Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -F J-vnet11-mac' failed: Chain 'J-vnet11-mac' doesn't exist.
> Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -X J-vnet11-mac' failed: Chain 'J-vnet11-mac' doesn't exist.
> Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -F J-vnet11-arp-mac' failed: Chain 'J-vnet11-arp-mac' doesn't exist.
> Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -X J-vnet11-arp-mac' failed: Chain 'J-vnet11-arp-mac' doesn't exist.
>
> Thanks
>
> kasturi
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Problem starting VMs

2016-08-29 Thread knarra

Hi,

I am unable to launch VMs on one of my hosts. The problem is that the VM is
stuck at "waiting for launch" and never comes up. I see the following
messages in /var/log/messages. Can someone help me resolve the issue?



Aug 29 12:16:20 rhsqa-grafton3 systemd-machined: New machine qemu-20-appwinvm19.
Aug 29 12:16:20 rhsqa-grafton3 systemd: Started Virtual Machine qemu-20-appwinvm19.
Aug 29 12:16:20 rhsqa-grafton3 systemd: Starting Virtual Machine qemu-20-appwinvm19.
Aug 29 12:16:20 rhsqa-grafton3 kvm: 11 guests now active
Aug 29 12:16:21 rhsqa-grafton3 kernel: device vnet11 entered promiscuous mode
Aug 29 12:16:21 rhsqa-grafton3 kernel: ovirtmgmt: port 13(vnet11) entered forwarding state
Aug 29 12:16:21 rhsqa-grafton3 kernel: ovirtmgmt: port 13(vnet11) entered forwarding state
Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -D PREROUTING -i vnet11 -j libvirt-J-vnet11' failed: Illegal target name 'libvirt-J-vnet11'.
Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet11 -j libvirt-P-vnet11' failed: Illegal target name 'libvirt-P-vnet11'.
Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -L libvirt-J-vnet11' failed: Chain 'libvirt-J-vnet11' doesn't exist.
Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -L libvirt-P-vnet11' failed: Chain 'libvirt-P-vnet11' doesn't exist.
Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -F libvirt-J-vnet11' failed: Chain 'libvirt-J-vnet11' doesn't exist.
Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -X libvirt-J-vnet11' failed: Chain 'libvirt-J-vnet11' doesn't exist.
Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -F libvirt-P-vnet11' failed: Chain 'libvirt-P-vnet11' doesn't exist.
Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -X libvirt-P-vnet11' failed: Chain 'libvirt-P-vnet11' doesn't exist.
Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -F J-vnet11-mac' failed: Chain 'J-vnet11-mac' doesn't exist.
Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -X J-vnet11-mac' failed: Chain 'J-vnet11-mac' doesn't exist.
Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -F J-vnet11-arp-mac' failed: Chain 'J-vnet11-arp-mac' doesn't exist.
Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -X J-vnet11-arp-mac' failed: Chain 'J-vnet11-arp-mac' doesn't exist.

Thanks

kasturi

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Problem starting VMs

2016-08-25 Thread Wolfgang Bucher
Hi,



With XFS.



Wolfgang



-Original Message-
From: Yaniv Kaul <yk...@redhat.com>
Sent: Thu 25 August 2016 13:01
To: Wolfgang Bucher <wolfgang.buc...@netland-mn.de>
Subject: Re: [ovirt-users] Problem starting VMs



On Wed, Aug 24, 2016 at 7:37 PM, Wolfgang Bucher
<wolfgang.buc...@netland-mn.de> wrote:
Hello



Using kernel-lt from ELRepo solves all the problems!


With XFS, or EXT4?
 
Maybe it is a problem with the current kernel from CentOS?


It's an XFS issue, I think it'll be fixed for 7.3.
Y.
 


Thanks 



Wolfgang



-Original Message-
From: Wolfgang Bucher <wolfgang.buc...@netland-mn.de>
Sent: Tue 23 August 2016 20:14
To: Yaniv Kaul <yk...@redhat.com>
CC: users@ovirt.org (users@ovirt.org) <users@ovirt.org>
Subject: Re: [ovirt-users] Problem starting VMs

Hello



I just changed XFS to ext4 and fragmentation is much better than before; I
will also try kernel-lt from ELRepo and test it again.



Thanks



Wolfgang



-Original Message-
From: Yaniv Kaul <yk...@redhat.com>
Sent: Tue 23 August 2016 19:46
To: Wolfgang Bucher <wolfgang.buc...@netland-mn.de>
CC: Charles Gruener <cgrue...@gruener.us>
Subject: Re: AW: [ovirt-users] Problem starting VMs


On Aug 23, 2016 7:33 PM, "Wolfgang Bucher" <wolfgang.buc...@netland-mn.de> wrote:
 >
 > Hello
 >
 >
 > I am using local storage with adaptec raid controller, disk format is raw

Raw is always raw-sparse, so that may explain this somehow, yet still odd that 
Windows installation would cause so much fragmentation.
 I wonder if using the discard hook (and IDE or virtio-scsi) would help - or 
perhaps using a qcow2 makes more sense (create a snapshot right after disk 
creation for example).

>
 >
 > image: e4d797d1-5719-48d0-891e-a36cd4a79c33
 > file format: raw
 > virtual size: 50G (53687091200 bytes)
 > disk size: 8.5G
 >
 >
 > This is a fresh installation of W2012; after installation I got the
 > following from xfs_db -c frag -r /dev/sdb1:
 >
 > actual 407974, ideal 35, fragmentation factor 99.99%

And the XFS formatted with default parameters?
 Y.
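For reference, xfs_db reports the fragmentation factor as the fraction of extents beyond the ideal layout, i.e. (actual - ideal) / actual. A small sketch (assuming that formula) reproducing the 99.99% figure quoted above:

```python
def xfs_fragmentation_factor(actual_extents, ideal_extents):
    """Fragmentation factor in percent, as computed by `xfs_db -c frag`:
    the share of extents beyond what an ideally laid-out file would need."""
    return 100.0 * (actual_extents - ideal_extents) / actual_extents

# Numbers from the xfs_db output quoted above: 407974 actual extents
# where 35 would suffice.
factor = xfs_fragmentation_factor(407974, 35)
print(f"{factor:.2f}%")  # 99.99%
```

Note the factor saturates near 100% quickly: even a file with only twice the ideal extent count already reports 50%, so the headline percentage is less informative than the raw extent counts.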

>
 >
 > Thanks
 >
 >
 > Wolfgang
 >
 >
 >> -Original Message-
 >> From: Yaniv Kaul <yk...@redhat.com>
 >> Sent: Tue 23 August 2016 17:56
 >> To: Wolfgang Bucher <wolfgang.buc...@netland-mn.de>
 >> CC: Michal Skrivanek <michal.skriva...@redhat.com>; users@ovirt.org (users@ovirt.org) <users@ovirt.org>
 >>
 >> Subject: Re: [ovirt-users] Problem starting VMs
 >>
 >>
 >>
 >> On Tue, Aug 23, 2016 at 6:40 PM, Wolfgang Bucher
 >> <wolfgang.buc...@netland-mn.de> wrote:
 >>>
 >>> Hello
 >>>
 >>>
 >>> In /var/log/messages I get the following:
 >>>
 >>>
 >>> kernel: XFS: possible memory allocation deadlock in kmem_alloc (mode:0x250)
 >>>
 >>>
 >>> I have this problem on 4 different hosts.
 >>>
 >>>
 >>> This happens while copying files from the network to a thin-provisioned
 >>> disk; there are no problems with preallocated disks.
 >>
 >>
 >> What kind of storage are you using? local storage? Even though, it makes 
 >> little sense to me - the disk is a qcow2 disk, which shouldn't be very 
 >> fragmented as you might think (qcow2 grows in 64K chunks).
 >> It may grow and grow and grow (until you sparsify it), but that's not going 
 >> to cause fragmentation. What causes it to be fragmented? Perhaps the 
 >> internal qcow2 mapping is quite fragmented?
 >> Y.
 >>  
 >>>
 >>>
 >>> Thanks
 >>>
 >>> Wolfgang
 >>>
 >>>
 >>>> -Original Message-
 >>>> From: Michal Skrivanek <michal.skriva...@redhat.com>
 >>>> Sent: Tue 23 August 2016 17:11
 >>>> To: Wolfgang Bucher <wolfgang.buc...@netland-mn.de>

Re: [ovirt-users] Problem starting VMs

2016-08-24 Thread Wolfgang Bucher
Hello



Using kernel-lt from ELRepo solves all the problems!

Maybe it is a problem with the current kernel from CentOS?



Thanks 



Wolfgang



-Original Message-
From: Wolfgang Bucher <wolfgang.buc...@netland-mn.de>
Sent: Tue 23 August 2016 20:14
To: Yaniv Kaul <yk...@redhat.com>
CC: users@ovirt.org (users@ovirt.org) <users@ovirt.org>
Subject: Re: [ovirt-users] Problem starting VMs

Hello



I just changed XFS to ext4 and fragmentation is much better than before; I
will also try kernel-lt from ELRepo and test it again.



Thanks



Wolfgang



-Original Message-
From: Yaniv Kaul <yk...@redhat.com>
Sent: Tue 23 August 2016 19:46
To: Wolfgang Bucher <wolfgang.buc...@netland-mn.de>
CC: Charles Gruener <cgrue...@gruener.us>
Subject: Re: AW: [ovirt-users] Problem starting VMs


On Aug 23, 2016 7:33 PM, "Wolfgang Bucher" <wolfgang.buc...@netland-mn.de> wrote:
 >
 > Hello
 >
 >
 > I am using local storage with adaptec raid controller, disk format is raw

Raw is always raw-sparse, so that may explain this somehow, yet still odd that 
Windows installation would cause so much fragmentation.
 I wonder if using the discard hook (and IDE or virtio-scsi) would help - or 
perhaps using a qcow2 makes more sense (create a snapshot right after disk 
creation for example).

>
 >
 > image: e4d797d1-5719-48d0-891e-a36cd4a79c33
 > file format: raw
 > virtual size: 50G (53687091200 bytes)
 > disk size: 8.5G
 >
 >
 > This is a fresh installation of W2012; after installation I got the
 > following from xfs_db -c frag -r /dev/sdb1:
 >
 > actual 407974, ideal 35, fragmentation factor 99.99%

And the XFS formatted with default parameters?
 Y.

>
 >
 > Thanks
 >
 >
 > Wolfgang
 >
 >
 >> -Original Message-
 >> From: Yaniv Kaul <yk...@redhat.com>
 >> Sent: Tue 23 August 2016 17:56
 >> To: Wolfgang Bucher <wolfgang.buc...@netland-mn.de>
 >> CC: Michal Skrivanek <michal.skriva...@redhat.com>; users@ovirt.org (users@ovirt.org) <users@ovirt.org>
 >>
 >> Subject: Re: [ovirt-users] Problem starting VMs
 >>
 >>
 >>
 >> On Tue, Aug 23, 2016 at 6:40 PM, Wolfgang Bucher
 >> <wolfgang.buc...@netland-mn.de> wrote:
 >>>
 >>> Hello
 >>>
 >>>
 >>> In /var/log/messages I get the following:
 >>>
 >>>
 >>> kernel: XFS: possible memory allocation deadlock in kmem_alloc (mode:0x250)
 >>>
 >>>
 >>> I have this problem on 4 different hosts.
 >>>
 >>>
 >>> This happens while copying files from the network to a thin-provisioned
 >>> disk; there are no problems with preallocated disks.
 >>
 >>
 >> What kind of storage are you using? local storage? Even though, it makes 
 >> little sense to me - the disk is a qcow2 disk, which shouldn't be very 
 >> fragmented as you might think (qcow2 grows in 64K chunks).
 >> It may grow and grow and grow (until you sparsify it), but that's not going 
 >> to cause fragmentation. What causes it to be fragmented? Perhaps the 
 >> internal qcow2 mapping is quite fragmented?
 >> Y.
 >>  
 >>>
 >>>
 >>> Thanks
 >>>
 >>> Wolfgang
 >>>
 >>>
 >>>> -Original Message-
 >>>> From: Michal Skrivanek <michal.skriva...@redhat.com>
 >>>> Sent: Tue 23 August 2016 17:11
 >>>> To: Wolfgang Bucher <wolfgang.buc...@netland-mn.de>
 >>>> CC: Milan Zamazal <mzama...@redhat.com>; users@ovirt.org (users@ovirt.org) <users@ovirt.org>
 >>>> Subject: Re: [ovirt-users] Problem starting VMs
 >>>>
 >>>>
 >>>>> On 23 Aug 2016, at 11:06, Wolfgang Bucher
 >>>>> <wolfgang.buc...@netland-mn.de> wrote:
 >>>>>
 >>>>> Thanks,
 >>>>>
 >>>>> but what do you mean by "initialization is finished"?
 >>>>
 >

Re: [ovirt-users] Problem starting VMs

2016-08-23 Thread Wolfgang Bucher
Hello



I just changed XFS to ext4 and fragmentation is much better than before; I
will also try kernel-lt from ELRepo and test it again.



Thanks



Wolfgang



-Original Message-
From: Yaniv Kaul <yk...@redhat.com>
Sent: Tue 23 August 2016 19:46
To: Wolfgang Bucher <wolfgang.buc...@netland-mn.de>
CC: Charles Gruener <cgrue...@gruener.us>
Subject: Re: AW: [ovirt-users] Problem starting VMs


On Aug 23, 2016 7:33 PM, "Wolfgang Bucher" <wolfgang.buc...@netland-mn.de> wrote:
 >
 > Hello
 >
 >
 > I am using local storage with adaptec raid controller, disk format is raw

Raw is always raw-sparse, so that may explain this somehow, yet still odd that 
Windows installation would cause so much fragmentation.
 I wonder if using the discard hook (and IDE or virtio-scsi) would help - or 
perhaps using a qcow2 makes more sense (create a snapshot right after disk 
creation for example).

>
 >
 > image: e4d797d1-5719-48d0-891e-a36cd4a79c33
 > file format: raw
 > virtual size: 50G (53687091200 bytes)
 > disk size: 8.5G
 >
 >
 > This is a fresh installation of W2012; after installation I got the
 > following from xfs_db -c frag -r /dev/sdb1:
 >
 > actual 407974, ideal 35, fragmentation factor 99.99%

And the XFS formatted with default parameters?
 Y.

>
 >
 > Thanks
 >
 >
 > Wolfgang
 >
 >
 >> -Original Message-
 >> From: Yaniv Kaul <yk...@redhat.com>
 >> Sent: Tue 23 August 2016 17:56
 >> To: Wolfgang Bucher <wolfgang.buc...@netland-mn.de>
 >> CC: Michal Skrivanek <michal.skriva...@redhat.com>; users@ovirt.org (users@ovirt.org) <users@ovirt.org>
 >>
 >> Subject: Re: [ovirt-users] Problem starting VMs
 >>
 >>
 >>
 >> On Tue, Aug 23, 2016 at 6:40 PM, Wolfgang Bucher
 >> <wolfgang.buc...@netland-mn.de> wrote:
 >>>
 >>> Hello
 >>>
 >>>
 >>> In /var/log/messages I get the following:
 >>>
 >>>
 >>> kernel: XFS: possible memory allocation deadlock in kmem_alloc (mode:0x250)
 >>>
 >>>
 >>> I have this problem on 4 different hosts.
 >>>
 >>>
 >>> This happens while copying files from the network to a thin-provisioned
 >>> disk; there are no problems with preallocated disks.
 >>
 >>
 >> What kind of storage are you using? local storage? Even though, it makes 
 >> little sense to me - the disk is a qcow2 disk, which shouldn't be very 
 >> fragmented as you might think (qcow2 grows in 64K chunks).
 >> It may grow and grow and grow (until you sparsify it), but that's not going 
 >> to cause fragmentation. What causes it to be fragmented? Perhaps the 
 >> internal qcow2 mapping is quite fragmented?
 >> Y.
 >>  
 >>>
 >>>
 >>> Thanks
 >>>
 >>> Wolfgang
 >>>
 >>>
 >>>> -Original Message-
 >>>> From: Michal Skrivanek <michal.skriva...@redhat.com>
 >>>> Sent: Tue 23 August 2016 17:11
 >>>> To: Wolfgang Bucher <wolfgang.buc...@netland-mn.de>
 >>>> CC: Milan Zamazal <mzama...@redhat.com>; users@ovirt.org (users@ovirt.org) <users@ovirt.org>
 >>>> Subject: Re: [ovirt-users] Problem starting VMs
 >>>>
 >>>>
 >>>>> On 23 Aug 2016, at 11:06, Wolfgang Bucher
 >>>>> <wolfgang.buc...@netland-mn.de> wrote:
 >>>>>
 >>>>> Thanks,
 >>>>>
 >>>>> but what do you mean by "initialization is finished"?
 >>>>
 >>>>
 >>>> until it gets from WaitForLaunch to PoweringUp state, effectively until 
 >>>> the qemu process properly starts up
 >>>>
 >>>>>
 >>>>> sometimes the vm crashes while copying files!
 >>>>
 >>>>
 >>>> when exactly? Can you describe exactly what you are doing and what is 
 >>>> reported as a reason for c

Re: [ovirt-users] Problem starting VMs

2016-08-23 Thread Yaniv Kaul
On Tue, Aug 23, 2016 at 6:40 PM, Wolfgang Bucher
<wolfgang.buc...@netland-mn.de> wrote:

> Hello
>
>
> In /var/log/messages I get the following:
>
>
> kernel: XFS: possible memory allocation deadlock in kmem_alloc (mode:0x250)
>
>
> I have this problem on 4 different hosts.
>
>
> This happens while copying files from the network to a thin-provisioned
> disk; there are no problems with preallocated disks.
>

What kind of storage are you using? Local storage? Even so, it makes
little sense to me - the disk is a qcow2 disk, which shouldn't be as
fragmented as you might think (qcow2 grows in 64K chunks).
It may grow and grow and grow (until you sparsify it), but that's not going
to cause fragmentation. What causes it to be fragmented? Perhaps the
internal qcow2 mapping is quite fragmented?
Y.
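[Editor's note: the chunked growth of a thin image described above can be seen with an ordinary sparse file. A minimal sketch, assuming a Unix filesystem with sparse-file support; the sizes and offsets are illustrative only, not taken from any real qcow2 image:]

```python
import os
import tempfile

# A thin image starts sparse: large logical size, almost no blocks
# allocated. Allocation then grows only where data is written, in
# chunks (qcow2 uses 64K clusters by default).
fd, path = tempfile.mkstemp()
try:
    os.truncate(fd, 100 * 1024 * 1024)      # logical size: 100 MiB, no data yet
    blocks_sparse = os.fstat(fd).st_blocks  # 512-byte blocks actually allocated
    for cluster in (0, 512, 1024):          # three scattered 64 KiB writes
        os.pwrite(fd, b"\0" * 65536, cluster * 65536)
    os.fsync(fd)                            # force the filesystem to allocate
    blocks_grown = os.fstat(fd).st_blocks
finally:
    os.close(fd)
    os.unlink(path)

print(blocks_sparse, blocks_grown)          # allocation grew only where written
```

[Scattered writes like these, repeated over a 100G copy, are what can leave the backing file fragmented on the host filesystem.]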


>
> Thanks
>
> Wolfgang
>
>
> -Original Message-
> *From:* Michal Skrivanek <michal.skriva...@redhat.com>
> *Sent:* Tue 23 August 2016 17:11
> *To:* Wolfgang Bucher <wolfgang.buc...@netland-mn.de>
> *CC:* Milan Zamazal <mzama...@redhat.com>; users@ovirt.org <users@ovirt.org>
> *Subject:* Re: [ovirt-users] Problem starting VMs
>
>
> On 23 Aug 2016, at 11:06, Wolfgang Bucher <wolfgang.buc...@netland-mn.de>
> wrote:
>
> Thanks
>
> but what do you mean with "initialization is finished"
>
>
> until it gets from WaitForLaunch to PoweringUp state, effectively until
> the qemu process properly starts up
>
>
> sometimes the vm crashes while copying files!
>
>
> when exactly? Can you describe exactly what you are doing and what is
> reported as a reason for crash. When exactly does it crash and how?
>
> Thanks,
> michal
>
>
>
>
> Wolfgang
>
>
> -Original Message-
> *From:* Milan Zamazal <mzama...@redhat.com>
> *Sent:* Tue 23 August 2016 16:59
> *To:* Wolfgang Bucher <wolfgang.buc...@netland-mn.de>
> *CC:* users@ovirt.org <users@ovirt.org>
> *Subject:* Re: AW: Problem starting VMs
>
> Wolfgang Bucher <wolfgang.buc...@netland-mn.de> writes:
>
> > the problem is "waiting for launch" takes up to 20 min.
> >
> >
> > I did a lot of tests with some VMs and the problem is:
> >
> >
> > 1. create a win2012 server
> >
> > 2. attach a new disk (thin provision)
> >
> > 3. fill the disk from network share with about 100G
> >
> > 4. shutdown the vm
> >
> > 5. reboot host and try starting the vm (takes a lot of time until launch)
> >
> >
> >
> > It's because of a highly fragmented filesystem
>
> I see, thank you for the explanation.  In such a case, it may take a lot
> of time before all the initialization is finished.
>
> Regards,
> Milan
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Problem starting VMs

2016-08-23 Thread Wolfgang Bucher
Hello



in /var/log/messages I get the following:



kernel: XFS: possible memory allocation deadlock in kmem_alloc (mode:0x250)



I have this problem on 4 different hosts.



This happens while copying files from the network to a thin provisioned disk;
no problems with preallocated disks.



Thanks

Wolfgang



-Original Message-
From: Michal Skrivanek <michal.skriva...@redhat.com>
Sent: Tue 23 August 2016 17:11
To: Wolfgang Bucher <wolfgang.buc...@netland-mn.de>
CC: Milan Zamazal <mzama...@redhat.com>; users@ovirt.org <users@ovirt.org>
Subject: Re: [ovirt-users] Problem starting VMs


On 23 Aug 2016, at 11:06, Wolfgang Bucher <wolfgang.buc...@netland-mn.de> wrote:

Thanks

but what do you mean with "initialization is finished"

until it gets from WaitForLaunch to PoweringUp state, effectively until the 
qemu process properly starts up


sometimes the vm crashes while copying files!

when exactly? Can you describe exactly what you are doing and what is reported 
as a reason for crash. When exactly does it crash and how?

Thanks,
michal




Wolfgang


-Original Message-
From: Milan Zamazal <mzama...@redhat.com>
Sent: Tue 23 August 2016 16:59
To: Wolfgang Bucher <wolfgang.buc...@netland-mn.de>
CC: users@ovirt.org <users@ovirt.org>
Subject: Re: AW: Problem starting VMs


Wolfgang Bucher <wolfgang.buc...@netland-mn.de> writes:

> the problem is "waiting for launch" takes up to 20 min.
>
>
> I did a lot of tests with some VMs and the problem is:
>
>
> 1. create a win2012 server
>
> 2. attach a new disk (thin provision)
>
> 3. fill the disk from network share with about 100G
>
> 4. shutdown the vm
>
> 5. reboot host and try starting the vm (takes a lot of time until launch)
>
>
>
> It's because of a highly fragmented filesystem

I see, thank you for the explanation.  In such a case, it may take a lot
of time before all the initialization is finished.

Regards,
Milan


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Problem starting VMs

2016-08-23 Thread Michal Skrivanek

> On 23 Aug 2016, at 11:06, Wolfgang Bucher  
> wrote:
> 
> Thanks
> 
> but what do you mean with "initialization is finished"

until it gets from WaitForLaunch to PoweringUp state, effectively until the 
qemu process properly starts up

> 
> sometimes the vm crashes while copying files!

when exactly? Can you describe exactly what you are doing and what is reported 
as a reason for crash. When exactly does it crash and how?

Thanks,
michal
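[Editor's note: the WaitForLaunch to PoweringUp transition described above amounts to waiting for the qemu process to come up, which one can poll for. A rough sketch; `get_state` is a hypothetical callable standing in for an engine/vdsm state query, not a real API, and the state names follow this thread:]

```python
import time

# Poll a VM's reported state until it leaves "WaitForLaunch" or a
# timeout expires. Thread reports suggest this can take 15-20 minutes
# on a fragmented filesystem, hence the generous default timeout.
def wait_for_powering_up(get_state, timeout=1200.0, interval=2.0):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        state = get_state()
        if state != "WaitForLaunch":
            return state            # e.g. "PoweringUp" once qemu has started
        time.sleep(interval)
    raise TimeoutError("VM still in WaitForLaunch after %.0f s" % timeout)

# usage sketch: a canned state sequence stands in for real queries
states = iter(["WaitForLaunch", "WaitForLaunch", "PoweringUp"])
result = wait_for_powering_up(lambda: next(states), interval=0)
print(result)                       # prints PoweringUp
```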

> 
> 
> 
> Wolfgang
> 
> 
> -Original Message-
> From: Milan Zamazal
> Sent: Tue 23 August 2016 16:59
> To: Wolfgang Bucher
> CC: users@ovirt.org (users@ovirt.org)
> Subject: Re: AW: Problem starting VMs
> 
>  Wolfgang Bucher  writes:
> 
> > the problem is "waiting for launch" takes up to 20 min.
> >
> >
> > I did a lot of tests with some VMs and the problem is:
> >
> >
> > 1. create a win2012 server
> >
> > 2. attach a new disk (thin provision)
> >
> > 3. fill the disk from network share with about 100G
> >
> > 4. shutdown the vm
> >
> > 5. reboot host and try starting the vm (takes a lot of time until launch)
> >
> >
> >
> > It's because of a highly fragmented filesystem
> 
> I see, thank you for the explanation.  In such a case, it may take a lot
> of time before all the initialization is finished.
> 
> Regards,
> Milan

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Problem starting VMs

2016-08-23 Thread Wolfgang Bucher
Hello,


the problem is "waiting for launch" takes up to 20 min.


I did a lot of tests with some VMs and the problem is:


1. create a win2012 server

2. attach a new disk (thin provision)

3. fill the disk from network share with about 100G

4. shutdown the vm

5. reboot host and try starting the vm (takes a lot of time until launch)



It's because of a highly fragmented filesystem

Thanks



Wolfgang







-Original Message-
From: Milan Zamazal
Sent: Tue 23 August 2016 15:17
To: Wolfgang Bucher
CC: Users@ovirt.org
Subject: Re: Problem starting VMs


Wolfgang Bucher  writes:

> After reboot i cannot start some vms, and i get the following warnings in 
> vdsm.log:
>
> periodic/6::WARNING::2016-08-18
> 19:26:10,244::periodic::261::virt.periodic.VmDispatcher::(__call__) could not
> run  on
> [u'5c868b6a-db8e-4c67-a2b7-8bcdefc3350a']

[...]

> vmId=`5c868b6a-db8e-4c67-a2b7-8bcdefc3350a`::could not run on
> 5c868b6a-db8e-4c67-a2b7-8bcdefc3350a: domain not connected
> periodic/3::WARNING::2016-08-18

Those messages may be present on VM start and may (or may not) be
harmless.

[...]

> sometimes the vm starts after 15 min or more.

Do you mean that some VMs start after a long time and some don't start at
all?  If a VM doesn't start at all then there should be an ERROR
message in the log.  If all the VMs eventually start sooner or later,
then it would be useful to see the whole section of the log from the
initial VM.create call to the VM start.

And what do you mean exactly by the VM start?  Is it that a VM is not
booting in the meantime, is inaccessible, is indicated as starting or
not running in Engine, something else?

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Problem starting VMs

2016-08-23 Thread Milan Zamazal
Wolfgang Bucher  writes:

> After reboot i cannot start some vms, and i get the following warnings in 
> vdsm.log:
>
> periodic/6::WARNING::2016-08-18
> 19:26:10,244::periodic::261::virt.periodic.VmDispatcher::(__call__) could not
> run  on
> [u'5c868b6a-db8e-4c67-a2b7-8bcdefc3350a']

[...]

> vmId=`5c868b6a-db8e-4c67-a2b7-8bcdefc3350a`::could not run on
> 5c868b6a-db8e-4c67-a2b7-8bcdefc3350a: domain not connected
> periodic/3::WARNING::2016-08-18

Those messages may be present on VM start and may (or may not) be
harmless.

[...]

> sometimes the vm starts after 15 min or more.

Do you mean that some VMs start after a long time and some don't start at
all?  If a VM doesn't start at all then there should be an ERROR
message in the log.  If all the VMs eventually start sooner or later,
then it would be useful to see the whole section of the log from the
initial VM.create call to the VM start.

And what do you mean exactly by the VM start?  Is it that a VM is not
booting in the meantime, is inaccessible, is indicated as starting or
not running in Engine, something else?
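[Editor's note: the log review Milan suggests amounts to pulling every vdsm.log line for one vmId so the span from VM.create to the state change reads in one place. A small sketch; the sample lines are abbreviated stand-ins in the thread's log format, not a real vdsm.log:]

```python
# Collect every log line mentioning a given VM id, preserving order,
# so the window from the initial VM.create call to the VM start is
# easy to review.
SAMPLE_LOG = """\
jsonrpc/2::INFO::2016-08-18 19:25:59,100::API::... VM.create vmId=5c868b6a-db8e-4c67-a2b7-8bcdefc3350a
periodic/6::WARNING::2016-08-18 19:26:10,244::periodic::261::virt.periodic.VmDispatcher::(__call__) could not run on [u'5c868b6a-db8e-4c67-a2b7-8bcdefc3350a']
periodic/1::WARNING::2016-08-18 19:26:12,247::periodic::261::... unrelated vm [u'deadbeef-1111']
"""

def lines_for_vm(log_text, vm_id):
    """Return the log lines that mention the given VM id, in order."""
    return [line for line in log_text.splitlines() if vm_id in line]

matches = lines_for_vm(SAMPLE_LOG, "5c868b6a-db8e-4c67-a2b7-8bcdefc3350a")
print(len(matches))   # prints 2: the create call and the periodic warning
```

[In practice one would pass the contents of /var/log/vdsm/vdsm.log instead of the sample string; this is equivalent to `grep <vmId> vdsm.log`.]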
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Problem starting VMs

2016-08-18 Thread Wolfgang Bucher
Hello,



The installation is a single host with oVirt, no hosted engine. Version: oVirt 
Engine 3.6.7.5-1.el7.centos



After reboot i cannot start some vms, and i get the following warnings in 
vdsm.log:






periodic/6::WARNING::2016-08-18 
19:26:10,244::periodic::261::virt.periodic.VmDispatcher::(__call__) could not 
run  on 
[u'5c868b6a-db8e-4c67-a2b7-8bcdefc3350a']
periodic/3::WARNING::2016-08-18 
19:26:12,247::periodic::261::virt.periodic.VmDispatcher::(__call__) could not 
run  on 
[u'5c868b6a-db8e-4c67-a2b7-8bcdefc3350a']
periodic/0::WARNING::2016-08-18 
19:26:14,249::periodic::261::virt.periodic.VmDispatcher::(__call__) could not 
run  on 
[u'5c868b6a-db8e-4c67-a2b7-8bcdefc3350a']
periodic/6::WARNING::2016-08-18 
19:26:14,948::periodic::261::virt.periodic.VmDispatcher::(__call__) could not 
run  on 
[u'5c868b6a-db8e-4c67-a2b7-8bcdefc3350a']
periodic/6::WARNING::2016-08-18 
19:26:14,958::periodic::295::virt.vm::(__call__) 
vmId=`5c868b6a-db8e-4c67-a2b7-8bcdefc3350a`::could not run on 
5c868b6a-db8e-4c67-a2b7-8bcdefc3350a: domain not connected
periodic/3::WARNING::2016-08-18 
19:26:16,250::periodic::261::virt.periodic.VmDispatcher::(__call__) could not 
run  on 
[u'5c868b6a-db8e-4c67-a2b7-8bcdefc3350a']
periodic/0::WARNING::2016-08-18 
19:26:18,252::periodic::261::virt.periodic.VmDispatcher::(__call__) could not 
run  on 
[u'5c868b6a-db8e-4c67-a2b7-8bcdefc3350a']





sometimes the vm starts after 15 min or more.



Can you help me?



Thanks

Wolfgang


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users