Re: [Users] can't import vms from 3.1 ovirt to 3.2

2013-05-26 Thread Maor Lipchuk
Hi,
Can you please also attach the engine log and the full VDSM log?

In the log you sent I see this message:
supervdsm::190::SuperVdsmProxy::(_connect) Connect to svdsm failed
This behaviour could be related to https://bugzilla.redhat.com/910005,
which was fixed in a later version of VDSM.
But to be sure, we need to see the full logs.

Regards,
Maor


On 05/24/2013 04:58 PM, ov...@qip.ru wrote:
> I have an export/NFS domain created in oVirt 3.1 with VM copies. When I try to
> import a VM from this domain into oVirt 3.2, the VDSM host loses SPM status
> during the import and the process fails; see attachment. (DC 3.2 with one VDSM host)
> 
> The engine and VDSM are on different hosts and were installed on CentOS 6.4 from
> the dreyou repo.
> 
> on vdsm host
> 
> vdsm-xmlrpc-4.10.3-0.36.23.el6.noarch
> vdsm-4.10.3-0.36.23.el6.x86_64
> vdsm-cli-4.10.3-0.36.23.el6.noarch
> vdsm-python-4.10.3-0.36.23.el6.x86_64
> [root@kvm02 rhev]# rpm -qa | fgrep sanlock
> sanlock-lib-2.6-2.el6.x86_64
> sanlock-2.6-2.el6.x86_64
> sanlock-python-2.6-2.el6.x86_64
> libvirt-lock-sanlock-0.10.2-18.el6_4.4.x86_64
> 
> 
> I also tried importing VMs to a DC with the engine on Fedora 18 and VDSM on Fedora 18 from
> the oVirt 3.2 stable repo, but the result was the same (the VDSM host loses SPM status).
> 
> With an export/NFS domain created in 3.2, I can export and import VMs to/from it.
> 


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Can I delay boot of VM so to interact with grub?

2013-05-26 Thread Andrew Cathrow


- Original Message -
> From: "Gianluca Cecchi" 
> To: "users" 
> Sent: Sunday, May 26, 2013 1:00:32 PM
> Subject: [Users] Can I delay boot of VM so to interact with grub?
> 
> Hello,
> in VMware I can choose two things that are sometimes useful:
> 
> 1) boot directly into the BIOS
> I choose this when I want to attach an ISO image to the VM.
> In oVirt this could probably be obtained with run once and attach CD,
> but in general this capability would be interesting.
> 
> 2) choose a boot delay
> This way I can use the VM console before the OS starts and, if needed, I
> can interact with the boot loader
> (grub in a Linux VM, for example).
> 
> When I open a SPICE console, I can't interact with the VM before the OS
> is already starting, after grub.

Use "run once" to start the VM, and in the boot options pick "start paused".
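(Editor's note: for scripting, the engine REST API's start action can likely request the same paused start; this is only a hedged sketch - the engine URL, credentials, and VM id below are placeholders, and the exact XML may differ by oVirt version:)

```shell
# Hypothetical REST call asking the engine to start a VM paused, so a
# console can be attached before the guest begins booting. All values
# below (URL, user, password, VM id) are placeholders.
curl -k -u 'admin@internal:password' \
     -H 'Content-Type: application/xml' \
     -d '<action><pause>true</pause></action>' \
     'https://engine.example.com/api/vms/VM_ID/start'
```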


> 
> I see two options:
> - add a boot delay as in VMware
> - give the SPICE console the "-w" (wait) option that existed
> in virt-viewer but not in remote-viewer, so that one can spawn a
> console window before the VM has been started.
> 
> What do you think?
> Is there anything already in place to jump into a VM before grub
> completes (other than setting a long delay in grub itself)?
> 
> Thanks
> Gianluca


[Users] Can I delay boot of VM so to interact with grub?

2013-05-26 Thread Gianluca Cecchi
Hello,
in VMware I can choose two things that are sometimes useful:

1) boot directly into the BIOS
I choose this when I want to attach an ISO image to the VM.
In oVirt this could probably be obtained with run once and attach CD,
but in general this capability would be interesting.

2) choose a boot delay
This way I can use the VM console before the OS starts and, if needed, I
can interact with the boot loader
(grub in a Linux VM, for example).

When I open a SPICE console, I can't interact with the VM before the OS
is already starting, after grub.

I see two options:
- add a boot delay as in VMware
- give the SPICE console the "-w" (wait) option that existed
in virt-viewer but not in remote-viewer, so that one can spawn a
console window before the VM has been started.

What do you think?
Is there anything already in place to jump into a VM before grub
completes (other than setting a long delay in grub itself)?

Thanks
Gianluca


Re: [Users] Can host hooks be used to implement the following policy to avoid "boot storm"?

2013-05-26 Thread Itamar Heim

On 05/26/2013 01:43 PM, lofyer wrote:

On 2013/5/26 18:36, lofyer wrote:

On 2013/5/26 15:41, Itamar Heim wrote:

On 05/26/2013 09:59 AM, Roy Golan wrote:

On 05/24/2013 03:22 PM, Itamar Heim wrote:

On 05/23/2013 04:23 PM, lofyer wrote:

Can host hooks be used to implement the following policy to avoid a
"boot storm"?

1. Count and sort the VMs awaiting boot.
2. If the count is less than or equal to, for example, 5, boot all
the VMs and exit the script.
3. Otherwise, boot and dequeue vm0 to vm4, and decrease the count by 5.
4. Sleep some time, and go to step 2.
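(Editor's note: as a rough illustration, the four steps above could be scripted along these lines; start_vm is a stand-in for whatever actually powers a VM on, e.g. an engine API call, and the VM names are made up:)

```shell
#!/bin/sh
# Sketch of the queued-boot policy above. start_vm is a stand-in for the
# real power-on action (an engine API call, say); here it only logs.
STARTED=0
start_vm() {
    echo "starting $1"
    STARTED=$((STARTED + 1))
}

BATCH=5   # steps 2/3: boot at most 5 VMs per round
DELAY=1   # step 4: seconds to sleep between rounds
set -- vm0 vm1 vm2 vm3 vm4 vm5 vm6 vm7   # step 1: VMs awaiting boot, in order

while [ $# -gt 0 ]; do
    i=0
    # Boot up to BATCH VMs from the head of the queue.
    while [ $# -gt 0 ] && [ "$i" -lt "$BATCH" ]; do
        start_vm "$1"
        shift
        i=$((i + 1))
    done
    # Sleep only if VMs are still waiting, then go around again.
    if [ $# -gt 0 ]; then
        sleep "$DELAY"
    fi
done
```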


Hooks run on vm-start, one by one; it would be a bit tricky to do the
above.
Roy - didn't you add some mechanism to throttle boot storms in the
engine?

Yes, in RunVmCommandBase.delay(Guid vdsId) - but it doesn't fit here.
It delays the VM run only when there is no free memory to run the VM,
and waits until the engine signals after gathering statistics from
powering-up VMs.


lofyer - any other issue other than memory bottleneck you were trying
to resolve?

Thanks,
Itamar

I think it's only cpu_load here. If I start 8 VMs (1G memory each) on a 16G
host, it is very slow for the VMs to reach the desktop from the
moment I click "start".
At first I thought a solution would be to put these boot
processes into a linear sequence, or to delay each by a random period, using hooks.
Is there an option in the engine to arrange the boot sequence?
However, since each VM takes quite a long time to start up, there is no
need for that. Just plug in more CPUs.

Knowing that some other factors affect it, can I use an SSD as a
"boot cache"?


Not sure; worth trying out, I guess.


Re: [Users] Resize storage domain

2013-05-26 Thread Tal Nisan

On 05/22/2013 11:04 PM, Eduardo Ramos wrote:

Hi all!

I have an iSCSI domain based on an HP LeftHand cluster. Using the HP tool,
I resized the iSCSI volume without problems. On the SPM host, with
fdisk -l /dev/sdb, I can see the new size.

But now, how do I make the oVirt engine see the new size?


Hi Eduardo,

In case you have more than one host:
1. Put the domain in maintenance
2. Manually connect iSCSI on the SPM host
3. Run pvresize on the LUN
4. Activate the domain

In case you have only one host, just run pvresize on the disk.

Tal.
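(Editor's note: steps 2-4 of the multi-host procedure might look roughly like this on the SPM host; the iSCSI target, portal, and device path below are placeholders, not values from this thread:)

```shell
# Step 2: log in to the resized target manually (placeholder target/portal).
iscsiadm -m node -T iqn.2013-05.com.example:storage -p 10.0.0.1:3260 --login

# Step 3: identify the physical volume backing the storage domain, then
# grow it to match the new LUN size (placeholder device path).
pvs
pvresize /dev/mapper/36000eb3a0123456789abcdef01234567

# Step 4: re-activate the domain from the engine UI afterwards.
```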



Re: [Users] Bonding - VMs Network performance problem

2013-05-26 Thread Mike Kolesnik
- Original Message -

> Hi,

> I've got oVirt installed on 2 HP BL460c G6 blades, and my VMs have very poor
> network performance (around 7.01K/s).

> On the servers themselves there is no problem; I can download a file with wget
> at around 99M/s.

> Then I go to the oVirt network configuration, remove the bonding, and make the
> bonding again, and the problem gets fixed (I have to do this every time I
> reboot my blades).
Have you tried checking the "Save network configuration" check box, or clicking
the button in the host's NICs sub-tab?
This should persist the configuration that you set on the host across reboots.

> SERVER' s Software:
> CentOS 6.4 (64 bits) - 2.6.32-358.6.2.el6.x86_64
> Ovirt EL6 official rpms.

> Anyone experienced this kind of problems?

> Best regards,
> Ricardo Esteves.




Re: [Users] VM running on two hosts somehow`

2013-05-26 Thread Omer Frenkel


- Original Message -
> From: "Neil" 
> To: users@ovirt.org
> Sent: Friday, May 24, 2013 10:43:27 AM
> Subject: [Users] VM running on two hosts somehow`
> 
> Hi guys,
> 
> Sorry, I thought I'd start a new thread for this issue, as it's now a
> different problem from my original post "Migration failed due to
> Error: novm".
> 
> After my VM failed to migrate from one host to the other, the VM was
> still responsive but showed as powered off in oVirt, so I
> logged into the console on the Linux guest and rebooted it, which
> appears to have resolved the issue, as it now shows up in my engine as
> on. But I've just noticed I've got two instances of the VM running on two
> separate hosts...
> 

What is the status of the hosts (migration source and destination) in the
engine UI?
When the VM was down in the engine UI, you didn't start it again from there,
just restarted it from within the guest?

Looking at the logs from the other thread, it seems that for some reason the VM had a
balloon device with no spec-params;
this caused VDSM to fail to respond to the engine monitoring. I still need to
understand the engine behavior in this case;
I believe it stays UP but can't get any info from VDSM.
Can you find when these errors started in the VDSM log?
(I assume this is the migration-destination VDSM; maybe it started when VMs
migrated to this host?)

Can you please share the engine.log from the migration and restart time? (The log
from the other thread is too short; the migrate command info is not there.)

Thanks!

> On host 10.0.2.22
> 
> 15407 ?Sl   223:35 /usr/libexec/qemu-kvm -name zimbra -S -M
> rhel6.4.0 -cpu Westmere -enable-kvm -m 8192 -smp
> 4,sockets=1,cores=4,threads=1 -uuid
> 179c293b-e6a3-4ec6-a54c-2f92f875bc5e -smbios
> type=1,manufacturer=oVirt,product=oVirt
> Node,version=6-4.el6.centos.10,serial=4C4C4544-0038-5310-8050-C4C04F34354A,uuid=179c293b-e6a3-4ec6-a54c-2f92f875bc5e
> -nodefconfig -nodefaults -chardev
> socket,id=charmonitor,path=/var/lib/libvirt/qemu/zimbra.monitor,server,nowait
> -mon chardev=charmonitor,id=monitor,mode=control -rtc
> base=2013-05-23T15:07:39,driftfix=slew -no-shutdown -device
> virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x4
> -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive
> file=/rhev/data-center/28adaf38-a4f6-11e1-a859-cb68949043e4/0e6991ae-6238-4c61-96d2-ca8fed35161e/images/446921d9-cbd1-42b1-919f-88d6ae310fd9/2ff8ba31-7397-41e7-8a60-7ef9eec23d1a,if=none,id=drive-virtio-disk0,format=raw,serial=446921d9-cbd1-42b1-919f-88d6ae310fd9,cache=none,werror=stop,rerror=stop,aio=native
> -device
> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
> -drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,serial=
> -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0
> -netdev tap,fd=30,id=hostnet0,vhost=on,vhostfd=31 -device
> virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:a8:7a:01,bus=pci.0,addr=0x3
> -chardev
> socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/zimbra.com.redhat.rhevm.vdsm,server,nowait
> -device
> virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
> -chardev
> socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/zimbra.org.qemu.guest_agent.0,server,nowait
> -device
> virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
> -device usb-tablet,id=input0 -vnc 0:10,password -k en-us -vga cirrus
> 
> 
> 
> On Host 10.0.2.21
> 
> 17594 ?Sl   449:39 /usr/libexec/qemu-kvm -name zimbra -S -M
> rhel6.2.0 -cpu Westmere -enable-kvm -m 8192 -smp
> 4,sockets=1,cores=4,threads=1 -uuid
> 179c293b-e6a3-4ec6-a54c-2f92f875bc5e -smbios type=1,manufacturer=Red
> Hat,product=RHEV
> Hypervisor,version=6-2.el6.centos.7,serial=4C4C4544-0038-5310-8050-C4C04F34354A_BC:30:5B:E4:19:C2,uuid=179c293b-e6a3-4ec6-a54c-2f92f875bc5e
> -nodefconfig -nodefaults -chardev
> socket,id=charmonitor,path=/var/lib/libvirt/qemu/zimbra.monitor,server,nowait
> -mon chardev=charmonitor,id=monitor,mode=control -rtc
> base=2013-05-23T10:19:47,driftfix=slew -no-shutdown -device
> virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x4
> -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive
> file=/rhev/data-center/28adaf38-a4f6-11e1-a859-cb68949043e4/0e6991ae-6238-4c61-96d2-ca8fed35161e/images/446921d9-cbd1-42b1-919f-88d6ae310fd9/2ff8ba31-7397-41e7-8a60-7ef9eec23d1a,if=none,id=drive-virtio-disk0,format=raw,serial=446921d9-cbd1-42b1-919f-88d6ae310fd9,cache=none,werror=stop,rerror=stop,aio=native
> -device
> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
> -drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,serial=
> -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0
> -netdev tap,fd=29,id=hostnet0,vhost=on,vhostfd=31 -device
> virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:a8:7a:01,bus=pci.0,addr=0x3