Re: [ovirt-users] [Centos7.1x64] [Ovirt 3.5.2] Test fence : Power management test failed for Host hosted_engine1 Done

2015-06-10 Thread wodel youchi
Hi,

The engine log is already in debug mode.

Here it is:
2015-06-10 11:48:23,653 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(ajp--127.0.0.1-8702-12) Correlation ID: null, Call Stack: null, Custom
Event ID: -1, Message: Host hosted_engine_2 from cluster Default was chosen
as a proxy to execute Status command on Host hosted_engine_1.
2015-06-10 11:48:23,653 INFO  [org.ovirt.engine.core.bll.FenceExecutor]
(ajp--127.0.0.1-8702-12) Using Host hosted_engine_2 from cluster Default as
proxy to execute Status command on Host
2015-06-10 11:48:23,673 INFO  [org.ovirt.engine.core.bll.FenceExecutor]
(ajp--127.0.0.1-8702-12) Executing Status Power Management command, Proxy
Host:hosted_engine_2, Agent:ipmilan, Target Host:, Management
IP:192.168.2.2, User:Administrator, Options: power_wait=60,lanplus=1,
Fencing policy:null
2015-06-10 11:48:23,703 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand]
(ajp--127.0.0.1-8702-12) START, FenceVdsVDSCommand(HostName =
hosted_engine_2, HostId = 0192d1ac-b905-4660-b149-4bef578985dd, targetVdsId
= cf2d1260-7bb3-451a-9cd7-80e6a0ede52a, action = Status, ip = 192.168.2.2,
port = , type = ipmilan, user = Administrator, password = **, options =
' power_wait=60,lanplus=1', policy = 'null'), log id: 2bda01bd
2015-06-10 11:48:23,892 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(ajp--127.0.0.1-8702-12) Correlation ID: null, Call Stack: null, Custom
Event ID: -1, Message: Power Management test failed for Host
hosted_engine_1.Done
2015-06-10 11:48:23,892 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand]
(ajp--127.0.0.1-8702-12) FINISH, FenceVdsVDSCommand, return:
Test Succeeded, unknown, log id: 2bda01bd
2015-06-10 11:48:23,897 WARN
[org.ovirt.engine.core.bll.FenceExecutor] (ajp--127.0.0.1-8702-12) Fencing
operation failed with proxy host 0192d1ac-b905-4660-b149-4bef578985dd,
trying another proxy...
2015-06-10 11:48:24,039 ERROR [org.ovirt.engine.core.bll.FenceExecutor]
(ajp--127.0.0.1-8702-12) Failed to run Power Management command on Host ,
no running proxy Host was found.
2015-06-10 11:48:24,039 WARN  [org.ovirt.engine.core.bll.FenceExecutor]
(ajp--127.0.0.1-8702-12) Failed to find other proxy to re-run failed fence
operation, retrying with the same proxy...
2015-06-10 11:48:24,143 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(ajp--127.0.0.1-8702-12) Correlation ID: null, Call Stack: null, Custom
Event ID: -1, Message: Host hosted_engine_2 from cluster Default was chosen
as a proxy to execute Status command on Host hosted_engine_1.
2015-06-10 11:48:24,143 INFO  [org.ovirt.engine.core.bll.FenceExecutor]
(ajp--127.0.0.1-8702-12) Using Host hosted_engine_2 from cluster Default as
proxy to execute Status command on Host

2015-06-10 11:48:24,148 INFO  [org.ovirt.engine.core.bll.FenceExecutor]
(ajp--127.0.0.1-8702-12) Executing Status Power Management command, Proxy
Host:hosted_engine_2, Agent:ipmilan, Target Host:, Management
IP:192.168.2.2, User:Administrator, Options: power_wait=60,lanplus=1,
Fencing policy:null
2015-06-10 11:48:24,165 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand]
(ajp--127.0.0.1-8702-12) START, FenceVdsVDSCommand(HostName =
hosted_engine_2, HostId = 0192d1ac-b905-4660-b149-4bef578985dd, targetVdsId
= cf2d1260-7bb3-451a-9cd7-80e6a0ede52a, action = Status, ip = 192.168.2.2,
port = , type = ipmilan, user = Administrator, password = **, options =
' power_wait=60,lanplus=1', policy = 'null'), log id: 7e7f2726
2015-06-10 11:48:24,360 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(ajp--127.0.0.1-8702-12) Correlation ID: null, Call Stack: null, Custom
Event ID: -1, Message: Power Management test failed for Host
hosted_engine_1.Done
2015-06-10 11:48:24,360 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand]
(ajp--127.0.0.1-8702-12) FINISH, FenceVdsVDSCommand, return: Test
Succeeded, unknown, log id: 7e7f2726


VDSM log from hosted_engine_2

JsonRpcServer::DEBUG::2015-06-10
11:48:23,640::__init__::506::jsonrpc.JsonRpcServer::(serve_requests)
Waiting for request
Thread-2201::DEBUG::2015-06-10 11:48:23,642::API::1209::vds::(fenceNode)
fenceNode(addr=192.168.2.2,port=,agent=ipmilan,user=Administrator,passwd=,action=status,secure=False,options=
 power_wait=60lanplus=1,policy=None)
Thread-2201::DEBUG::2015-06-10 11:48:23,642::utils::739::root::(execCmd)
/usr/sbin/fence_ipmilan (cwd None)
Thread-2201::DEBUG::2015-06-10
11:48:23,709::utils::759::root::(execCmd) FAILED:
err = 'Failed: Unable to obtain correct plug status or plug is not
available\n\n\n'; rc = 1
Thread-2201::DEBUG::2015-06-10 11:48:23,710::API::1164::vds::(fence) rc 1
inp agent=fence_ipmilan
ipaddr=192.168.2.2
login=Administrator
action=status
passwd=
 power_wait=60
lanplus=1 out [] err ['Failed: Unable to obtain correct plug status or
plug is not available', '', '']
Thread-2201::DEBUG::2015-06-10 
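
For reference, the same status call can be reproduced by hand on the proxy host
by feeding the agent the key=value options from the 'inp' block above on stdin.
A rough sketch (YOUR_PASSWORD is a placeholder):

# YOUR_PASSWORD is a placeholder; the option lines mirror the 'inp' block in the log
cat <<'EOF' | /usr/sbin/fence_ipmilan
agent=fence_ipmilan
ipaddr=192.168.2.2
login=Administrator
action=status
passwd=YOUR_PASSWORD
power_wait=60
lanplus=1
EOF
echo "fence agent exit code: $?"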

Re: [ovirt-users] Can not kill vm

2015-06-10 Thread Dan Kenigsberg
On Wed, Jun 10, 2015 at 06:02:01PM +0800, 肖力 wrote:
 Hi My  versions of your kernel, qemu-kvm, libvirt, and vdsm is:

Thanks. I don't really have a clue, but I have a few further questions that
may help qemu/libvirt identify the bug:

 Kernel 2.6.32-504.16.2.el6.x86_64
 qemu-kvm qemu-kvm qemu-kvm-rhev-0.12.1.2-2.448.el6_6.2
 libvirt 0.10.2

Can you share the complete `rpm -q libvirt` output?

 vdsm  3.5.2 

This is an ovirt version, not a vdsm version.

Can you see something interesting in libvirtd.log? Can you share
/var/log/libvirt/qemu/yourvm.log ?

Does this happen with more than one VM? On different hosts?


 vm xml: [quoted VM XML snipped; the full dump is in the original message below]

Re: [ovirt-users] Can not kill vm

2015-06-10 Thread 肖力
Hi, my versions of the kernel, qemu-kvm, libvirt, and vdsm are:
Kernel 2.6.32-504.16.2.el6.x86_64
qemu-kvm: qemu-kvm-rhev-0.12.1.2-2.448.el6_6.2
libvirt 0.10.2
vdsm 3.5.2
vm xml:


<domain type='kvm' id='7'>
  <name>ubuntu1410-72</name>
  <uuid>4db31f4b-db2e-4b26-b23a-52ca62edb945</uuid>
  <memory unit='KiB'>8388608</memory>
  <currentMemory unit='KiB'>8388608</currentMemory>
  <vcpu placement='static' current='8'>16</vcpu>
  <cputune>
    <shares>1020</shares>
    <period>12500</period>
    <quota>12500</quota>
  </cputune>
  <sysinfo type='smbios'>
    <system>
      <entry name='manufacturer'>oVirt</entry>
      <entry name='product'>oVirt Node</entry>
      <entry name='version'>6-6.el6.centos.12.2</entry>
      <entry name='serial'>--3032-3142-413956574530</entry>
      <entry name='uuid'>4db31f4b-db2e-4b26-b23a-52ca62edb945</entry>
    </system>
  </sysinfo>
  <os>
    <type arch='x86_64' machine='rhel6.5.0'>hvm</type>
    <smbios mode='sysinfo'/>
  </os>
  <features>
    <acpi/>
  </features>
  <cpu mode='custom' match='exact'>
    <model fallback='allow'>Penryn</model>
    <topology sockets='16' cores='1' threads='1'/>
  </cpu>
  <clock offset='variable' adjustment='0' basis='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source startupPolicy='optional'/>
      <target dev='hdc' bus='ide'/>
      <readonly/>
      <serial></serial>
      <alias name='ide0-1-0'/>
      <address type='drive' controller='0' bus='1' target='0' unit='0'/>
    </disk>
    <disk type='block' device='disk' snapshot='no'>
      <driver name='qemu' type='raw' cache='none' error_policy='stop' io='native'/>
      <source dev='/rhev/data-center/d8dadd26-ce25-4d46-8e4a-b033cc01415f/4a12bcb7-1498-4321-bdd1-37481b106f11/images/58e47d6d-4202-40d6-a224-daddbe232deb/003af864-5ff8-46f2-96e8-8327d3d91eb7'/>
      <target dev='vda' bus='virtio'/>
      <serial>58e47d6d-4202-40d6-a224-daddbe232deb</serial>
      <boot order='1'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
    <controller type='scsi' index='0' model='virtio-scsi'>
      <alias name='scsi0'/>
      <address type='pci' domain='0x' bus='0x00' slot='0x03' function='0x0'/>
    </controller>
    <controller type='virtio-serial' index='0' ports='16'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x' bus='0x00' slot='0x04' function='0x0'/>
    </controller>
    <controller type='usb' index='0'>
      <alias name='usb0'/>
      <address type='pci' domain='0x' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='ide' index='0'>
      <alias name='ide0'/>
      <address type='pci' domain='0x' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <interface type='bridge'>
      <mac address='00:1a:4a:43:e8:0c'/>
      <source bridge='ovirtmgmt'/>
      <bandwidth>
      </bandwidth>
      <target dev='vnet6'/>
      <model type='virtio'/>
      <filterref filter='vdsm-no-mac-spoofing'/>
      <link state='up'/>
      <alias name='net0'/>
      <address type='pci' domain='0x' bus='0x00' slot='0x07' function='0x0'/>
    </interface>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channels/4db31f4b-db2e-4b26-b23a-52ca62edb945.com.redhat.rhevm.vdsm'/>
      <target type='virtio' name='com.redhat.rhevm.vdsm'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channels/4db31f4b-db2e-4b26-b23a-52ca62edb945.org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <alias name='channel1'/>
      <address type='virtio-serial' controller='0' bus='0' port='2'/>
    </channel>
    <channel type='spicevmc'>
      <target type='virtio' name='com.redhat.spice.0'/>
      <alias name='channel2'/>
      <address type='virtio-serial' controller='0' bus='0' port='3'/>
    </channel>
    <input type='mouse' bus='ps2'/>
    <graphics type='spice' tlsPort='5906' autoport='yes' keymap='en-us' passwdValidTo='1970-01-01T00:00:01'>
      <listen type='network' address='10.20.122.14' network='vdsm-ovirtmgmt'/>
      <channel name='main' mode='secure'/>
      <channel name='display' mode='secure'/>
      <channel name='inputs' mode='secure'/>
      <channel name='cursor' mode='secure'/>
      <channel name='playback' mode='secure'/>
      <channel name='record' mode='secure'/>
      <channel name='smartcard' mode='secure'/>
      <channel name='usbredir' mode='secure'/>
    </graphics>
    <video>
      <model type='qxl' ram='65536' vram='32768' heads='1'/>
      <alias name='video0'/>
      <address type='pci' domain='0x' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address 

[ovirt-users] [Centos7x64] [Ovirt 3.5.2 hosted engine] hypervisor hang when rebooted

2015-06-10 Thread wodel youchi
Hi,

Centos7x64, Ovirt 3.5.2 hosted engine : all updated.

Two hypervisors HP DL380p

I have a strange behaviour on my hypervisors: whenever I reboot one of them,
it hangs.

It hangs on these lines:

hpwdt unexpected close not stopping watchdog
watchdog multiplexing stopped


On the iLO4 GUI I get this error:

An Unrecoverable System Error (NMI) has occurred (System error code
0x002B, 0x)


Thanks.


Re: [ovirt-users] Can not kill vm

2015-06-10 Thread Dan Kenigsberg
On Wed, Jun 10, 2015 at 09:06:43AM +0800, 肖力 wrote:
 Hi,
 I can not kill a VM with oVirt.
 When I power off the VM from the oVirt web UI, it fails.
 I also tried `virsh destroy` on the VM; it failed too, with this output:
 error: Failed to destroy domain 1
 error: operation failed: failed to kill qemu process with SIGTERM
 What can I do? Thanks!

It smells like a qemu bug. We have recently seen a similar behavior while
attaching host devices to a VM, but I assume this is not your case.

Please share the versions of your kernel, qemu-kvm, libvirt, and vdsm.
Include your

   virsh -r dumpxml vm_name

Does this happen with more than one VM? On different hosts?

Dan.


Re: [ovirt-users] [Centos7.1x64] [Ovirt 3.5.2] Test fence : Power management test failed for Host hosted_engine1 Done

2015-06-10 Thread Martin Perina
Hi,

I just installed engine 3.5.2 on CentOS 7.1, added 2 CentOS 7.1 hosts (both
with ipmilan fence devices) and everything worked fine. I also tried to add
the options

  lanplus=1, power_wait=60

and even with them, getting the power status of the hosts worked fine.

So could you please check the settings of your hosts in webadmin again?

 hosted_engine1
   PM address: IP address of ILO4 interface of the host hosted_engine1


 hosted_engine2
   PM address: IP address of ILO4 interface of the host hosted_engine2

If the IP addresses are entered correctly, please enable DEBUG logging for the
engine, execute a test of the PM settings for one host, and attach the engine
log and the VDSM logs from both hosts.

Thanks

Martin Perina


- Original Message -
 From: wodel youchi wodel.you...@gmail.com
 To: users Users@ovirt.org
 Sent: Tuesday, June 9, 2015 2:41:02 PM
 Subject: [ovirt-users] [Centos7.1x64] [Ovirt 3.5.2] Test fence : Power 
 management test failed for Host hosted_engine1
 Done
 
 Hi,
 
 I have a weird problem with fencing
 
 I have a cluster of two HP DL380p G8 (ILO4)
 
 Centos7.1x64 and oVirt 3.5.2 ALL UPDATED
 
 I configured fencing first with ilo4 then ipmilan
 
 When testing fence from the engine I get : Succeeded, Unknown
 
 And in alerts tab I get : Power management test failed for Host
 hosted_engine1 Done (the same for host2)
 
 I tested with fence_ilo4 and fence_ipmilan and they report the result
 correctly
 
 # fence_ipmilan -P -a 192.168.2.2 -o status -l Administrator -p ertyuiop -v
 Executing: /usr/bin/ipmitool -I lanplus -H 192.168.2.2 -U Administrator -P
 ertyuiop -p 623 -L ADMINISTRATOR chassis power status
 
 0 Chassis Power is on
 
 
 Status: ON
 
 
 # fence_ilo4 -l Administrator -p ertyuiop -a 192.168.2.2 -o status -v
 Executing: /usr/bin/ipmitool -I lanplus -H 192.168.2.2 -U Administrator -P
 ertyuiop -p 623 -L ADMINISTRATOR chassis power status
 
 0 Chassis Power is on
 
 
 Status: ON
 
 --
 These are the options passed to fence_ipmilan (I tested with the options and
 without them)
 
 lanplus=1, power_wait=60
 
 
 This is the engine log:
 
 2015-06-09 13:35:29,287 INFO [org.ovirt.engine.core.bll.FenceExecutor]
 (ajp--127.0.0.1-8702-7) Using Host hosted_engine_2 from cluster Default as
 proxy to execute Status command on Host
 2015-06-09 13:35:29,289 INFO [org.ovirt.engine.core.bll.FenceExecutor]
 (ajp--127.0.0.1-8702-7) Executing Status Power Management command, Proxy
 Host:hosted_engine_2, Agent:ipmilan, Target Host:, Management
 IP:192.168.2.2, User:Administrator, Options: power_wait=60,lanplus=1,
 Fencing policy:null
 2015-06-09 13:35:29,306 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand]
 (ajp--127.0.0.1-8702-7) START, FenceVdsVDSCommand(
 HostName = hosted_engine_2,
 HostId = 0192d1ac-b905-4660-b149-4bef578985dd,
 targetVdsId = cf2d1260-7bb3-451a-9cd7-80e6a0ede52a,
 action = Status,
 ip = 192.168.2.2,
 port = ,
 type = ipmilan,
 user = Administrator,
 password = **,
 options = ' power_wait=60,lanplus=1',
 policy = 'null'), log id: 24ce6206
 2015-06-09 13:35:29,516 WARN
 [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
 (ajp--127.0.0.1-8702-7) Correlation ID: null, Call Stack: null, Custom Event
 ID: -1, Message: Power Management test failed for Host hosted_engine_1.Done
 2015-06-09 13:35:29,516 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand]
 (ajp--127.0.0.1-8702-7) FINISH, FenceVdsVDSCommand, return: Test Succeeded,
 unknown , log id: 24ce6206
 
 
 and here the vdsm log from the proxy
 
 JsonRpcServer::DEBUG::2015-06-09
 13:37:52,461::__init__::506::jsonrpc.JsonRpcServer::(serve_requests) Waiting
 for request
 Thread-131907::DEBUG::2015-06-09 13:37:52,463::API::1209::vds::(fenceNode)
 fenceNode(addr=192.168.2.2,port=,agent=ipmilan,user=Administrator,passwd=,action=status,secure=False,options=
 power_wait=60
 lanplus=1,policy=None)
 Thread-131907::DEBUG::2015-06-09 13:37:52,463::utils::739::root::(execCmd)
 /usr/sbin/fence_ipmilan (cwd None)
 Thread-131907::DEBUG::2015-06-09 13:37:52,533::utils::759::root::(execCmd)
 FAILED: err = 'Failed: Unable to obtain correct plug status or plug is not
 available\n\n\n'; rc = 1
 Thread-131907::DEBUG::2015-06-09 13:37:52,533::API::1164::vds::(fence) rc 1
 inp agent=fence_ipmilan
 ipaddr=192.168.2.2
 login=Administrator
 action=status
 passwd=
 power_wait=60
 lanplus=1 out [] err ['Failed: Unable to obtain correct plug status or plug
 is not available', '', '']
 Thread-131907::DEBUG::2015-06-09 13:37:52,533::API::1235::vds::(fenceNode) rc
 1 in agent=fence_ipmilan
 ipaddr=192.168.2.2
 login=Administrator
 action=status
 passwd=
 power_wait=60
 lanplus=1 out [] err [' Failed: Unable to obtain correct plug status or
 plug is not available ', '', '']
 Thread-131907::DEBUG::2015-06-09
 13:37:52,534::stompReactor::163::yajsonrpc.StompServer::(send) Sending
 response
 Detector thread::DEBUG::2015-06-09
 

[ovirt-users] ovirt webadmin errors

2015-06-10 Thread jazzman
Hi guys, 
I need to build an oVirt-based environment using a couple of old servers - for now
only for testing.
For the controller I chose a machine with 3 GB of RAM and an Intel Xeon 2.8 GHz.
I know this is below the requirements, but in this environment I only have a small
number of physical servers, e.g.:
1x controller
2x hypervisors
2x glusters
After installing, everything looks okay until the first logon to webadmin.
When I try to define the first virtual datacenter the web UI works really slowly.
It is not possible to switch between any tabs. In the end I get a 503 error.
I thought it was a browser issue. I checked this using Ubuntu + Chromium +
Firefox from the repo, and I also tried an older version of Firefox.
Windows 7 + Firefox - the same problem.
Does anybody know how to resolve the problem?
Thanks 
Greg K 


Re: [ovirt-users] ovirt webadmin errors

2015-06-10 Thread Yedidyah Bar David
- Original Message -
 From: jazz...@go2.pl jazz...@o2.pl
 To: users@ovirt.org
 Sent: Wednesday, June 10, 2015 3:04:08 PM
 Subject: [ovirt-users] ovirt webadmin errors
 
 Hi guys,
 I need to build a ovirt based environment using couple old servers - for now
 only for testing.
 For controler I choose machine with 3GB of ram and Intel Xeon 2.8 Ghz.
 I know this is below requirements but in this environment I have smal amount
 of physical servers eg:
 1x controller
 2x hypervisors
 2x glusters
 
 After instaling, everything looks okay until the first logon to webadmin.
 When I try to define first virtual datacenter webui works realy slow. It is
 not possible to switch beetwen any tabs. At the end I received a 503 error.
 I thought that was a my browser issue. I check this using Ubuntu + Chromium +
 Firefox from repo, also I try a older version of Firefox.
 Windows 7 + Firefox - was this same problems.
 
 Does anybody know how to resolve the problem ?

Please post relevant logs - check /var/log/ovirt-engine and /var/log/httpd.
Which oVirt version? Which OS on each machine?

Best,
-- 
Didi


Re: [ovirt-users] ovirt webadmin errors

2015-06-10 Thread Alexander Wels
On Wednesday, June 10, 2015 02:04:08 PM jazz...@go2.pl wrote:
 Hi guys, 
 I need to build a ovirt based environment using couple old servers - for now
 only for testing.  For controler I choose machine with 3GB of ram and Intel
 Xeon 2.8 Ghz. I know this is below requirements but in this environment I
 have smal amount of physical servers eg: 1x controller
 2x hypervisors 
 2x glusters
 After instaling, everything looks okay until the first logon to webadmin. 
 When I try to define first virtual datacenter webui works realy slow. It is
 not possible to switch beetwen any tabs. At the end I received a 503
 error.  I thought that was a my browser issue. I check this using Ubuntu +
 Chromium + Firefox from repo, also I try a older version of Firefox.
 Windows 7 + Firefox -  was this same problems.
 Does anybody know how to resolve the problem ? 
 Thanks 
 Greg K 

Did you define proper DNS, or at least define all the hosts and the engine in the
engine's /etc/hosts? Sometimes it is really slow because it is trying to resolve
names and can't.
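
For example, /etc/hosts entries on the engine machine could look roughly like
this (hypothetical names and addresses, adjust to your environment):

192.168.1.10   engine.example.lan    engine
192.168.1.11   hyper1.example.lan    hyper1
192.168.1.12   hyper2.example.lan    hyper2
192.168.1.21   gluster1.example.lan  gluster1
192.168.1.22   gluster2.example.lan  gluster2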


Re: [ovirt-users] ovirt webadmin errors

2015-06-10 Thread jazzman
Wow, Alexander hit the jackpot.
I had a stupid typo in the DNS zone. Now it works fine.

Thanks for the reply.
Greetings from Poland.

Greg

On 10 June 2015 at 14:34, Alexander Wels aw...@redhat.com wrote:

 
  
 
 Did you define a proper DNS or at least define all the hosts and engine in 
 the 
 engine /etc/hosts. Sometimes it is really slow because it is trying to 
 resolve 
 names and can't.







Re: [ovirt-users] [Centos7.1x64] [Ovirt 3.5.2] Test fence : Power management test failed for Host hosted_engine1 Done

2015-06-10 Thread wodel youchi
Hi,

Yes, fence_ipmilan was executed from the other host (hypervisor) and
vice versa.

The command line works fine; the engine GUI shows the error message.

2015-06-09 15:11 GMT+01:00 wodel youchi wodel.you...@gmail.com:

 Hi again,

 No I didn't execute the fence command from the engine, I did it from the
 other host (hypervisor) and verse-versa, and I got the same results.

 from hyper1 -> hyper2 and from hyper2 -> hyper1


 the fence_ipmilan works in the shell, but I still have the error on the
 Engine GUI

 thanks

 2015-06-09 15:00 GMT+01:00 Sven Kieske s.kie...@mittwald.de:


 AFAIK fencing from engine is not supported?

 See my bug report: https://bugzilla.redhat.com/show_bug.cgi?id=1054778

 afaik there is no RFE yet to allow fencing from engine host.


 On 09/06/15 15:34, Martin Perina wrote:
  Hi,
 
  engine uses one of the existing host as fence proxy and fence
  script is executed on that host.
 
  Did you try to execute fence_ipmilan on engine host? If so, could
  you please try to execute the same command on the host
  hosted_engine_2 (as I see in your logs that this host was selected
  as fence proxy)?
 
  Thanks
 
  Martin Perina

 - --
 Mit freundlichen Grüßen / Regards

 Sven Kieske

 Systemadministrator
 Mittwald CM Service GmbH & Co. KG
 Königsberger Straße 6
 32339 Espelkamp
 T: +49-5772-293-100
 F: +49-5772-293-333
 https://www.mittwald.de
 Geschäftsführer: Robert Meyer
 St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
 Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad
 Oeynhausen





Re: [ovirt-users] [Centos7x64] [Ovirt 3.5.2 hosted engine] hypervisor hang when rebooted

2015-06-10 Thread wodel youchi
Hi again,

After a lot of reading, I found this:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1398497

I added intremap=no_x2apic_optout to the kernel boot parameters, but it didn't
change anything.
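
(For reference, one common way to add such a parameter on CentOS 7 - assuming
the stock grub2 setup - is:

grubby --update-kernel=ALL --args="intremap=no_x2apic_optout"

followed by a reboot.)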

2015-06-10 12:06 GMT+01:00 wodel youchi wodel.you...@gmail.com:

 Hi,

 Centos7x64, Ovirt 3.5.2 hosted engine : all updated.

 Two hypervisors HP DL380p

 I have a strange behaviour on my hypervisors, whenever I reboot one of
 them it hangs.

 It hangs in this line :

 hpwdt unexpected close not stopping watchdog
 watchdog multiplexing stopped


 on the ILO4 GUI I have this error

 An Unrecoverable System Error (NMI) has occurred (System error code
 0x002B, 0x)


 Thanks.



[ovirt-users] How do I get discard working?

2015-06-10 Thread Chris Jones - BookIt . com Systems Administrator

oVirt Node - 3.5 - 0.999.201504280931.el7.centos

Using our shared storage from bare metal (stock CentOS 7) over iSCSI, I can
successfully issue fstrim commands. With oVirt, at the VM level, even
with direct LUNs, trim commands are not supported, despite having the LVM
config in the VMs set up to allow it.
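
(For context, the guest-side LVM setting usually meant here is issue_discards
in /etc/lvm/lvm.conf, e.g.:

devices {
    issue_discards = 1
}

but that only affects discards issued by LVM itself; the qemu/oVirt
pass-through is the part in question in this thread.)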


Thanks



Re: [ovirt-users] How do I get discard working?

2015-06-10 Thread Amador Pahim

On 06/10/2015 03:24 PM, Fabian Deutsch wrote:

- Original Message -

oVirt Node - 3.5 - 0.999.201504280931.el7.centos

Using our shared storage via baremetal (stock CentOS 7) - iscsi, I can
successfully issue fstrim commands. With oVirt at the VM level, even
with direct LUNS, trim commands are not supported despite having the LVM
config in the VMs set up to allow it.

Hey,

IIUIC you try to get discard working for VMs? That means that if fstrim is
used inside the VM, that it is getting passed down?

The command line needed for qemu to support discards is:

$ qemu … -drive if=virtio,cache=unsafe,discard,file=disk …

I'm not sure which qemu disk drivers/busses support this, but at least virtio 
does so.
I'm using it for development.

You could try a vdsm hook to modify the qemu command which is called when the 
VM is spawned.

Let me know if you can come up with a hook to realize this!


There's this hook in code review intended to do so:
https://gerrit.ovirt.org/#/c/29770/
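
A minimal before_vm_start hook along those lines could look roughly like this
(a sketch only, assuming VDSM's hooking module; see the gerrit change above for
the real thing, and note it only helps on libvirt/qemu versions that honour the
discard attribute):

#!/usr/bin/env python
# before_vm_start hook sketch: add discard='unmap' to every disk <driver> element.
import hooking

domxml = hooking.read_domxml()
for disk in domxml.getElementsByTagName('disk'):
    for driver in disk.getElementsByTagName('driver'):
        driver.setAttribute('discard', 'unmap')
hooking.write_domxml(domxml)

Dropped into /usr/libexec/vdsm/hooks/before_vm_start/ and made executable, it
would be applied to every VM started on that host.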



Greetings
fabian




Re: [ovirt-users] How do I get discard working?

2015-06-10 Thread Chris Adams
Once upon a time, Fabian Deutsch fdeut...@redhat.com said:
 I'm not sure which qemu disk drivers/busses support this, but at least virtio 
 does so.
 I'm using it for development.

Using CentOS 7, I think qemu-kvm only supports discard on virtio-scsi
drives using the raw format.  Discard on qcow2 images is supported for
newer versions of qemu-kvm.

oVirt uses virtio by default; you can switch to virtio-scsi.  However,
if you use thin-provisioned disks, oVirt uses qcow2 format, so discard
is not supported with CentOS 7's version of qemu-kvm.
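
As an illustration, a raw virtio-scsi disk with discard enabled would carry a
driver line roughly like this in the libvirt XML (paths are placeholders;
stock oVirt does not generate the discard attribute without a hook):

    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none' discard='unmap'/>
      <source dev='/dev/mapper/example-lun'/>
      <target dev='sda' bus='scsi'/>
    </disk>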

-- 
Chris Adams c...@cmadams.net


[ovirt-users] Unable to use the SPICE HTML5 tool

2015-06-10 Thread Nicolás

Hi,

I'm using oVirt 3.5.2 on a CentOS box. As far as the engine goes 
everything is working fine, except that I can't start the SPICE HTML5 
tool for any of the installed machines I have.


I've installed the oVirt guest agent on the VM side, websocket-proxy is 
running and port 6100 is listening on the engine box:


   # systemctl status ovirt-websocket-proxy
   ovirt-websocket-proxy.service - oVirt Engine websockets proxy
   Loaded: loaded
   (/usr/lib/systemd/system/ovirt-websocket-proxy.service; enabled)
   Active: active (running)

   # netstat -atpn | grep 6100
   tcp        0      0 0.0.0.0:6100      0.0.0.0:*      LISTEN      7227/python


Also, I imported the CA cert (https://fqdn/ca.crt) into the browser.
However, once I run the SPICE HTML5 client from the user portal, all I
get is an empty grey square with the two "Send ctrl+alt+delete" and
"Toggle messages output" buttons at the bottom. There is nothing in the logs
about this issue. I tried to run it both with Chromium and Firefox on a
Linux box.
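
(One extra check that sometimes helps here - assuming the default websocket
proxy port and CA location - is to confirm that the browser-facing TLS endpoint
of the proxy answers and presents a certificate signed by the engine CA:

   openssl s_client -connect <engine-fqdn>:6100 -CAfile /etc/pki/ovirt-engine/ca.pem </dev/null

If that handshake fails, the browser's secure websocket connection will likely
fail silently too.)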


Is there anything that I am missing? I've run out of ideas...

Thanks.

Nicolás




Re: [ovirt-users] How do I get discard working?

2015-06-10 Thread Fabian Deutsch
- Original Message -
 oVirt Node - 3.5 - 0.999.201504280931.el7.centos
 
 Using our shared storage via baremetal (stock CentOS 7) - iscsi, I can
 successfully issue fstrim commands. With oVirt at the VM level, even
 with direct LUNS, trim commands are not supported despite having the LVM
 config in the VMs set up to allow it.

Hey,

IIUIC you try to get discard working for VMs? That means that if fstrim is
used inside the VM, that it is getting passed down?

The command line needed for qemu to support discards is:

$ qemu … -drive if=virtio,cache=unsafe,discard,file=disk …

I'm not sure which qemu disk drivers/busses support this, but at least virtio 
does so.
I'm using it for development.

You could try a vdsm hook to modify the qemu command which is called when the 
VM is spawned.

Let me know if you can come up with a hook to realize this!

Greetings
fabian


Re: [ovirt-users] Can not kill vm

2015-06-10 Thread 肖力
OK, thanks!
For now the only thing I can do is reboot the host; I will wait for a fix for this.



At 2015-06-10 19:23:11, Dan Kenigsberg dan...@redhat.com wrote:
On Wed, Jun 10, 2015 at 06:02:01PM +0800, 肖力 wrote:
 Hi My  versions of your kernel, qemu-kvm, libvirt, and vdsm is:

Thanks. I don't really have a clue, but I have further question, that
may help qemu/libvirt identify the bug:

 Kernel 2.6.32-504.16.2.el6.x86_64
 qemu-kvm qemu-kvm qemu-kvm-rhev-0.12.1.2-2.448.el6_6.2
 libvirt 0.10.2

can you share the complete `rpm -q libvirt`?

 vdsm  3.5.2 

This is an ovirt version, not a vdsm version.

Can you see something interesting in libvirtd.log? Can you share
/var/log/libvirt/qemu/yourvm.log ?

Does this happen with more than one VM? On different hosts?


 vm xml: [quoted VM XML snipped; see the full dump in the original message earlier in this digest]