[ovirt-users] Cannot start VM - pauses due to storage error

2014-12-27 Thread Brent Hartzell
Hello,

 

All of a sudden we started getting the errors below. We can't start VMs; they
immediately go into a paused state.

Thread-51856::DEBUG::2014-12-27
09:56:08,812::vm::486::vm.Vm::(_getUserCpuTuneInfo)
vmId=`e3d75e55-2b41-4f0b-8d2a-16f8fde2ba0d`::Domain Metadata is not set

Thread-51858::DEBUG::2014-12-27
09:56:11,953::vm::486::vm.Vm::(_getUserCpuTuneInfo)
vmId=`e3d75e55-2b41-4f0b-8d2a-16f8fde2ba0d`::Domain Metadata is not set

Thread-51860::DEBUG::2014-12-27
09:56:12,856::__init__::467::jsonrpc.JsonRpcServer::(_serveRequest) Calling
'VM.cont' in bridge with {u'vmID': u'e3d75e55-2b41-4f0b-8d2a-16f8fde2ba0d'}

libvirtEventLoop::DEBUG::2014-12-27
09:56:13,016::vm::5461::vm.Vm::(_onLibvirtLifecycleEvent)
vmId=`e3d75e55-2b41-4f0b-8d2a-16f8fde2ba0d`::event Resumed detail 0 opaque
None

libvirtEventLoop::DEBUG::2014-12-27
09:56:13,021::vm::5461::vm.Vm::(_onLibvirtLifecycleEvent)
vmId=`e3d75e55-2b41-4f0b-8d2a-16f8fde2ba0d`::event Resumed detail 0 opaque
None

libvirtEventLoop::INFO::2014-12-27
09:56:13,026::vm::4780::vm.Vm::(_onIOError)
vmId=`e3d75e55-2b41-4f0b-8d2a-16f8fde2ba0d`::abnormal vm stop device
virtio-disk0 error eother

libvirtEventLoop::DEBUG::2014-12-27
09:56:13,029::vm::5461::vm.Vm::(_onLibvirtLifecycleEvent)
vmId=`e3d75e55-2b41-4f0b-8d2a-16f8fde2ba0d`::event Suspended detail 2 opaque
None

Thread-51863::DEBUG::2014-12-27
09:56:15,072::vm::486::vm.Vm::(_getUserCpuTuneInfo)
vmId=`e3d75e55-2b41-4f0b-8d2a-16f8fde2ba0d`::Domain Metadata is not set

Thread-51869::DEBUG::2014-12-27
09:56:18,120::vm::486::vm.Vm::(_getUserCpuTuneInfo)
vmId=`e3d75e55-2b41-4f0b-8d2a-16f8fde2ba0d`::Domain Metadata is not set

Thread-51872::DEBUG::2014-12-27
09:56:21,154::vm::486::vm.Vm::(_getUserCpuTuneInfo)
vmId=`e3d75e55-2b41-4f0b-8d2a-16f8fde2ba0d`::Domain Metadata is not set
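The decisive line above is the `_onIOError` event: device virtio-disk0 stopped with error `eother`, which generally means libvirt/qemu hit an I/O failure it could not classify further, typically an underlying storage fault (e.g. a Gluster mount gone read-only or unreachable). A quick way to pull the device and error code out of such a vdsm.log line (a throwaway sed sketch; field positions are assumed from the message format above):

```shell
# Parse a vdsm abnormal-stop message into device and error code.
# Format assumed from the log above: "...abnormal vm stop device <dev> error <err>"
line='vmId=`e3d75e55-2b41-4f0b-8d2a-16f8fde2ba0d`::abnormal vm stop device virtio-disk0 error eother'
device=$(printf '%s\n' "$line" | sed -n 's/.*abnormal vm stop device \([^ ]*\) error.*/\1/p')
error=$(printf '%s\n' "$line" | sed -n 's/.*error \([^ ]*\)$/\1/p')
echo "$device $error"   # virtio-disk0 eother
```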

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] all hosts non-operational

2014-12-22 Thread Brent Hartzell
Hello,

 

After testing replacement of a failed Gluster brick (shared oVirt/Gluster), ALL
hosts in the cluster go non-responsive, storage drops off, etc. Now gluster
peer status fails, we can't set any volume options, the volume randomly drops
out of oVirt (it was created from oVirt), and the oVirt dashboard log shows an
entry saying the volume was deleted (but it is still there). Any gluster
commands just hang. The combination of oVirt & Gluster seems stable until
there's a problem, then literally everything grinds to a halt: all VMs go down,
the datacenter & hosts go non-responsive, and the whole thing is broken. Any
ideas on what we should be looking for?
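When every gluster CLI command hangs, the usual first check is whether glusterd itself is responsive on each peer before touching the volume. A minimal triage sketch (the log path is the EL/Fedora default; run it on each peer):

```shell
# First-pass triage when `gluster peer status` and volume commands hang.
# Assumes the default glusterd log location on EL/Fedora hosts.
gluster_triage() {
  systemctl status glusterd --no-pager          # is the management daemon alive?
  tail -n 50 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
  # If glusterd itself is wedged, restarting it (per peer, one at a time)
  # often restores `gluster peer status` without touching the bricks:
  #   systemctl restart glusterd
}
```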



[ovirt-users] Autostart vm's at host boot on local storage

2014-12-22 Thread Brent Hartzell
Can this be done? We hit a roadblock with Gluster and will be using local
storage while testing Gluster. The only problem: if a host reboots, the VMs on
that host do not come back up. Is there a way to have oVirt/libvirt start all
VMs residing on the local storage?
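oVirt (as of the 3.x series) has no built-in VM autostart, and `virsh autostart` does not help because vdsm defines its domains as transient. One workaround sketch is a boot-time script that asks the engine to start the VMs over the REST API; the engine URL, credentials, and VM ID below are all placeholders:

```shell
# Start a VM by ID through the oVirt engine REST API (3.x style).
# admin@internal/password and engine.example.com are placeholders.
start_vm() {
  curl -s -k -u 'admin@internal:password' \
    -H 'Content-Type: application/xml' -d '<action/>' \
    "https://engine.example.com/api/vms/$1/start"
}
# Example call from a boot script (hypothetical ID):
#   start_vm e3d75e55-2b41-4f0b-8d2a-16f8fde2ba0d
```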


[ovirt-users] CPU Type

2014-12-20 Thread Brent Hartzell
Hello,

 

Is there a way to add Xeon or another class of CPU Type to oVirt? We have
some test hosts, which use a combo of the following CPU types:

 

Xeon L5420

Xeon E5430

Xeon E5420

 

The only two CPU Types that will work in oVirt are Conroe & Penryn. Inside a
VM, it reports Core 2 Duo.

 

Host reports:

model name  : Intel(R) Xeon(R) CPU   E5430  @ 2.66GHz

 

 

VM reports:

model name  : Intel Core 2 Duo P9xxx (Penryn Class Core 2)

Is there a way to have the VM report the correct CPU? It doesn't appear to
cause any performance or other issues; it seems to be just a display issue.
My concern, though, is that we may not be able to add servers with different
Intel processors to the same cluster, for example new hosts with E5- or
E3- processors. Can someone confirm this wouldn't be an issue?
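The cluster CPU type is a lowest-common-denominator baseline, not the model string qemu reports, so a guest showing "Core 2 Duo" is expected at the Penryn level. Newer E5/E3 hosts are feature supersets of Penryn, so they should be able to join the same cluster. A rough capability check is to look for the distinguishing flag in /proc/cpuinfo (the flag-to-level mapping here is an assumption; verify against your engine's CPU definitions):

```shell
# Rough check: Penryn adds SSE4.1 over Conroe, so the sse4_1 flag suggests
# the host can run at the Penryn cluster level. Mapping is an assumption.
if grep -q sse4_1 /proc/cpuinfo; then
  level="Penryn-capable"
else
  level="Conroe at best"
fi
echo "$level"
```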

 



[ovirt-users] Cannot activate storage domain

2014-12-17 Thread Brent Hartzell
Have the following:

 

6 hosts - virt + Gluster shared

 

Gluster volume is distributed-replicate - replica 2

 

Shutting down servers one at a time works, except for one brick. If we shut
down one specific brick (one brick per host), we're unable to activate the
storage domain. VMs that were actively running from other bricks continue
to run. Whatever was running from that specific brick fails to run, gets
paused, etc.

 

The error log shows the entry below. I'm not certain what it's saying is
read-only; nothing is read-only that I can find.

 

 

2014-12-17 19:57:13,362 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStatusVDSCommand]
(DefaultQuartzScheduler_Worker-47) [4e9290a2] Command
SpmStatusVDSCommand(HostName = U23.domainame.net, HostId =
0db58e46-68a3-4ba0-a8aa-094893c045a1, storagePoolId =
7ccd6ea9-7d80-4170-afa1-64c10c185aa6) execution failed. Exception:
VDSErrorException: VDSGenericException: VDSErrorException: Failed to
SpmStatusVDS, error = [Errno 30] Read-only file system, code = 100

2014-12-17 19:57:13,363 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData]
(DefaultQuartzScheduler_Worker-47) [4e9290a2] hostFromVds::selectedVds -
U23.domainname.net, spmStatus returned null!

 

 

According to oVirt/Gluster, if a brick goes down, the VM should be restartable
from another brick without issue. This does not appear to be the case. If we
take other bricks offline, everything works as expected. Something with this
specific brick causes everything to break, which then leaves any VMs that were
running from that brick unable to start.
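`[Errno 30]` is EROFS, so something in the SPM host's path really was read-only at that moment. With Gluster this is often the client mount (or a brick's local filesystem) flipping to read-only after an I/O error. A small sketch to check whether a given mountpoint carries the `ro` flag (the Gluster path under /rhev is vdsm's convention; the exact path below is hypothetical, adjust to your domain):

```shell
# Print "ro" if the given mountpoint is mounted read-only, nothing otherwise.
# Reads /proc/mounts: fields are device, mountpoint, fstype, options, ...
is_readonly() {
  awk -v m="$1" '$2 == m {
    n = split($4, opts, ",")
    for (i = 1; i <= n; i++) if (opts[i] == "ro") print "ro"
  }' /proc/mounts
}
# Hypothetical vdsm Gluster mount path:
is_readonly /rhev/data-center/mnt/glusterSD/host:_volume
```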



[ovirt-users] alternate method of fencing

2014-12-16 Thread Brent Hartzell
Is there a way to force oVirt to place a host into maintenance mode
automatically when a problem occurs? The problem we have is separate
networks for everything:

 

2 public nics which are bonded active/backup for internet access

4 bonded gigabit nics for our Gluster network

1 IPMI 

 

The IPMI is on a completely different network, not attached to anything else
other than a VPN.

 

Our cluster runs Gluster & hypervisor shared on the same hosts, which works
great; however, all are 1U servers, so we don't have room for another NIC to
dedicate to an IPMI network connection in addition to the IPMI card itself.
And if the PSU on a host fails, IPMI power management is useless anyway and
the VMs on that host will not be moved.

 

I've tested putting a host into maintenance mode with running VMs, and it will
auto-migrate the VMs without issue. Is there a way to make oVirt force a host
into maintenance mode if there's some sort of problem, or some other fencing
mechanism that will allow the VMs to be migrated to another host?
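Without working power management, one hedged option is to script the same "maintenance" action the UI performs: the oVirt 3.x REST API exposes it as a `deactivate` action on the host, which triggers live migration of its VMs. A sketch with placeholder URL and credentials (note the engine can't safely do this automatically without fencing, since it cannot tell a dead host from a network-partitioned one):

```shell
# Move a host into maintenance via the engine REST API (oVirt 3.x style);
# engine.example.com and the credentials are placeholders.
host_to_maintenance() {
  curl -s -k -u 'admin@internal:password' \
    -H 'Content-Type: application/xml' -d '<action/>' \
    "https://engine.example.com/api/hosts/$1/deactivate"
}
```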

 

 
