Re: [Users] error when creating a data center.

2012-04-17 Thread Livnat Peer
It's a known bug
https://bugzilla.redhat.com/show_bug.cgi?id=811462

Actually looks like this was solved today :)



On 16/04/12 14:08, ShaoHe Feng wrote:
 Hi all,

 There's something wrong with my oVirt engine.

 I want to create a Data Center, with the following arguments:
 Name                   Local
 Description
 Type                   Local on Host
 Compatibility Version  3.1
 Quota Mode             DISABLED

 Then I click OK, and a dialog box pops up (see the attached screenshot).
 The dialog box never closes, but it seems the local Data Center was
 created OK.

 Is this a bug, or something wrong with my oVirt engine?
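
 For what it's worth, one rough way to confirm from outside the UI whether
 the Data Center really was created is to query the REST API. This is only
 a sketch: the engine host, port and admin@internal password are
 placeholders, and the API layout may differ between versions.

   curl -k -u "admin@internal:PASSWORD" https://ENGINE-HOST:8443/api/datacenters

 If the new "Local" data center shows up in the returned list, only the
 dialog is stuck, not the creation itself.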

 1 attachment: cfbjcigj.png (105K)


 
 
 
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] Problem with live migration.

2012-04-17 Thread Martin Hovmöller
I can't live-migrate the VMs in my cluster:

2012-04-17 04:06:39,728 INFO  [org.ovirt.engine.core.bll.MigrateVmCommand]
(pool-5-thread-48) Running command: MigrateVmCommand internal: false.
Entities affected :  ID: ce9bb531-4d2a-4f5e-9935-3c8a4f5db94a Type: VM
2012-04-17 04:06:39,743 INFO
 [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (pool-5-thread-48)
START, MigrateVDSCommand(vdsId = f33ffc14-87ba-11e1-b610-e3aeca1e8008,
vmId=ce9bb531-4d2a-4f5e-9935-3c8a4f5db94a, srcHost=10.23.30.130,
dstVdsId=b4d329b4-87b6-11e1-ac0a-b70ec8cc50f0, dstHost=10.23.30.110:54321,
migrationMethod=ONLINE), log id: 7c7201be
2012-04-17 04:06:39,748 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
(pool-5-thread-48) VdsBroker::migrate::Entered
(vm_guid=ce9bb531-4d2a-4f5e-9935-3c8a4f5db94a, srcHost=10.23.30.130,
dstHost=10.23.30.110:54321,  method=online
2012-04-17 04:06:39,749 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
(pool-5-thread-48) START, MigrateBrokerVDSCommand(vdsId =
f33ffc14-87ba-11e1-b610-e3aeca1e8008,
vmId=ce9bb531-4d2a-4f5e-9935-3c8a4f5db94a, srcHost=10.23.30.130,
dstVdsId=b4d329b4-87b6-11e1-ac0a-b70ec8cc50f0, dstHost=10.23.30.110:54321,
migrationMethod=ONLINE), log id: 3c40c2de
2012-04-17 04:06:39,834 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
(pool-5-thread-48) FINISH, MigrateBrokerVDSCommand, log id: 3c40c2de
2012-04-17 04:06:39,840 INFO
 [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (pool-5-thread-48)
FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 7c7201be
2012-04-17 04:06:42,166 INFO
 [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(QuartzScheduler_Worker-18) vds::refreshVmList vm id
ce9bb531-4d2a-4f5e-9935-3c8a4f5db94a is migrating to vds rhevh1.domain
ignoring it in the refresh till migration is done
[...]
2012-04-17 04:07:33,852 INFO
 [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(QuartzScheduler_Worker-34) vds::refreshVmList vm id
ce9bb531-4d2a-4f5e-9935-3c8a4f5db94a status = Paused on vds rhevh1.domain
ignoring it in the refresh till migration is done
2012-04-17 04:07:37,407 ERROR
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(QuartzScheduler_Worker-71) Rerun vm ce9bb531-4d2a-4f5e-9935-3c8a4f5db94a.
Called from vds rhevh2.domain
2012-04-17 04:07:37,414 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
(pool-5-thread-47) START, MigrateStatusVDSCommand(vdsId =
f33ffc14-87ba-11e1-b610-e3aeca1e8008,
vmId=ce9bb531-4d2a-4f5e-9935-3c8a4f5db94a), log id: 8dfce1b
2012-04-17 04:07:37,496 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
(pool-5-thread-47) Failed in MigrateStatusVDS method
2012-04-17 04:07:37,497 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
(pool-5-thread-47) Error code createErr and error message
VDSGenericException: VDSErrorException: Failed to MigrateStatusVDS, error =
Error creating the requested virtual machine
2012-04-17 04:07:37,497 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
(pool-5-thread-47) Command
org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand return
value
 Class Name:
org.ovirt.engine.core.vdsbroker.vdsbroker.StatusOnlyReturnForXmlRpc
mStatus   Class Name:
org.ovirt.engine.core.vdsbroker.vdsbroker.StatusForXmlRpc
mCode 9
mMessage  Error creating the requested virtual machine


2012-04-17 04:07:37,497 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
(pool-5-thread-47) Vds: rhevh2.domain
2012-04-17 04:07:37,497 ERROR
[org.ovirt.engine.core.vdsbroker.VDSCommandBase] (pool-5-thread-47) Command
MigrateStatusVDS execution failed. Exception: VDSErrorException:
VDSGenericException: VDSErrorException: Failed to MigrateStatusVDS, error =
Error creating the requested virtual machine
2012-04-17 04:07:37,497 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
(pool-5-thread-47) FINISH, MigrateStatusVDSCommand, log id: 8dfce1b
2012-04-17 04:07:37,528 WARN  [org.ovirt.engine.core.bll.MigrateVmCommand]
(pool-5-thread-47) CanDoAction of action MigrateVm failed.
Reasons:ACTION_TYPE_FAILED_VDS_VM_CLUSTER,VAR__ACTION__MIGRATE,VAR__TYPE__VM


There is no problem with storage or anything like that. If I shut down
the VM I can start it on the other host without any problems whatsoever.
Do I need to do something to make live migration work? I'm trying to dig
through the logs on the hypervisors, but there's so much being logged there...
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Problem with live migration.

2012-04-17 Thread Itamar Heim

On 04/17/2012 12:47 PM, Martin Hovmöller wrote:

[...]


vdsm log from the host that failed to create the VM?
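
For reference, a rough way to pull that log (assuming vdsm's default
location, /var/log/vdsm/vdsm.log):

  # on the destination host (rhevh2.domain in the engine log above)
  grep ce9bb531-4d2a-4f5e-9935-3c8a4f5db94a /var/log/vdsm/vdsm.log | tail -n 100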
___
Users mailing list
Users@ovirt.org

Re: [Users] Problem with live migration.

2012-04-17 Thread Rami Vaknin

On 04/17/2012 03:45 PM, Martin Hovmöller wrote:



On Tue, Apr 17, 2012 at 2:11 PM, Itamar Heim ih...@redhat.com wrote:


On 04/17/2012 12:47 PM, Martin Hovmöller wrote:

[...]

Re: [Users] Booting oVirt node image 2.3.0, no install option

2012-04-17 Thread Joey Boggs

On 04/17/2012 09:45 AM, Adam vonNieda wrote:

Hi folks,

Still hoping someone can give me a hand with this. I can't install
ovirt-node 2.3.0 on a Dell C2100 server because it won't start the
graphical interface. I booted up a standard F16 image this morning, and
the graphical installer does start during that process. Logs are below.

Thanks very much,

   -Adam



/tmp/ovirt.log
==

/sbin/restorecon set context
/var/cache/yum->unconfined_u:object_r:rpm_var_cache_t:s0 failed:'Read-only
file system'
/sbin/restorecon reset /var/cache/yum context
unconfined_u:object_r:file_t:s0->unconfined_u:object_r:rpm_var_cache_t:s0
/sbin/restorecon reset /etc/sysctl.conf context
system_u:object_r:etc_runtime_t:s0->system_u:object_r:system_conf_t:s0
/sbin/restorecon reset /boot-kdump context
system_u:object_r:boot_t:s0->system_u:object_r:default_t:s0
2012-04-16 09:36:26,827 - INFO - ovirt-config-installer - live
device
/dev/sdb
2012-04-16 09:36:26,836 - DEBUG - ovirtfunctions - cat /proc/mounts|grep
-q none /live
2012-04-16 09:36:26,836 - DEBUG - ovirtfunctions -
2012-04-16 09:36:26,915 - DEBUG - ovirtfunctions - umount /live
2012-04-16 09:36:26,915 - DEBUG - ovirtfunctions -
2012-04-16 09:36:27,455 - ERROR - ovirtfunctions - Failed to mount_live()

/var/log/ovirt.log
==

Apr 16 09:35:53 Starting ovirt-early
oVirt Node Hypervisor release 2.3.0 (1.0.fc16)
Apr 16 09:35:53 Updating /etc/default/ovirt
Apr 16 09:35:54 Updating OVIRT_BOOTIF to ''
Apr 16 09:35:54 Updating OVIRT_INIT to ''
Apr 16 09:35:54 Updating OVIRT_UPGRADE to ''
Apr 16 09:35:54 Updating OVIRT_STANDALONE to '1'
Apr 16 09:35:54 Updating OVIRT_BOOTPARAMS to 'nomodeset
crashkernel=512M-2G:64M,2G-:128M elevator=deadline quiet rd_NO_LVM rhgb
rd.luks=0 rd.md=0 rd.dm=0'
Apr 16 09:35:54 Updating OVIRT_RHN_TYPE to 'classic'
Apr 16 09:35:54 Updating OVIRT_INSTALL to '1'
Apr 16 09:35:54 Updating OVIRT_ISCSI_INSTALL to '1'
Apr 16 09:36:08 Setting temporary admin password: F8Ax67kfRPSAw
Apr 16 09:36:09 Setting temporary root password: F8Ax67kfRPSAw
Apr 16 09:36:09 Skip runtime mode configuration.
Apr 16 09:36:09 Completed ovirt-early
Apr 16 09:36:09 Starting ovirt-awake.
Apr 16 09:36:09 Node is operating in unmanaged mode.
Apr 16 09:36:09 Completed ovirt-awake: RETVAL=0
Apr 16 09:36:09 Starting ovirt
Apr 16 09:36:09 Completed ovirt
Apr 16 09:36:10 Starting ovirt-post
Apr 16 09:36:20 Hardware virtualization detected
  Volume group HostVG not found
  Skipping volume group HostVG
Restarting network (via systemctl):  [  OK  ]
Apr 16 09:36:20 Starting ovirt-post
Apr 16 09:36:21 Hardware virtualization detected
  Volume group HostVG not found
  Skipping volume group HostVG
Restarting network (via systemctl):  [  OK  ]
Apr 16 09:36:22 Starting ovirt-cim
Apr 16 09:36:22 Completed ovirt-cim
WARNING: persistent config storage not available

/var/log/vdsm/vdsm.log
===

MainThread::INFO::2012-04-16 09:36:21,828::vdsm::71::vds::(run) I am the
actual vdsm 4.9-0
MainThread::DEBUG::2012-04-16
09:36:23,873::resourceManager::376::ResourceManager::(registerNamespace)
Registering namespace 'Storage'
MainThread::DEBUG::2012-04-16
09:36:23,874::threadPool::45::Misc.ThreadPool::(__init__) Enter -
numThreads: 10.0, waitTimeout: 3, maxTasks: 500.0
MainThread::DEBUG::2012-04-16
09:36:23,918::multipath::85::Storage.Misc.excCmd::(isEnabled)
'/usr/bin/sudo -n /bin/cat /etc/multipath.conf' (cwd None)
MainThread::INFO::2012-04-16 09:36:25,000::vdsm::71::vds::(run) I am the
actual vdsm 4.9-0
MainThread::DEBUG::2012-04-16
09:36:25,199::resourceManager::376::ResourceManager::(registerNamespace)
Registering namespace 'Storage'
MainThread::DEBUG::2012-04-16
09:36:25,200::threadPool::45::Misc.ThreadPool::(__init__) Enter -
numThreads: 10.0, waitTimeout: 3, maxTasks: 500.0
MainThread::DEBUG::2012-04-16
09:36:25,231::multipath::85::Storage.Misc.excCmd::(isEnabled)
'/usr/bin/sudo -n /bin/cat /etc/multipath.conf' (cwd None)
MainThread::DEBUG::2012-04-16
09:36:25,243::multipath::85::Storage.Misc.excCmd::(isEnabled) SUCCESS:
err  = '';rc  = 0
MainThread::DEBUG::2012-04-16
09:36:25,244::multipath::109::Storage.Multipath::(isEnabled) multipath
Defaulting to False
MainThread::DEBUG::2012-04-16
09:36:25,244::misc::487::Storage.Misc::(rotateFiles) dir: /etc,
prefixName: multipath.conf, versions: 5
MainThread::DEBUG::2012-04-16
09:36:25,244::misc::508::Storage.Misc::(rotateFiles) versions found: [0]
MainThread::DEBUG::2012-04-16
09:36:25,244::multipath::118::Storage.Misc.excCmd::(setupMultipath)
'/usr/bin/sudo -n /bin/cp /etc/multipath.conf /etc/multipath.conf.1' (cwd
None)
MainThread::DEBUG::2012-04-16
09:36:25,255::multipath::118::Storage.Misc.excCmd::(setupMultipath)
FAILED:err  = 'sudo: unable to mkdir /var/db/sudo/vdsm: Read-only file
system\nsudo: sorry, a password is required to run sudo\n';rc  = 1
MainThread::DEBUG::2012-04-16
09:36:25,256::multipath::118::Storage.Misc.excCmd::(setupMultipath)
'/usr/bin/sudo -n /usr/sbin/persist 

[Users] Can't create ISO domain

2012-04-17 Thread Li, David
My ovirt-node host was created and approved in ovirt-engine.



I couldn't activate a new ISO domain in the ovirt-engine; it could be
attached to the data center, though. On the other hand, I have no
problem adding a data domain.

How should I debug that?



- David

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Can't create ISO domain

2012-04-17 Thread Itamar Heim

On 04/17/2012 06:34 PM, Li, David wrote:

My ovirt-node host was created and approved in ovirt-engine.

I couldn't activate a new ISO domain in the ovirt-engine; it could be
attached to the data center, though. On the other hand, I have no problem
adding a data domain.

How should I debug that?


engine and vdsm logs from when you clicked activate
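
For reference, a rough way to grab both; the paths are assumptions and
can differ between versions:

  # on the engine machine
  tail -n 300 /var/log/ovirt-engine/engine.log
  # on the host, around the time of the activate click
  tail -n 300 /var/log/vdsm/vdsm.log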
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Use Filer for NFS storage

2012-04-17 Thread Christian Hernandez
Just out of curiosity...

Has anyone ACTUALLY successfully added NFS storage (using a filer) to
your Cluster/Datacenter?


I still cannot figure out how, and am still getting the "Error while
executing action RemoveStorageServerConnection: Unexpected exception" error.

--Christian

On Mon, Apr 16, 2012 at 8:55 AM, Christian Hernandez
christi...@4over.comwrote:

 Excuse my ignorance...

 But how do I apply the patch? I don't mind testing the patch on my systems
 (as I am only testing myself), but I:

 1) only have elementary git skills
 2) don't know how to apply the patch

 --Christian



 On Mon, Apr 16, 2012 at 6:50 AM, Adam Litke a...@us.ibm.com wrote:

 On Sun, Apr 15, 2012 at 04:57:15PM +0300, Dan Kenigsberg wrote:
  On Fri, Apr 13, 2012 at 12:26:39PM -0700, Christian Hernandez wrote:
   Here is the log from the Host
  
  
   *Thread-1821::DEBUG::2012-04-13
   12:18:52,200::BindingXMLRPC::167::vds::(wrapper) [192.168.11.236]
   Thread-1821::ERROR::2012-04-13
   12:18:52,200::BindingXMLRPC::171::vds::(wrapper) Unexpected exception
   Traceback (most recent call last):
 File /usr/share/vdsm/BindingXMLRPC.py, line 169, in wrapper
   return f(*args, **kwargs)
 File /usr/share/vdsm/BindingXMLRPC.py, line 571, in
   poolValidateStorageServerConnection
   return pool.validateStorageServerConnection(domType, conList)
 File /usr/share/vdsm/API.py, line 897, in
   validateStorageServerConnection
   return self._irs.validateStorageServerConnection(domainType,
   AttributeError: 'NoneType' object has no attribute
   'validateStorageServerConnection'
   Thread-1822::DEBUG::2012-04-13
   12:18:52,333::BindingXMLRPC::167::vds::(wrapper) [192.168.11.236]
   Thread-1822::ERROR::2012-04-13
   12:18:52,334::BindingXMLRPC::171::vds::(wrapper) Unexpected exception
   Traceback (most recent call last):
 File /usr/share/vdsm/BindingXMLRPC.py, line 169, in wrapper
   return f(*args, **kwargs)
 File /usr/share/vdsm/BindingXMLRPC.py, line 491, in
   poolDisconnectStorageServer
   return pool.disconnectStorageServer(domType, conList)
 File /usr/share/vdsm/API.py, line 823, in disconnectStorageServer
   return self._irs.disconnectStorageServer(domainType, self._UUID,
   AttributeError: 'NoneType' object has no attribute
 'disconnectStorageServer'
 
  It seems like the interesting traceback should be further up - I
  suppose self._irs failed initialization and kept its original None
  value. Please scroll up and try to find out why this failed on Vdsm
  startup.
 
  We have a FIXME in vdsm so that we report such failures better:
 
  vdsm/BindingXMLRPC.py: # XXX: Need another way to check if IRS init was
 okay
 
  Adam, could you take a further look into this?

 Have a look at http://gerrit.ovirt.org/3571 .  This should handle the
 problem
 better by reporting a better error when storage was not initialized
 properly.

 --
 Adam Litke a...@us.ibm.com
 IBM Linux Technology Center
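
 For anyone else wondering how to try a gerrit change locally, a rough
 sketch, assuming an existing vdsm git checkout; the anonymous remote URL
 and the patchset number (1) are guesses, so check the change page for the
 exact download ref:

   git fetch http://gerrit.ovirt.org/p/vdsm refs/changes/71/3571/1
   git cherry-pick FETCH_HEAD   # or: git checkout -b test-3571 FETCH_HEAD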



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Use Filer for NFS storage

2012-04-17 Thread Christian Hernandez
Actually yes I was referring to those technologies.

I currently have a NetApp that I would like to use...just can't seem to add
it to the cluster...


On Tue, Apr 17, 2012 at 11:08 AM, Dominic Kaiser domi...@bostonvineyard.org
 wrote:

 Yes I use three:

 Openfiler
 MediaVault
 QNAP NAS

 I take it that by filer you mean these, or am I getting it wrong?

 I use an NFS Datacenter entirely.

 Dominic


 On Tue, Apr 17, 2012 at 1:11 PM, Christian Hernandez christi...@4over.com
  wrote:

 [...]
 --
 Dominic Kaiser
 Greater Boston Vineyard
 Director of Operations

 cell: 617-230-1412
 fax: 617-252-0238
 email: domi...@bostonvineyard.org



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Use Filer for NFS storage

2012-04-17 Thread Christian Hernandez
Yes,

I've played with the permissions and made sure the ownerships are set:

[root@anteater ovirt]# ll -d .
drwxrws--- 3 vdsm kvm 4096 Apr 16 08:48 .
[root@anteater ovirt]# ll
total 4
drwxrwxrwx 2 vdsm kvm 4096 Apr 13 11:59 VMs

I can mount the share manually on BOTH oVirt Engine AND the Host with the
mount.nfs command
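
A root mount succeeding doesn't necessarily prove that the vdsm user can
write there, though. A rough check from the host (the export path is a
placeholder):

  mkdir -p /tmp/nfscheck
  mount -t nfs -o vers=3 FILER:/vol/ovirt /tmp/nfscheck
  sudo -u vdsm touch /tmp/nfscheck/write_test && echo writable
  umount /tmp/nfscheck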

On Tue, Apr 17, 2012 at 11:38 AM, Dominic Kaiser domi...@bostonvineyard.org
 wrote:

 First off, permissions need to be chown 36:36; that is the vdsm user and
 group, and they should be set that way throughout the file structure.
 Normally when you add the folder that is what should happen, but obviously
 it did not. I checked my ISO domains, which I can also mount manually, and
 the permissions are for the vdsm user and group (36:36). Start there. Also,
 you said you can mount the share manually on the ovirt engine server; is
 that what you meant? Just to be clear, so that I know the engine has access.
 I know this seems simple to ask, but I have made this mistake many times.

 Dominic
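
 A minimal sketch of that suggestion, run from any client that mounts the
 export with root access (the export path is a placeholder; 36:36 maps to
 vdsm:kvm on the hypervisors):

   mount -t nfs FILER:/vol/ovirt /mnt
   chown -R 36:36 /mnt
   umount /mnt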


 On Tue, Apr 17, 2012 at 2:25 PM, Christian Hernandez christi...@4over.com
  wrote:

 Using NFS version 3 (on both the Filer and the Host)

 NetApp I'm using is FAS3040

 I've already gone down the permissions avenue and tried the 777 approach.

 I can mount the share manually by going to the command line and using the
 mount.nfs command; just not through the oVirt interface


 On Tue, Apr 17, 2012 at 11:20 AM, Dominic Kaiser 
 domi...@bostonvineyard.org wrote:

 In that case, whenever I had problems it was always permissions. For
 example, my QNAP NAS was blocking my addition of an ISO and data domain I
 had created because it was not allowing the engine to see it. Also, what
 version of NFS is your NetApp using, v3 or v4? Which NetApp?

 Dominic


 On Tue, Apr 17, 2012 at 2:09 PM, Christian Hernandez 
 christi...@4over.com wrote:

 [...]

Re: [Users] Booting oVirt node image 2.3.0, no install option

2012-04-17 Thread Adam vonNieda

   Turns out that there might be an issue with my thumb drive. I tried
another, and it worked fine. Thanks very much for the responses folks!

   -Adam
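
   In case anyone hits the same mount_live() failure: rewriting the stick
is cheap. A rough sketch using livecd-tools (the ISO name and /dev/sdX are
placeholders, and flags can differ between livecd-tools versions):

   livecd-iso-to-disk --format --reset-mbr OVIRT-NODE.iso /dev/sdX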
   

On 4/17/12 10:11 AM, Joey Boggs jbo...@redhat.com wrote:

On 04/17/2012 10:51 AM, Adam vonNieda wrote:
 Thanks for the reply Joey. I saw that too, and thought maybe my USB
thumb drive was set to read only, but it's not. This box doesn't have a
DVD drive, I'll try a different USB drive, and if that doesn't work,
I'll dig up an external DVD drive.

 Thanks again,

-Adam

 Adam vonNieda
 a...@vonnieda.org

 On Apr 17, 2012, at 9:07, Joey Boggs jbo...@redhat.com wrote:

 On 04/17/2012 09:45 AM, Adam vonNieda wrote:
 [...]