Re: [ovirt-users] replace ovirt engine host

2014-11-12 Thread Ml Ml
Anyone? :-(

On Tue, Nov 11, 2014 at 6:39 PM, Ml Ml mliebher...@googlemail.com wrote:
 I dunno why this is all so simple for you.

 I just replaced the ovirt-engine as described in the docs.

 I ejected the CD ISOs on every VM so I was able to delete the ISO_DOMAIN.

 But I still have problems with my storage. It's a replicated GlusterFS.
 It looks healthy on the nodes themselves. But somehow my ovirt-engine gets
 confused. Can someone explain to me what the actual error is?
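
 The usual health checks on the nodes look something like this (volume
 name guessed from the vdsm log below; adjust to yours):

 gluster peer status
 gluster volume status RaidVolBGluster
 gluster volume heal RaidVolBGluster info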

 Note: I only replaced the ovirt-engine host and deleted the ISO_DOMAIN:

 2014-11-11 18:32:37,832 ERROR
 [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetTaskStatusVDSCommand]
 (org.ovirt.thread.pool-6-thread-39) [71891fe3] Failed in
 HSMGetTaskStatusVDS method
 2014-11-11 18:32:37,833 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand]
 (org.ovirt.thread.pool-6-thread-39) [71891fe3] spmStart polling ended:
 taskId = 8c5fae2c-0ddb-41cd-ac54-c404c943e00f task status = finished
 2014-11-11 18:32:37,834 ERROR
 [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand]
 (org.ovirt.thread.pool-6-thread-39) [71891fe3] Start SPM Task failed -
 result: cleanSuccess, message: VDSGenericException: VDSErrorException:
 Failed to HSMGetTaskStatusVDS, error = Storage domain does not exist,
 code = 358
 2014-11-11 18:32:37,888 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand]
 (org.ovirt.thread.pool-6-thread-39) [71891fe3] spmStart polling ended,
 spm status: Free
 2014-11-11 18:32:37,889 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
 (org.ovirt.thread.pool-6-thread-39) [71891fe3] START,
 HSMClearTaskVDSCommand(HostName = ovirt-node01.foobar.net, HostId =
 2e8cec66-23d7-4a5c-b6f3-9758d1d87f5c,
 taskId=8c5fae2c-0ddb-41cd-ac54-c404c943e00f), log id: 547e26fd
 2014-11-11 18:32:37,937 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
 (org.ovirt.thread.pool-6-thread-39) [71891fe3] FINISH,
 HSMClearTaskVDSCommand, log id: 547e26fd
 2014-11-11 18:32:37,938 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand]
 (org.ovirt.thread.pool-6-thread-39) [71891fe3] FINISH,
 SpmStartVDSCommand, return:
 org.ovirt.engine.core.common.businessentities.SpmStatusResult@5027ed97,
 log id: 461eb5b5
 2014-11-11 18:32:37,941 INFO
 [org.ovirt.engine.core.bll.storage.SetStoragePoolStatusCommand]
 (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] Running command:
 SetStoragePoolStatusCommand internal: true. Entities affected :  ID:
 b384b3da-02a6-44f3-a3f6-56751ce8c26d Type: StoragePool
 2014-11-11 18:32:37,948 ERROR
 [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
 (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d]
 IrsBroker::Failed::ActivateStorageDomainVDS due to:
 IrsSpmStartFailedException: IRSGenericException: IRSErrorException:
 SpmStart failed
 2014-11-11 18:32:38,006 INFO
 [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
 (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] Irs placed on server
 2e8cec66-23d7-4a5c-b6f3-9758d1d87f5c failed. Proceed Failover
 2014-11-11 18:32:38,044 INFO
 [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
 (DefaultQuartzScheduler_Worker-29) START,
 GlusterVolumesListVDSCommand(HostName = ovirt-node01.foobar.net,
 HostId = 2e8cec66-23d7-4a5c-b6f3-9758d1d87f5c), log id: 7a110756
 2014-11-11 18:32:38,045 INFO
 [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
 (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d]
 hostFromVds::selectedVds - ovirt-node02.foobar.net, spmStatus Free,
 storage pool HP_Proliant_DL180G6
 2014-11-11 18:32:38,048 INFO
 [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
 (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] starting spm on vds
 ovirt-node02.foobar.net, storage pool HP_Proliant_DL180G6, prevId -1,
 LVER -1
 2014-11-11 18:32:38,050 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand]
 (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] START,
 SpmStartVDSCommand(HostName = ovirt-node02.foobar.net, HostId =
 6948da12-0b8a-4b6d-a9af-162e6c25dad3, storagePoolId =
 b384b3da-02a6-44f3-a3f6-56751ce8c26d, prevId=-1, prevLVER=-1,
 storagePoolFormatType=V3, recoveryMode=Manual, SCSIFencing=false), log
 id: 1a6ccb9c
 2014-11-11 18:32:38,108 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand]
 (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] spmStart polling
 started: taskId = 78d31638-70a5-46aa-89e7-1d1e8126bdba
 2014-11-11 18:32:38,193 INFO
 [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
 (DefaultQuartzScheduler_Worker-29) FINISH,
 GlusterVolumesListVDSCommand, return:
 {d46619e9-9368-4e82-bf3a-a2377b6e85e4=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@9746ef53},
 log id: 7a110756
 2014-11-11 18:32:38,352 INFO
 [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
 (DefaultQuartzScheduler_Worker-29) START,
 GlusterVolumesListVDSCommand(HostName = ovirt-node04.foobar.net,
 HostId 

Re: [ovirt-users] replace ovirt engine host

2014-11-12 Thread Yedidyah Bar David
Sorry, no idea.

Does not seem very related to hosted-engine.

Perhaps better to change the subject (add 'gluster'?) to attract other people.
Also please post all relevant logs - hosted-engine, vdsm, all engine logs.
-- 
Didi

- Original Message -
 From: Ml Ml mliebher...@googlemail.com
 To: Matt . yamakasi@gmail.com
 Cc: users@ovirt.org
 Sent: Wednesday, November 12, 2014 3:06:04 PM
 Subject: Re: [ovirt-users] replace ovirt engine host
 
 [quoted message and log trimmed; identical to the post above]

Re: [ovirt-users] replace ovirt engine host

2014-11-12 Thread Ml Ml
Here is the vdsm log of my ovirt-node01:


fda6e0ee-33e9-4eb2-b724-34f7a5492e83::ERROR::2014-11-12
16:13:20,071::sp::330::Storage.StoragePool::(startSpm) failed: Storage
domain does not exist: ('6d882c77-cdbc-48ef-ae21-1a6d45e7f8a1',)
fda6e0ee-33e9-4eb2-b724-34f7a5492e83::DEBUG::2014-11-12
16:13:20,071::sp::336::Storage.StoragePool::(_shutDownUpgrade)
Shutting down upgrade process
fda6e0ee-33e9-4eb2-b724-34f7a5492e83::DEBUG::2014-11-12
16:13:20,071::resourceManager::198::ResourceManager.Request::(__init__)
ResName=`Storage.upgrade_b384b3da-02a6-44f3-a3f6-56751ce8c26d`ReqID=`7ec0dd55-0b56-4d8a-bc21-5aa6fe2ec373`::Request
was made in '/usr/share/vdsm/storage/sp.py' line '338' at
'_shutDownUpgrade'
fda6e0ee-33e9-4eb2-b724-34f7a5492e83::DEBUG::2014-11-12
16:13:20,071::resourceManager::542::ResourceManager::(registerResource)
Trying to register resource
'Storage.upgrade_b384b3da-02a6-44f3-a3f6-56751ce8c26d' for lock type
'exclusive'
fda6e0ee-33e9-4eb2-b724-34f7a5492e83::DEBUG::2014-11-12
16:13:20,072::resourceManager::601::ResourceManager::(registerResource)
Resource 'Storage.upgrade_b384b3da-02a6-44f3-a3f6-56751ce8c26d' is
free. Now locking as 'exclusive' (1 active user)
fda6e0ee-33e9-4eb2-b724-34f7a5492e83::DEBUG::2014-11-12
16:13:20,072::resourceManager::238::ResourceManager.Request::(grant)
ResName=`Storage.upgrade_b384b3da-02a6-44f3-a3f6-56751ce8c26d`ReqID=`7ec0dd55-0b56-4d8a-bc21-5aa6fe2ec373`::Granted
request
fda6e0ee-33e9-4eb2-b724-34f7a5492e83::DEBUG::2014-11-12
16:13:20,072::resourceManager::198::ResourceManager.Request::(__init__)
ResName=`Storage.upgrade_6d882c77-cdbc-48ef-ae21-1a6d45e7f8a1`ReqID=`a6bd57b0-5ac0-459a-a4c2-2a5a58c4b1ea`::Request
was made in '/usr/share/vdsm/storage/sp.py' line '358' at
'_shutDownUpgrade'
fda6e0ee-33e9-4eb2-b724-34f7a5492e83::DEBUG::2014-11-12
16:13:20,073::resourceManager::542::ResourceManager::(registerResource)
Trying to register resource
'Storage.upgrade_6d882c77-cdbc-48ef-ae21-1a6d45e7f8a1' for lock type
'exclusive'
fda6e0ee-33e9-4eb2-b724-34f7a5492e83::DEBUG::2014-11-12
16:13:20,073::resourceManager::601::ResourceManager::(registerResource)
Resource 'Storage.upgrade_6d882c77-cdbc-48ef-ae21-1a6d45e7f8a1' is
free. Now locking as 'exclusive' (1 active user)
fda6e0ee-33e9-4eb2-b724-34f7a5492e83::DEBUG::2014-11-12
16:13:20,073::resourceManager::238::ResourceManager.Request::(grant)
ResName=`Storage.upgrade_6d882c77-cdbc-48ef-ae21-1a6d45e7f8a1`ReqID=`a6bd57b0-5ac0-459a-a4c2-2a5a58c4b1ea`::Granted
request
fda6e0ee-33e9-4eb2-b724-34f7a5492e83::DEBUG::2014-11-12
16:13:20,073::resourceManager::616::ResourceManager::(releaseResource)
Trying to release resource
'Storage.upgrade_6d882c77-cdbc-48ef-ae21-1a6d45e7f8a1'
fda6e0ee-33e9-4eb2-b724-34f7a5492e83::DEBUG::2014-11-12
16:13:20,073::resourceManager::635::ResourceManager::(releaseResource)
Released resource
'Storage.upgrade_6d882c77-cdbc-48ef-ae21-1a6d45e7f8a1' (0 active
users)
fda6e0ee-33e9-4eb2-b724-34f7a5492e83::DEBUG::2014-11-12
16:13:20,074::resourceManager::641::ResourceManager::(releaseResource)
Resource 'Storage.upgrade_6d882c77-cdbc-48ef-ae21-1a6d45e7f8a1' is
free, finding out if anyone is waiting for it.
fda6e0ee-33e9-4eb2-b724-34f7a5492e83::DEBUG::2014-11-12
16:13:20,074::resourceManager::649::ResourceManager::(releaseResource)
No one is waiting for resource
'Storage.upgrade_6d882c77-cdbc-48ef-ae21-1a6d45e7f8a1', Clearing
records.
fda6e0ee-33e9-4eb2-b724-34f7a5492e83::DEBUG::2014-11-12
16:13:20,074::resourceManager::616::ResourceManager::(releaseResource)
Trying to release resource
'Storage.upgrade_b384b3da-02a6-44f3-a3f6-56751ce8c26d'
fda6e0ee-33e9-4eb2-b724-34f7a5492e83::DEBUG::2014-11-12
16:13:20,074::resourceManager::635::ResourceManager::(releaseResource)
Released resource
'Storage.upgrade_b384b3da-02a6-44f3-a3f6-56751ce8c26d' (0 active
users)
fda6e0ee-33e9-4eb2-b724-34f7a5492e83::DEBUG::2014-11-12
16:13:20,075::resourceManager::641::ResourceManager::(releaseResource)
Resource 'Storage.upgrade_b384b3da-02a6-44f3-a3f6-56751ce8c26d' is
free, finding out if anyone is waiting for it.
fda6e0ee-33e9-4eb2-b724-34f7a5492e83::DEBUG::2014-11-12
16:13:20,075::resourceManager::649::ResourceManager::(releaseResource)
No one is waiting for resource
'Storage.upgrade_b384b3da-02a6-44f3-a3f6-56751ce8c26d', Clearing
records.
fda6e0ee-33e9-4eb2-b724-34f7a5492e83::DEBUG::2014-11-12
16:13:20,075::persistentDict::167::Storage.PersistentDict::(transaction)
Starting transaction
fda6e0ee-33e9-4eb2-b724-34f7a5492e83::DEBUG::2014-11-12
16:13:20,075::persistentDict::173::Storage.PersistentDict::(transaction)
Flushing changes
fda6e0ee-33e9-4eb2-b724-34f7a5492e83::DEBUG::2014-11-12
16:13:20,076::persistentDict::299::Storage.PersistentDict::(flush)
about to write lines (FileMetadataRW)=['CLASS=Data',
'DESCRIPTION=RaidVolBGluster', 'IOOPTIMEOUTSEC=10', 'LEASERETRIES=3',
'LEASETIMESEC=60', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5',
'MASTER_VERSION=1', 'POOL_DESCRIPTION=HP_Proliant_DL180G6', 
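
The domain UUID it complains about can also be checked directly on a
node, e.g. (vdsClient options may differ between vdsm versions):

vdsClient -s 0 getStorageDomainsList
vdsClient -s 0 getStorageDomainInfo 6d882c77-cdbc-48ef-ae21-1a6d45e7f8a1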

Re: [ovirt-users] replace ovirt engine host

2014-11-12 Thread Nir Soffer
Hi Mario,

Please open a bug for this.

Include these logs in the bug for the ovirt engine host, one hypervisor
node that had no trouble, and one hypervisor node that had trouble
(ovirt-node01?).

/var/log/messages
/var/log/sanlock.log
/var/log/vdsm/vdsm.log

And of course engine.log for the engine node.

Thanks,
Nir
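
For example, something like this should bundle them per node (paths as
above; the glob also picks up rotated vdsm logs):

tar czf $(hostname -s)-logs.tar.gz \
    /var/log/messages /var/log/sanlock.log /var/log/vdsm/vdsm.log*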

- Original Message -
 From: Ml Ml mliebher...@googlemail.com
 To: Sandro Bonazzola sbona...@redhat.com
 Cc: Matt . yamakasi@gmail.com, users@ovirt.org, Dan Kenigsberg 
 dan...@redhat.com, Nir Soffer
 nsof...@redhat.com
 Sent: Wednesday, November 12, 2014 5:18:56 PM
 Subject: Re: [ovirt-users] replace ovirt engine host
 
 [quoted vdsm log trimmed; identical to the post above]

Re: [ovirt-users] replace ovirt engine host

2014-11-12 Thread Vijay Bellur

On 11/12/2014 09:16 PM, Nir Soffer wrote:

[quoted message trimmed]



Please also include glusterfs logs from:

/var/log/glusterfs

Thanks,
Vijay
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] replace ovirt engine host

2014-11-11 Thread Ml Ml
[message body and engine.log excerpt trimmed; identical to the copy quoted at the top of this thread]

Re: [ovirt-users] replace ovirt engine host

2014-11-07 Thread Ml Ml
anyone? :)

Or are you only doing backups, no restore? :-P

On Thu, Nov 6, 2014 at 10:08 AM, ml ml mliebher...@googlemail.com wrote:
 Hello List,

 Currently I am testing backup and restore of my ovirt engine host.

 So far the backup and recovery work.

 Since I have an NFS ISO_DOMAIN, I need to remount the NFS mounts on my
 vdsm hosts/nodes. This is kinda ugly but okay :)

 However, my storage information seems to go weird. It looks like the
 ovirt engine gets confused about the current status.

 Do I have to restart vdsm after replacing the ovirt engine?

 When I connect my original ovirt engine again, the cluster status is
 okay...

 I test this by unplugging the original ovirt engine host's network cable
 and doing a restore on a connected 2nd machine.


 So I guess my question is: how do I replace my ovirt engine smoothly
 in a production environment?

 Thanks,
 Mario


Re: [ovirt-users] replace ovirt engine host

2014-11-07 Thread Sven Kieske


On 07/11/14 10:10, Ml Ml wrote:
 anyone? :)
 
 Or are you only doing backups, no restore? :-P

Luckily, I just had to test disaster recovery and not actually
perform it (yet) :D

To be honest: I have never restored ovirt-engine with running vdsm
hosts connected to it. Sounds like a lot of fun; I'll see if I can
grab some time and try this out myself :)

From your description I guess you have the NFS ISO domain on your engine
host? Why not just separate it, so there is no need for remounts
if your engine is destroyed.
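
One common way is a standalone NFS export on a separate box (paths are
examples; the 36:36 ownership is what vdsm expects for NFS domains):

mkdir -p /srv/ovirt-iso
chown 36:36 /srv/ovirt-iso
echo '/srv/ovirt-iso *(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)' >> /etc/exports
exportfs -ra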

HTH

-- 
Mit freundlichen Grüßen / Regards

Sven Kieske

Systemadministrator
Mittwald CM Service GmbH & Co. KG
Königsberger Straße 6
32339 Espelkamp
T: +49-5772-293-100
F: +49-5772-293-333
https://www.mittwald.de
Geschäftsführer: Robert Meyer
St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen


Re: [ovirt-users] replace ovirt engine host

2014-11-07 Thread Koen Vanoppen
Hi,

We had a consulting partner who did the same for our company. This is his
procedure, and it worked great:

How to migrate ovirt management engine
Packages
Ensure you have the same packages & versions installed on the destination
host as on the source, using 'rpm -qa | grep ovirt'. Make sure versions are
100% identical.
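
For example (the destination hostname is a placeholder):

rpm -qa 'ovirt*' | sort > /tmp/pkgs-src.txt
ssh destination-host "rpm -qa 'ovirt*' | sort" > /tmp/pkgs-dst.txt
diff /tmp/pkgs-src.txt /tmp/pkgs-dst.txt   # no output means identical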
Default setup

Run 'engine-setup' on the destination host after installing the packages.
Use the following configuration:
1. Backup existing configuration
2. On the source host, do:
a. service ovirt-engine stop
b. service ovirt-engine-dwhd stop
c. mkdir ~/backup
d. tar -C /etc/pki/ovirt-engine -czpf ~/backup/ovirt-engine-pki.tar.gz .
e. tar -C /etc/ovirt-engine -czpf ~/backup/ovirt-engine-conf.tar.gz .
f. cd /usr/share/ovirt-engine/dbscripts
g. ./backup.sh
h. mv engine_*.sql ~/backup/engine.sql
3. You may also want to back up dwh & reports:
a. cd /usr/share/ovirt-engine/bin/
b. ./engine-backup.sh --mode=backup --scope=db --db-user=engine
--db-password=XXX --file=/usr/tmp/rhevm-backups/engine-backup
--log=/tmp/engine-backup.log
c. ./engine-backup.sh --mode=backup --scope=dwhdb --db-user=engine
--db-password=XXX --file=/usr/tmp/rhevm-backups/dwh-backup
--log=/tmp/engine-backup.log
d. ./engine-backup.sh --mode=backup --scope=reportsdb --db-user=engine
--db-password=XXX --file=/usr/tmp/rhevm-backups/reports-backup
--log=/tmp/engine-backup.log
4. Download these backup files and copy them to the destination host.
Restore configuration
1. On the destination host, do:
a. service ovirt-engine stop
b. service ovirt-engine-dwhd stop
c. cd backup
d. tar -C /etc/pki/ovirt-engine -xzpf ovirt-engine-pki.tar.gz
e. tar -C /etc/ovirt-engine -xzpf ovirt-engine-conf.tar.gz
f. tar -xvjf engine-backup
g. tar -xvjf dwh-backup
h. tar -xvjf reports-backup

Restore Database
1. On the destination host do:
a. su - postgres -c "psql -d template1 -c 'drop database engine;'"
b. su - postgres -c "psql -d template1 -c 'create database engine owner
engine;'"
c. su - postgres
d. psql
e. \c engine
f. \i /path/to/backup/engine.sql
NOTE: in case you have issues logging in to the database, add the following
line to the pg_hba.conf file:

host    all    engine    127.0.0.1/32    trust
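
PostgreSQL only picks that up after a reload, e.g.:

service postgresql reload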

2. Fix engine password:
a. su - postgres
b. psql
c. alter user engine with password 'XXX';
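
The same can be done non-interactively, e.g.:

su - postgres -c "psql -d engine -c \"alter user engine with password 'XXX';\""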
Change ovirt hostname
On the destination host, run:

 /usr/share/ovirt-engine/setup/bin/ovirt-engine-rename








NB:
Restoring the dwh/reports database is similar to steps 5-7, but omitted from
this document due to problems starting the reporting service.


2014-11-07 10:28 GMT+01:00 Sven Kieske s.kie...@mittwald.de:



 [quoted message trimmed]


Re: [ovirt-users] replace ovirt engine host

2014-11-07 Thread Daniel Helgenberger

Daniel Helgenberger
m box bewegtbild GmbH

ACKERSTR. 19 P:  +49/30/2408781-22
D-10115 BERLIN F:  +49/30/2408781-10

www.m-box.de
www.monkeymen.tv

Geschäftsführer: Martin Retschitzegger / Michaela Göllner
Handelsregister: Amtsgericht Charlottenburg / HRB 112767
On 07.11.2014, at 15:24, Koen Vanoppen vanoppen.k...@gmail.com wrote:

[quoted procedure trimmed down to the steps commented on below]
1. Backup existing configuration
2. On the source host, do:
You might want your consultant to take a look at [1]...
Steps a-3d:
engine-backup --mode=backup --file=~/ovirt-engine-source --log=backup.log

[quoted steps trimmed]
Restore configuration
1. On the destination host, do:
Again, steps a-h are basically:
engine-setup
engine-cleanup
engine-backup --mode=restore --file=~/ovirt-engine-source --log=backup.log

Also, I would run a second
engine-setup
After that, you should be good to go.

Of course, depending on your previous engine setup this could be a little
more complicated. Still, quite straightforward.
[1] http://www.ovirt.org/Ovirt-engine-backup
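
A minimal sketch of that flow end to end (engine-backup flags as in the
3.x series; host and file names are placeholders):

# on the source engine host:
engine-backup --mode=backup --file=engine-backup.tar.bz2 --log=backup.log
scp engine-backup.tar.bz2 new-engine-host:

# on the destination engine host:
engine-setup
engine-cleanup
engine-backup --mode=restore --file=engine-backup.tar.bz2 --log=restore.log
engine-setup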
[rest of quoted procedure trimmed]


2014-11-07 10:28 GMT+01:00 Sven Kieske s.kie...@mittwald.de:

[quoted message trimmed]

Re: [ovirt-users] replace ovirt engine host

2014-11-07 Thread Matt .
Hi,

Actually it's very simple as described in the docs.

Just stop the engine, make a backup, copy it over, place it back and
start it. You can do this in several ways.

The ISO domain is one I would remove and recreate. ISO domains
are actually dumb domains, so nothing can go wrong.

Did it some time ago because I needed more performance.

VDSM can run without the engine; it doesn't need it, as the engine just
monitors and issues the commands, so when it's not there... VMs just
run (until you make them die yourself :))

I would give it 15-30 min.

Cheers,

Matt


2014-11-07 18:36 GMT+01:00 Daniel Helgenberger daniel.helgenber...@m-box.de:

 [quoted message trimmed]