[ovirt-users] Migrate to CentOS7 on new hardware

2015-08-07 Thread andreas.ewert
Hello,

I want to migrate my oVirt hypervisors to CentOS 7. My strategy is to install 
new boxes with CentOS 7 and add them to the oVirt cluster.
But I get this message in engine.log:

2015-08-07 13:01:35,388 WARN  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(DefaultQuartzScheduler_Worker-55) [5b9014cb] Correlation ID: 5b9014cb, Job ID: 
eddd1466-307f-400e-8d9b-b8cd67c2433c, Call Stack: null, Custom Event ID: -1, 
Message: Not possible to mix RHEL 6.x and 7.x hosts in one cluster. Tried 
adding RHEL - 7 - 1.1503.el7.centos.2.8 host to a cluster with RHEL - 6 - 
6.el6.centos.12.2 hosts.

I use oVirt Engine Version: 3.5.2.1-1.el6

Could you please show me a migration strategy?
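
For reference, the usual way around this restriction is to keep the EL6 and EL7 
hosts in separate clusters within the same data centre (so they share the storage 
domains) and move the VMs across once the new cluster has hosts. A minimal sketch 
of creating such a cluster through the 3.5 REST API with curl; the engine host 
name, password, CPU type and data-centre UUID below are placeholders, not values 
from this setup:

# Create a separate cluster for the CentOS 7 hosts in the existing data centre.
# All names, UUIDs and credentials are placeholders.
curl -k -u admin@internal:PASSWORD \
  -H "Content-Type: application/xml" \
  -d '<cluster>
        <name>CentOS7</name>
        <cpu id="Intel SandyBridge Family"/>
        <data_center id="DATACENTER-UUID"/>
      </cluster>' \
  https://engine.example.com/api/clusters

New EL7 hosts can then be added to that cluster as usual, and VMs are moved over 
by changing their cluster (typically while they are down) as the EL6 hosts are 
drained.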

With best regards
Andreas


Andreas Ewert
Server
_

Tel.: +49 221 456 44321
Fax: +49 221 456 95 44321

CBC Cologne Broadcasting Center GmbH
Picassoplatz 1 • 50679 Köln

Registered office: Köln • AG Köln • HRB 25232
Managing Director: Thomas Harscheidt • www.cbc.de

A company of Mediengruppe RTL Deutschland


[ovirt-users] Can't create disks on gluster storage

2014-07-17 Thread andreas.ewert
Hi,

While creating a new disk on gluster storage I got some errors, and in the end 
the disk is not created.
engine.log says that there is no such file or directory, but the gluster domain 
(124e6273-c6f5-471f-88e7-e5d9d37d7385) is active:

[root@ipet etc]# vdsClient -s 0 getStoragePoolInfo 
5849b030-626e-47cb-ad90-3ce782d831b3
name = Test
isoprefix = 
/rhev/data-center/mnt/mixcoatl:_srv_mirror_ISOs/c1582b82-9bfc-4e2b-ab2a-83551dcfba8f/images/----
pool_status = connected
lver = 0
spm_id = 1
master_uuid = 54f86ad7-2c12-4322-b2d1-f129f3d20e57
version = 3
domains = 
9bdf01bd-78d6-4408-b3a9-e05469004d78:Active,e4a3928d-0475-4b99-bfb8-86606931296a:Active,c1582b82-9bfc-4e2b-ab2a-83551dcfba8f:Active,124e6273-c6f5-471f-88e7-e5d9d37d7385:Active,54f86ad7-2c12-4322-b2d1-f129f3d20e57:Active
type = FCP
master_ver = 8
9bdf01bd-78d6-4408-b3a9-e05469004d78 = {'status': 'Active', 'diskfree': 
'1043408617472', 'isoprefix': '', 'alerts': [], 'disktotal': '1099108974592', 
'version': 3}
e4a3928d-0475-4b99-bfb8-86606931296a = {'status': 'Active', 'diskfree': 
'27244910346240', 'isoprefix': '', 'alerts': [], 'disktotal': '34573945667584', 
'version': 3}
124e6273-c6f5-471f-88e7-e5d9d37d7385 = {'status': 'Active', 'diskfree': 
'12224677937152', 'isoprefix': '', 'alerts': [], 'disktotal': '16709735940096', 
'version': 3}
c1582b82-9bfc-4e2b-ab2a-83551dcfba8f = {'status': 'Active', 'diskfree': 
'115282018304', 'isoprefix': 
'/rhev/data-center/mnt/mixcoatl:_srv_mirror_ISOs/c1582b82-9bfc-4e2b-ab2a-83551dcfba8f/images/----',
 'alerts': [], 'disktotal': '539016298496', 'version': 0}
54f86ad7-2c12-4322-b2d1-f129f3d20e57 = {'status': 'Active', 'diskfree': 
'148579024896', 'isoprefix': '', 'alerts': [], 'disktotal': '1197490569216', 
'version': 3}


[root@ipet etc]# ll /rhev/data-center/mnt/glusterSD/moly\:_repo1/
insgesamt 0
drwxr-xr-x 4 vdsm kvm 96 17. Apr 10:02 124e6273-c6f5-471f-88e7-e5d9d37d7385
-rwxr-xr-x 1 vdsm kvm  0 11. Feb 12:52 __DIRECT_IO_TEST__
[root@ipet etc]# ll /rhev/data-center/
insgesamt 12
drwxr-xr-x 2 vdsm kvm 4096 14. Jul 10:42 5849b030-626e-47cb-ad90-3ce782d831b3
drwxr-xr-x 2 vdsm kvm 4096 16. Dez 2013  hsm-tasks
drwxr-xr-x 7 vdsm kvm 4096 14. Jul 10:24 mnt
[root@ipet etc]# ll /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/
insgesamt 16
lrwxrwxrwx 1 vdsm kvm 66 14. Jul 10:42 54f86ad7-2c12-4322-b2d1-f129f3d20e57 -> /rhev/data-center/mnt/blockSD/54f86ad7-2c12-4322-b2d1-f129f3d20e57
lrwxrwxrwx 1 vdsm kvm 66 14. Jul 10:42 9bdf01bd-78d6-4408-b3a9-e05469004d78 -> /rhev/data-center/mnt/blockSD/9bdf01bd-78d6-4408-b3a9-e05469004d78
lrwxrwxrwx 1 vdsm kvm 84 14. Jul 10:42 c1582b82-9bfc-4e2b-ab2a-83551dcfba8f -> /rhev/data-center/mnt/mixcoatl:_srv_mirror_ISOs/c1582b82-9bfc-4e2b-ab2a-83551dcfba8f
lrwxrwxrwx 1 vdsm kvm 66 14. Jul 10:42 mastersd -> /rhev/data-center/mnt/blockSD/54f86ad7-2c12-4322-b2d1-f129f3d20e57


Here the directory of the gluster domain 124e6273-c6f5-471f-88e7-e5d9d37d7385 
and the symbolic link belonging to it are missing. The directory is missing on 
every host.
What can I do to fix this issue?
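
For context, the per-pool symlinks under /rhev/data-center/<pool-uuid>/ are 
created by vdsm when it connects the pool, so a few checks that are often 
suggested here (a sketch only; vdsClient and the gluster CLI are assumed to be 
available, and the volume name repo1 is read off the mount path above):

# Is the gluster volume still mounted and reachable on this host?
mount | grep glusterSD
# Ask vdsm what it knows about the domain:
vdsClient -s 0 getStorageDomainInfo 124e6273-c6f5-471f-88e7-e5d9d37d7385
vdsClient -s 0 getStorageDomainStats 124e6273-c6f5-471f-88e7-e5d9d37d7385
# On the gluster server (moly): is the volume started and are all bricks up?
gluster volume status repo1

If those all look healthy, putting the domain into maintenance and activating it 
again (or restarting vdsmd on one host) usually makes vdsm recreate the missing 
links; that is a guess based on how the links are produced, not a confirmed fix 
for this case.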

With best regards
Andreas



 

engine.log:
2014-07-17 05:30:33,347 INFO  [org.ovirt.engine.core.bll.AddDiskCommand] 
(ajp--127.0.0.1-8702-7) [19404214] Running command: AddDiskCommand internal: 
false. Entities affected :  ID: 124e6273-c6f5-471f-88e7-e5d9d37d7385 Type: 
Storage
2014-07-17 05:30:33,358 INFO  
[org.ovirt.engine.core.bll.AddImageFromScratchCommand] (ajp--127.0.0.1-8702-7) 
[3e7d9b07] Running command: AddImageFromScratchCommand internal: true. Entities 
affected :  ID: 124e6273-c6f5-471f-88e7-e5d9d37d7385 Type: Storage
2014-07-17 05:30:33,364 INFO  
[org.ovirt.engine.core.vdsbroker.irsbroker.CreateImageVDSCommand] 
(ajp--127.0.0.1-8702-7) [3e7d9b07] START, CreateImageVDSCommand( storagePoolId 
= 5849b030-626e-47cb-ad90-3ce782d831b3, ignoreFailoverLimit = false, 
storageDomainId = 124e6273-c6f5-471f-88e7-e5d9d37d7385, imageGroupId = 
7594000c-23a9-4941-8218-2c0654518a3d, imageSizeInBytes = 68719476736, 
volumeFormat = RAW, newImageId = c48e46cd-9dd5-4c52-94a4-db0378aecc3c, 
newImageDescription = ), log id: 642dadf4
2014-07-17 05:30:33,366 INFO  
[org.ovirt.engine.core.vdsbroker.irsbroker.CreateImageVDSCommand] 
(ajp--127.0.0.1-8702-7) [3e7d9b07] -- executeIrsBrokerCommand: calling 
'createVolume' with two new parameters: description and UUID
2014-07-17 05:30:33,392 INFO  
[org.ovirt.engine.core.vdsbroker.irsbroker.CreateImageVDSCommand] 
(ajp--127.0.0.1-8702-7) [3e7d9b07] FINISH, CreateImageVDSCommand, return: 
c48e46cd-9dd5-4c52-94a4-db0378aecc3c, log id: 642dadf4
2014-07-17 05:30:33,403 INFO  [org.ovirt.engine.core.bll.CommandAsyncTask] 
(ajp--127.0.0.1-8702-7) [3e7d9b07] CommandAsyncTask::Adding 
CommandMultiAsyncTasks object for command 7429f506-e57d-45e7-bc66-42bbdc90174c
2014-07-17 05:30:33,404 INFO  

[Users] Disk Images

2014-02-12 Thread andreas.ewert
Hi,

If I remove a disk image from a Virtual Machine, then I can’t edit the disk 
properties in the „Disks“ tab. (e.g. bootable flag)
It is only possible to change the flags on attached disks.
My test environments are engine versions 3.3.3 and 3.4 beta2

Best regards
Andreas




Re: [Users] Disk Images

2014-02-12 Thread andreas.ewert
Yes, I know! But why is there this restriction? I have the following scenario: I 
have a recovery VM with one bootable disk attached. Now I want to repair some 
broken configs on another VM's boot disk. First I detach the disk from the broken 
VM. If I forget to remove the bootable flag first, I have to attach the disk to 
the broken VM again, remove the bootable flag there, and only then attach the disk 
to my recovery VM. This is quite a long way round.
This could be a feature request, right?
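
For reference, a sketch of that round trip scripted against the 3.x REST API so 
it is at least less clicking; every UUID, the engine host name and the password 
below are placeholders:

# 1. Clear the bootable flag while the disk is still attached to the broken VM:
curl -k -u admin@internal:PASSWORD -X PUT \
  -H "Content-Type: application/xml" \
  -d '<disk><bootable>false</bootable></disk>' \
  https://engine.example.com/api/vms/BROKEN-VM-UUID/disks/DISK-UUID
# 2. Detach the disk without removing the image:
curl -k -u admin@internal:PASSWORD -X DELETE \
  -H "Content-Type: application/xml" \
  -d '<action><detach>true</detach></action>' \
  https://engine.example.com/api/vms/BROKEN-VM-UUID/disks/DISK-UUID
# 3. Attach it to the recovery VM:
curl -k -u admin@internal:PASSWORD -X POST \
  -H "Content-Type: application/xml" \
  -d '<disk id="DISK-UUID"/>' \
  https://engine.example.com/api/vms/RECOVERY-VM-UUID/disks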

best regards
Andreas

On 12.02.2014 at 16:37, Dafna Ron d...@redhat.com wrote:

 disks can only be edited when attached to a vm
 
 On 02/12/2014 03:04 PM, andreas.ew...@cbc.de wrote:
 Hi,
 
 If I remove a disk image from a Virtual Machine, then I can’t edit the disk 
 properties in the „Disks“ tab. (e.g. bootable flag)
 It is only possible to change the flags on attached disks.
 My test environments are engine versions 3.3.3 and 3.4 beta2
 
 Best regards
 Andreas
 
 
 
 
 -- 
 Dafna Ron



[Users] issues with live snapshot

2014-02-12 Thread andreas.ewert
Hi,

I want to create a live snapshot, but it fails at the finalizing task. There 
are 3 events: 

- Snapshot 'test' creation for VM 'snaptest' was initiated by EwertA
- Failed to create live snapshot 'test' for VM 'snaptest'. VM restart is 
recommended.
- Failed to complete snapshot 'test' creation for VM 'snaptest'.
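
A check that is often suggested for this symptom on EL6 hosts is whether the 
installed qemu actually supports live snapshots, since vdsm advertises that in 
its capabilities; a quick sketch, run on the host where snaptest is running:

# Does vdsm report live snapshot support on this host?
vdsClient -s 0 getVdsCaps | grep -i livesnapshot
# Which qemu build is installed? The stock EL6 qemu-kvm generally lacks
# live snapshot support; the qemu-kvm-rhev builds have it.
rpm -qa | grep qemu-kvm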

Thread-338209::DEBUG::2014-02-13 
08:40:19,672::BindingXMLRPC::965::vds::(wrapper) client [10.98.229.5]::call 
vmSnapshot with ('31c185ce-cc2e-4246-bf46-fcd96cd30050', [{'baseVolumeID': 
'b9448428-b787-4286-b54e-aa54a8f8bb17', 'domainID': 
'54f86ad7-2c12-4322-b2d1-f129f3d20e57', 'volumeID': 
'c677d01e-dc50-486b-a532-f88a71666d2c', 'imageID': 
'db6faf9e-2cc8-4106-954b-fef7e4b1bd1b'}], '') {}
Thread-338209::DEBUG::2014-02-13 
08:40:19,672::task::579::TaskManager.Task::(_updateState) 
Task=`8675b6b0-3216-46a8-8d9a-d0feb02d5b49`::moving from state init -> state 
preparing
Thread-338209::INFO::2014-02-13 
08:40:19,672::logUtils::44::dispatcher::(wrapper) Run and protect: 
prepareImage(sdUUID='54f86ad7-2c12-4322-b2d1-f129f3d20e57', 
spUUID='5849b030-626e-47cb-ad90-3ce782d831b3', 
imgUUID='db6faf9e-2cc8-4106-954b-fef7e4b1bd1b', 
leafUUID='c677d01e-dc50-486b-a532-f88a71666d2c')
Thread-338209::DEBUG::2014-02-13 
08:40:19,673::resourceManager::197::ResourceManager.Request::(__init__) 
ResName=`Storage.54f86ad7-2c12-4322-b2d1-f129f3d20e57`ReqID=`630a701e-bd44-49ef-8a14-f657b8653a33`::Request
 was made in '/usr/share/vdsm/storage/hsm.py' line '3236' at 'prepareImage'
Thread-338209::DEBUG::2014-02-13 
08:40:19,673::resourceManager::541::ResourceManager::(registerResource) Trying 
to register resource 'Storage.54f86ad7-2c12-4322-b2d1-f129f3d20e57' for lock 
type 'shared'
Thread-338209::DEBUG::2014-02-13 
08:40:19,673::resourceManager::600::ResourceManager::(registerResource) 
Resource 'Storage.54f86ad7-2c12-4322-b2d1-f129f3d20e57' is free. Now locking as 
'shared' (1 active user)
Thread-338209::DEBUG::2014-02-13 
08:40:19,673::resourceManager::237::ResourceManager.Request::(grant) 
ResName=`Storage.54f86ad7-2c12-4322-b2d1-f129f3d20e57`ReqID=`630a701e-bd44-49ef-8a14-f657b8653a33`::Granted
 request
Thread-338209::DEBUG::2014-02-13 
08:40:19,674::task::811::TaskManager.Task::(resourceAcquired) 
Task=`8675b6b0-3216-46a8-8d9a-d0feb02d5b49`::_resourcesAcquired: 
Storage.54f86ad7-2c12-4322-b2d1-f129f3d20e57 (shared)
Thread-338209::DEBUG::2014-02-13 
08:40:19,675::task::974::TaskManager.Task::(_decref) 
Task=`8675b6b0-3216-46a8-8d9a-d0feb02d5b49`::ref 1 aborting False
Thread-338209::DEBUG::2014-02-13 
08:40:19,675::lvm::440::OperationMutex::(_reloadlvs) Operation 'lvm reload 
operation' got the operation mutex
Thread-338209::DEBUG::2014-02-13 
08:40:19,675::lvm::309::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm 
lvs --config  devices { preferred_names = [\\^/dev/mapper/\\] 
ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 
obtain_device_list_from_udev=0 filter = [ 
\'a|/dev/mapper/36000d771ec7c7d5beda78691839c|\', \'r|.*|\' ] }  global {  
locking_type=1  prioritise_write_locks=1  wait_for_locks=1 }  backup {  
retain_min = 50  retain_days = 0 }  --noheadings --units b --nosuffix 
--separator | -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags 
54f86ad7-2c12-4322-b2d1-f129f3d20e57' (cwd None)
Thread-338209::DEBUG::2014-02-13 
08:40:19,715::lvm::309::Storage.Misc.excCmd::(cmd) SUCCESS: err = ''; rc = 0
Thread-338209::DEBUG::2014-02-13 
08:40:19,739::lvm::475::Storage.LVM::(_reloadlvs) lvs reloaded
Thread-338209::DEBUG::2014-02-13 
08:40:19,740::lvm::475::OperationMutex::(_reloadlvs) Operation 'lvm reload 
operation' released the operation mutex
Thread-338209::DEBUG::2014-02-13 
08:40:19,741::lvm::309::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm 
lvchange --config  devices { preferred_names = [\\^/dev/mapper/\\] 
ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 
obtain_device_list_from_udev=0 filter = [ 
\'a|/dev/mapper/36000d771ec7c7d5beda78691839c|\', \'r|.*|\' ] }  global {  
locking_type=1  prioritise_write_locks=1  wait_for_locks=1 }  backup {  
retain_min = 50  retain_days = 0 }  --autobackup n --available y 
54f86ad7-2c12-4322-b2d1-f129f3d20e57/c677d01e-dc50-486b-a532-f88a71666d2c' (cwd 
None)
Thread-338209::DEBUG::2014-02-13 
08:40:19,800::lvm::309::Storage.Misc.excCmd::(cmd) SUCCESS: err = ''; rc = 0
Thread-338209::DEBUG::2014-02-13 
08:40:19,801::lvm::526::OperationMutex::(_invalidatelvs) Operation 'lvm 
invalidate operation' got the operation mutex
Thread-338209::DEBUG::2014-02-13 
08:40:19,801::lvm::538::OperationMutex::(_invalidatelvs) Operation 'lvm 
invalidate operation' released the operation mutex
Thread-338209::WARNING::2014-02-13 
08:40:19,801::fileUtils::167::Storage.fileUtils::(createdir) Dir 
/var/run/vdsm/storage/54f86ad7-2c12-4322-b2d1-f129f3d20e57/db6faf9e-2cc8-4106-954b-fef7e4b1bd1b
 already exists
Thread-338209::DEBUG::2014-02-13 
08:40:19,801::blockSD::1068::Storage.StorageDomain::(createImageLinks)