[ovirt-users] Problem with backing-file, how to fix the backing-chain ?

2016-10-17 Thread Dael Maselli

Hi all,

We previously ran our oVirt environment with engine v3.6.5 (if I
remember correctly) and now run v4.0.4 (we upgraded because we read
that the backing-file bug was resolved in v4).

We also upgraded some of the host machines (though not all of them yet)
to v4.0.4 to see if this would fix the problem, but it didn't.

The problem is that we have several VMs with snapshots. We take daily,
weekly and monthly snapshots, keep some of them (usually the most
recent) and remove the old ones (which, in the case of the weekly
snapshots, sit in the middle of a chain of snapshots). Over time this
has produced the famous

"Backing file too long" bug.

So we upgraded the engine from 3.6.5 to 4.0.4 (latest available).

We discovered this bug when we tried to upgrade a host to v4.0.4: a VM
on that host failed to migrate, so we shut it down and tried to run it
on another host, but that never succeeded because of the bug.

We don't know whether more VMs are in this situation, because we have
upgraded only 2 hosts out of 10.

Investigating the problem, we discovered that the backing file recorded
in each of the LVM snapshot volumes is a very long path of the form
/dev/storage-domain-id/../image-group-id/ with ../image-group-id/
repeated many times and /parentid at the end.

To understand what the correct path should look like, we cloned a VM on
v4.0.4 and then took 4 snapshots; now the backing file path is

/dev/storage-domain-id/parentid

Is there a way to modify the path in the backing file, or some other
way to recover the VM from this state?

Where does the information about the backing-file path reside?
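
(For reference, a minimal sketch of what a manual fix might look like,
assuming the volume's LV can be activated and nothing is using the
disk; all device paths below are placeholders, not values from our
setup, and we have not validated this on a production volume:)

  # activate the broken volume's LV so its qcow2 header is readable
  lvchange -ay storage-domain-id/volume-id

  # inspect the backing file currently recorded in the qcow2 header
  qemu-img info /dev/storage-domain-id/volume-id

  # an "unsafe" rebase (-u) rewrites only the backing-file pointer and
  # copies no data; point it at the short, correct parent path
  qemu-img rebase -u -b /dev/storage-domain-id/parentid /dev/storage-domain-id/volume-id

  # deactivate the LV again when done
  lvchange -an storage-domain-id/volume-id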

I attach below all the commands we ran.

On the oVirt manager (the host running the engine only) we ran:

ovirt-shell

[oVirt shell (connected)]# list disks --parent-vm-name vm1

id : 2df25a13-6958-40a8-832f-9a26ce65de0f
name   : vm1_Disk2

id : 8cda0aa6-9e25-4b50-ba00-b877232a1983
name   : vm1_Disk1

[oVirt shell (connected)]# show disk 8cda0aa6-9e25-4b50-ba00-b877232a1983

id   : 8cda0aa6-9e25-4b50-ba00-b877232a1983
name : vm1_Disk1
actual_size  : 1073741824
alias: vm1_Disk1
disk_profile-id  : 1731f79a-5034-4270-9a87-94d93025deac
format   : cow
image_id : 7b354e2a-2099-4f2a-80b7-fba7d1fd13ee
propagate_errors : False
provisioned_size : 17179869184
shareable: False
size : 17179869184
sparse   : True
status-state : ok
storage_domains-storage_domain-id: 384f9059-ef2f-4d43-a54f-de71c5d589c8
storage_type : image
wipe_after_delete: False

[root@ovc1mgr ~]# su - postgres
Last login: Fri Oct 14 01:02:14 CEST 2016
-bash-4.2$ psql -d engine -U postgres
psql (9.2.15)
Type "help" for help.

engine=# \x on
engine=# select * from images where image_group_id =
'8cda0aa6-9e25-4b50-ba00-b877232a1983' order by creation_date;

-[ RECORD 1 ]-+-
image_guid| 60ba7acf-58cb-475b-b9ee-15b1be99fee6
creation_date | 2016-03-29 15:12:34+02
size  | 17179869184
it_guid   | ----
parentid  | ----
imagestatus   | 4
lastmodified  | 2016-04-21 11:25:59.972+02
vm_snapshot_id| 27c187cd-989f-4f7a-ac05-49c4410de6c2
volume_type   | 1
volume_format | 5
image_group_id| 8cda0aa6-9e25-4b50-ba00-b877232a1983
_create_date  | 2016-03-29 15:12:31.994065+02
_update_date  | 2016-09-04 01:10:08.773649+02
active| f
volume_classification | 1

-[ RECORD 2 ]-+-
image_guid| 68d764ec-bc2e-4e1d-b8f2-b44afd9fcb2e
creation_date | 2016-07-03 01:01:30+02
size  | 17179869184
it_guid   | ----
parentid  | 60ba7acf-58cb-475b-b9ee-15b1be99fee6
imagestatus   | 1
lastmodified  | 2016-07-04 01:03:33.732+02
vm_snapshot_id| 175c2071-a06b-4b0e-a069-5cc4bb236a34
volume_type   | 2
volume_format | 4
image_group_id| 8cda0aa6-9e25-4b50-ba00-b877232a1983
_create_date  | 2016-07-03 01:01:15.069585+02
_update_date  | 2016-09-11 02:06:04.420965+02
active| f
volume_classification | 1

-[ RECORD 3 ]-+-
image_guid| 37ca6494-e990-44e5-8597-28845a0a19b5
creation_date | 2016-08-07 01:06:15+02
size  | 17179869184
it_guid   | ----
parentid  | 68d764ec-bc2e-4e1d-b8f2-b44afd9fcb2e
imagestatus   | 1
lastmodified  | 2016-08-08 01:00:03.778+02
vm_snapshot_id| 4c0e5ac0-2ef3-4996-b3e9-7fd566d97b1a
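
For completeness, the full parent chain of a disk can be reconstructed
from the same images table with a recursive query. This is only a
sketch, assuming the columns shown above; the all-zero UUID (which the
list archive renders as '----' in the records above) marks a volume
with no parent:

WITH RECURSIVE chain AS (
    -- start from the base volume of this disk (no parent)
    SELECT image_guid, parentid, 0 AS depth
      FROM images
     WHERE image_group_id = '8cda0aa6-9e25-4b50-ba00-b877232a1983'
       AND parentid = '00000000-0000-0000-0000-000000000000'
    UNION ALL
    -- then follow volumes whose parentid points at the previous level
    SELECT i.image_guid, i.parentid, c.depth + 1
      FROM images i
      JOIN chain c ON i.parentid = c.image_guid
     WHERE i.image_group_id = '8cda0aa6-9e25-4b50-ba00-b877232a1983'
) SELECT image_guid, parentid, depth FROM chain ORDER BY depth;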

Re: [ovirt-users] 'Image does not exist in domain' while moving disks

2016-07-05 Thread Dael Maselli


I first upgraded the manager to the latest 3.6, but nothing changed.

It seems that upgrading the SPM host is what fixed the issue.

SPM node (CentOS Linux release 7.2.1511) packages:
ovirt-vmconsole-1.0.2-1.el7.centos.noarch
ovirt-release36-3.6.7-1.noarch
vdsm-jsonrpc-4.17.32-0.el7.centos.noarch
vdsm-4.17.32-0.el7.centos.noarch
vdsm-python-4.17.32-0.el7.centos.noarch
vdsm-hook-vmfex-dev-4.17.32-0.el7.centos.noarch
vdsm-xmlrpc-4.17.32-0.el7.centos.noarch
vdsm-yajsonrpc-4.17.32-0.el7.centos.noarch
vdsm-cli-4.17.32-0.el7.centos.noarch
vdsm-infra-4.17.32-0.el7.centos.noarch

Regards,
  Dael.



On 04/07/16 12:36, Dael Maselli wrote:

Hi,

I'm trying to move disks between storage domains but I get this error:

"VDSM command failed: Image does not exist in domain: 
u'image=6bf41b4e-3184-40c1-9db0-e304b39f34d0, 
domain=ae2ab96e-c2bf-44bb-beba-6c4d7cb9f5cc'"


I tried live and after shutting down the VMs. I also can't take
snapshots of the same VMs. It happens with a lot of VMs, but not all.


Here is the log on the SPM node:

e56e0346-7f23-4b40-bc9c-06bc8d32a4b4::ERROR::2016-07-04 
11:48:47,279::blockVolume::459::Storage.Volume::(validateImagePath) 
Unexpected error
e56e0346-7f23-4b40-bc9c-06bc8d32a4b4::ERROR::2016-07-04 
11:48:47,280::task::866::Storage.TaskManager.Task::(_setError) 
Task=`e56e0346-7f23-4b40-bc9c-06bc8d32a4b4`::Unexpected error
jsonrpc.Executor/7::ERROR::2016-07-04 
11:48:53,310::hsm::1510::Storage.HSM::(deleteImage) Empty or not found 
image 6bf41b4e-3184-40c1-9db0-e304b39f34d0 in SD 
ae2ab96e-c2bf-44bb-beba-6c4d7cb9f5cc. 
{'6bd52f1c-d623-4d68-b3c6-a870c1daa9ce': 
ImgsPar(imgs=['c3b39b18-bb73-4743-9dec-719ed781b7d1'], 
parent='----'), 
'2fb324d2-da23-4a67-998f-8078b0c2b391': 
ImgsPar(imgs=['d7059fd4-f45b-4f1a-bf48-67b2a6f26994'], 
parent='----'), 
'1d971362-911b-43db-bed7-2871ea13807b': 
ImgsPar(imgs=['1cedd1ef-1657-48bd-9da5-bf2989511f6d'], 
parent='6a568c44-5280-4e96-b2b4-3fc7c1905d25'), 
'a3dcc1ac-8683-4f31-9160-eb892ddb8a4f': 
ImgsPar(imgs=['fab23a76-7b8f-4e3c-b2ff-5d9525e2d173'], 
parent='----'), 
'd211f70c-3df0-44a1-a792-2d638c6c139d': 
ImgsPar(imgs=['bb715d7f-2c66-4438-a222-69d0c1e4858b'], 
parent='----'), 
'0c24c0e1-03e9-4743-85c7-d10607d92735': 
ImgsPar(imgs=['1cedd1ef-1657-48bd-9da5-bf2989511f6d'], 
parent='0f27ed74-1193-4f37-8886-6679b2e6f230'), 
'1ec37fac-8894-4608-b14f-7236f365e6c3': 
ImgsPar(imgs=['9b6c698f-e0a2-4b56-b6b4-498f19aaaed0'], 
parent='----'), 
'f3aa49d1-1760-4375-b1d8-396e618c0c8f': 
ImgsPar(imgs=['521febb4-db23-4f00-b592-bed7cb90e6be'], 
parent='----'), 
'ca333dfc-6cd2-45f9-b7a2-9a546a9cfaad': 
ImgsPar(imgs=['3ea3326e-4a60-4d8c-b40d-7e448e54cc54'], 
parent='7716f924-09a1-4f9a-841d-be0104aa4b66'), 
'ca193146-12b5-4530-8d64-d4597a7775dd': 
ImgsPar(imgs=['1cedd1ef-1657-48bd-9da5-bf2989511f6d'], 
parent='3142ec55-1809-4f35-adf3-dcea037d2432'), 
'd41061ab-b8d2-45bf-9778-0ce8658d673c': 
ImgsPar(imgs=['65cdb44f-01b6-43b8-8abc-534d007b6b1e'], 
parent='----'), 
'45e3590c-5e4f-4baf-a809-e71707887c2a': 
ImgsPar(imgs=['c7275aaa-df06-4218-b125-6acb895df6e8'], 
parent='----'), 
'19355639-35fa-4ea9-b61a-3cbfb3407a17': 
ImgsPar(imgs=['204831e3-1664-4818-a7b8-8b3f0a4942c8'], 
parent='----'), 
'97fef998-7297-4946-9d2b-fd6cbb20f666': 
ImgsPar(imgs=['1cedd1ef-1657-48bd-9da5-bf2989511f6d'], 
parent='1d971362-911b-43db-bed7-2871ea13807b'), 
'd6f6d76c-964b-45fc-9d6c-c7ca21fa464a': 
ImgsPar(imgs=['3ea3326e-4a60-4d8c-b40d-7e448e54cc54'], 
parent='fe5443cb-7789-42b2-be1d-8938a1f30bc3'), 
'5ccce161-0bc4-447e-8873-728c9757b2e8': 
ImgsPar(imgs=['3086ac4c-f0e1-4c2e-a186-8993dfb9ad8c'], 
parent='----'), 
'b8b2ad19-99e1-4697-8ba2-89ef9135496e': 
ImgsPar(imgs=['d74a71dc-133e-46ba-90c3-038764006fc5'], 
parent='----'), 
'cf9d937f-f306-4147-946a-4cdc7fcbcd6b': 
ImgsPar(imgs=['12037b0b-13e1-44a1-ba39-e397060e7598'], 
parent='----'), 
'746face0-5da3-48bb-a373-0823158559a5': 
ImgsPar(imgs=['204831e3-1664-4818-a7b8-8b3f0a4942c8'], 
parent='b4e3c7ad-89db-49ba-a04c-3b47fc9c80a7'), 
'6bf74f7f-8e6a-4156-bcc3-db9c207d2421': 
ImgsPar(imgs=['1cedd1ef-1657-48bd-9da5-bf2989511f6d'], 
parent='97fef998-7297-4946-9d2b-fd6cbb20f666'), 
'f4ce6e1c-8c52-484b-a5d1-964873b0051b': 
ImgsPar(imgs=['728c6308-7090-45a9-869f-9a9cb51d3bf5'], 
parent='----'), 
'04a20a84-dd0e-421f-b47b-5136733a69fb': 
ImgsPar(imgs=['07d3b9a5-bece-458e-9e58-7c72593c91d5'], 
parent='----'), 
'2c60571d-46e2-438a-8492-3c34b2a86df4': 
ImgsPar(imgs=['3ea3326e-4a60-4d8c-b40d-7e448e54cc54'], 
parent='1464fa93-ba8a-4ba8-a09f-4686cd1d4437'), 
'b4e3c7ad-89db-49ba-a04c-3b47fc9c80a7': 
ImgsPar(imgs=['204831e3-1664
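
(As a cross-check: on a block domain the VG is named after the
storage-domain UUID and vdsm tags every volume LV with
IU_<image-group-id>, so the SPM can be asked directly whether the image
group from the error still exists on storage. A sketch, assuming an
FC/iSCSI domain:)

  # list every LV in the storage-domain VG with its tags, then look
  # for the image group the engine complains about
  lvs -o lv_name,lv_tags ae2ab96e-c2bf-44bb-beba-6c4d7cb9f5cc | grep IU_6bf41b4e-3184-40c1-9db0-e304b39f34d0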

[ovirt-users] 'Image does not exist in domain' while moving disks

2016-07-04 Thread Dael Maselli
cted error
jsonrpc.Executor/7::ERROR::2016-07-04 
11:48:53,315::dispatcher::76::Storage.Dispatcher::(wrapper) {'status': 
{'message': "Image does not exist in domain: 
u'image=6bf41b4e-3184-40c1-9db0-e304b39f34d0, 
domain=ae2ab96e-c2bf-44bb-beba-6c4d7cb9f5cc'", 'code': 268}}


Thank you so much.

Dael.


--
_______

Dael Maselli  ---  INFN-LNF Computing Service  --  +39.06.9403.2214
___

 * http://www.lnf.infn.it/~dmaselli/ *
___

Democracy is two wolves and a lamb voting on what to have for lunch
___

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Can't start VMs (Unable to get volume size for domain)

2016-04-05 Thread Dael Maselli

I have version 3.6.2.6-1.el7.centos, no RC.


On 05/04/16 16:35, Dael Maselli wrote:


I have the same problem and lost 2 VMs!!! Any update on this?

Thank you very much!

Regards,
Dael Maselli.


On 04/01/16 18:06, Justin Foreman wrote:

I’m running 3.6.2 rc1 with hosted engine on an FCP storage domain.

As of yesterday, I can’t run some VMs. I’ve experienced corruption on
others (I now have a Windows VM that blue-screens on boot).


Here’s the log from my engine.

2016-01-04 16:55:39,446 INFO [org.ovirt.engine.core.bll.RunVmCommand] 
(default task-16) [1f1deb62] Lock Acquired to object 
'EngineLock:{exclusiveLocks='[3a17534b-e86d-4563-8ca2-2a27c34b4a87=<VM, 
ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
2016-01-04 16:55:39,479 INFO 
[org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] 
(default task-16) [1f1deb62] START, IsVmDuringInitiatingVDSCommand( 
IsVmDuringInitiatingVDSCommandParameters:{runAsync='true', 
vmId='3a17534b-e86d-4563-8ca2-2a27c34b4a87'}), log id: 299a5052
2016-01-04 16:55:39,479 INFO 
[org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] 
(default task-16) [1f1deb62] FINISH, IsVmDuringInitiatingVDSCommand, 
return: false, log id: 299a5052
2016-01-04 16:55:39,517 INFO [org.ovirt.engine.core.bll.RunVmCommand] 
(org.ovirt.thread.pool-8-thread-40) [1f1deb62] Running command: 
RunVmCommand internal: false. Entities affected :  ID: 
3a17534b-e86d-4563-8ca2-2a27c34b4a87 Type: VMAction group RUN_VM with 
role type USER
2016-01-04 16:55:39,579 INFO 
[org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] 
(org.ovirt.thread.pool-8-thread-40) [1f1deb62] START, 
UpdateVmDynamicDataVDSCommand( 
UpdateVmDynamicDataVDSCommandParameters:{runAsync='true', 
hostId='null', vmId='----', 
vmDynamic='org.ovirt.engine.core.common.businessentities.VmDynamic@dadddaa9'}), 
log id: 6574710a
2016-01-04 16:55:39,582 INFO 
[org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] 
(org.ovirt.thread.pool-8-thread-40) [1f1deb62] FINISH, 
UpdateVmDynamicDataVDSCommand, log id: 6574710a
2016-01-04 16:55:39,585 INFO 
[org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] 
(org.ovirt.thread.pool-8-thread-40) [1f1deb62] START, 
CreateVmVDSCommand( CreateVmVDSCommandParameters:{runAsync='true', 
hostId='2fe6c27b-9346-4678-8cd3-c9d367ec447f', 
vmId='3a17534b-e86d-4563-8ca2-2a27c34b4a87', vm='VM [adm1]'}), log 
id: 55e0849d
2016-01-04 16:55:39,586 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] 
(org.ovirt.thread.pool-8-thread-40) [1f1deb62] START, 
CreateVDSCommand(HostName = ov-101, 
CreateVmVDSCommandParameters:{runAsync='true', 
hostId='2fe6c27b-9346-4678-8cd3-c9d367ec447f', 
vmId='3a17534b-e86d-4563-8ca2-2a27c34b4a87', vm='VM [adm1]'}), log 
id: 1d5c1c04
2016-01-04 16:55:39,589 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.VmInfoBuilderBase] 
(org.ovirt.thread.pool-8-thread-40) [1f1deb62] Bootable disk 
'9e43c66a-5bf1-44d6-94f4-52178d15c1e6' set to index '0'
2016-01-04 16:55:39,600 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] 
(org.ovirt.thread.pool-8-thread-40) [1f1deb62] 
org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand 
pitReinjection=false,memGuaranteedSize=4054,smpThreadsPerCore=1,cpuType=SandyBridge,vmId=3a17534b-e86d-4563-8ca2-2a27c34b4a87,acpiEnable=true,numaTune={nodeset=0,1, 
mode=interleave},tabletEnable=true,spiceSecureChannels=smain,sinputs,scursor,splayback,srecord,sdisplay,susbredir,ssmartcard,vmType=kvm,keyboardLayout=en-us,smp=1,smpCoresPerSocket=1,emulatedMachine=pc-i440fx-rhel7.2.0,smartcardEnable=false,guestNumaNodes=[{memory=4054, 
cpus=0, 
nodeIndex=0}],transparentHugePages=true,vmName=adm1,maxVCpus=16,kvmEnable=true,devices=[{address={bus=0x00, 
domain=0x, function=0x0, slot=0x02, type=pci}, type=video, 
specParams={heads=1, vram=32768}, device=cirrus, 
deviceId=645e99e3-a9fa-4894-baf5-97b539236782}, {type=graphics, 
specParams={}, device=vnc, 
deviceId=12845c03-16a3-4bf0-a015-a15201a77673}, {iface=ide, 
shared=false, path=, address={bus=1, controller=0, unit=0, 
type=drive, target=0}, readonly=true, index=2, type=disk, 
specParams={path=}, device=cdrom, 
deviceId=ab048396-5dd8-4594-aa8a-9fe835a04cd1}, {shared=false, 
address={bus=0, controller=0, unit=0, type=drive, target=0}, 
imageID=9e43c66a-5bf1-44d6-94f4-52178d15c1e6, format=raw, index=0, 
optional=false, type=disk, 
deviceId=9e43c66a-5bf1-44d6-94f4-52178d15c1e6, 
domainID=1fb79d91-b245-4447-91e0-e57671152a8c, propagateErrors=off, 
iface=ide, readonly=false, bootOrder=1, 
poolID=0001-0001-0001-0001-0154, 
volumeID=c736baca-de76-4593-b3dc-28bb8807e7a3, specParams={}, 
device=disk}, {shared=false, address={bus=0, controller=0, unit=1, 
type=drive, target=0}, imageID=a016b350-87ef-4c3b-b150-024907fed9c0, 
format=raw, optional=false, type=disk, 
deviceId=a016b350-87ef-4c3b-b150-024907fed9c0, 
domainID=1fb79d91-b245-4447-91e0-e57671152a8c, propagateErrors=off, 
iface=ide, readonly

[ovirt-users] Move cluster between datacenters

2015-11-24 Thread Dael Maselli

Hello,

I have an environment with two data centers with some clusters in each 
one. Each cluster has one or more dedicated FC storage domains.


We have some management problems, because adding storage to one cluster
means adding it to all nodes of all clusters in the same data center; I
find this a little overkill and think it should be managed the way
networks are.


Anyway, to resolve our problems we would like to create a new data
center and move a cluster from the old data center to the new one,
obviously together with its own storage domain.


Is there a way to do this without exporting/importing all the VMs?

Thank you.

Regards,
Dael Maselli.


Re: [ovirt-users] Move/Migrate Storage Domain to new devices

2015-04-22 Thread Dael Maselli

OK, I copied the templates and the migration almost worked.

Anyway, there are some disks that fail migration; here is what I found
in vdsm.log on the SPM:


c061a252-0611-4e25-b9eb-8540e01dcfec::ERROR::2015-04-22 
09:36:42,063::volume::409::Storage.Volume::(create) Unexpected error

Traceback (most recent call last):
  File "/usr/share/vdsm/storage/volume.py", line 399, in create
  File "/usr/share/vdsm/storage/volume.py", line 302, in share
CannotShareVolume: Cannot share volume:
'src=/rhev/data-center/134985e2-4885-4b24-85b0-3b51365d66c7/2f4e7ec2-2865-4956-b4ee-735a9d46eb67/images/98cecc63-0816-4e64-8dfb-cadad6af8aae/c10aebff-20b3-4ccc-b166-c2be47dc5af0,
dst=/rhev/data-center/134985e2-4885-4b24-85b0-3b51365d66c7/2f4e7ec2-2865-4956-b4ee-735a9d46eb67/images/530d6820-98b1-4e95-8470-4cefa5ab351c/c10aebff-20b3-4ccc-b166-c2be47dc5af0:
[Errno 17] File exists'
c061a252-0611-4e25-b9eb-8540e01dcfec::ERROR::2015-04-22 
09:36:42,071::image::401::Storage.Image::(_createTargetImage) Unexpected 
error

Traceback (most recent call last):
  File "/usr/share/vdsm/storage/image.py", line 384, in _createTargetImage
  File "/usr/share/vdsm/storage/sd.py", line 430, in createVolume
  File "/usr/share/vdsm/storage/volume.py", line 412, in create
VolumeCannotGetParent: Cannot get parent volume: (Couldn't get parent
c10aebff-20b3-4ccc-b166-c2be47dc5af0 for volume
d694d919-a0b0-4b89-b6cd-d5c02748f9de: Cannot share volume:
'src=/rhev/data-center/134985e2-4885-4b24-85b0-3b51365d66c7/2f4e7ec2-2865-4956-b4ee-735a9d46eb67/images/98cecc63-0816-4e64-8dfb-cadad6af8aae/c10aebff-20b3-4ccc-b166-c2be47dc5af0,
dst=/rhev/data-center/134985e2-4885-4b24-85b0-3b51365d66c7/2f4e7ec2-2865-4956-b4ee-735a9d46eb67/images/530d6820-98b1-4e95-8470-4cefa5ab351c/c10aebff-20b3-4ccc-b166-c2be47dc5af0:
[Errno 17] File exists',)
c061a252-0611-4e25-b9eb-8540e01dcfec::ERROR::2015-04-22 
09:36:44,174::task::866::Storage.TaskManager.Task::(_setError) 
Task=`c061a252-0611-4e25-b9eb-8540e01dcfec`::Unexpected error

Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 873, in _run
  File "/usr/share/vdsm/storage/task.py", line 334, in run
  File "/usr/share/vdsm/storage/securable.py", line 77, in wrapper
  File "/usr/share/vdsm/storage/sp.py", line 1553, in moveImage
  File "/usr/share/vdsm/storage/image.py", line 499, in move
  File "/usr/share/vdsm/storage/image.py", line 384, in _createTargetImage
  File "/usr/share/vdsm/storage/sd.py", line 430, in createVolume
  File "/usr/share/vdsm/storage/volume.py", line 412, in create
VolumeCannotGetParent: Cannot get parent volume: (Couldn't get parent
c10aebff-20b3-4ccc-b166-c2be47dc5af0 for volume
d694d919-a0b0-4b89-b6cd-d5c02748f9de: Cannot share volume:
'src=/rhev/data-center/134985e2-4885-4b24-85b0-3b51365d66c7/2f4e7ec2-2865-4956-b4ee-735a9d46eb67/images/98cecc63-0816-4e64-8dfb-cadad6af8aae/c10aebff-20b3-4ccc-b166-c2be47dc5af0,
dst=/rhev/data-center/134985e2-4885-4b24-85b0-3b51365d66c7/2f4e7ec2-2865-4956-b4ee-735a9d46eb67/images/530d6820-98b1-4e95-8470-4cefa5ab351c/c10aebff-20b3-4ccc-b166-c2be47dc5af0:
[Errno 17] File exists',)
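
(The [Errno 17] File exists in the share step might simply mean a
leftover destination volume from an earlier failed attempt; before
retrying, the dst path from the traceback can be checked on the SPM:)

  # does the destination image directory already contain the volume?
  ls -l /rhev/data-center/134985e2-4885-4b24-85b0-3b51365d66c7/2f4e7ec2-2865-4956-b4ee-735a9d46eb67/images/530d6820-98b1-4e95-8470-4cefa5ab351c/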


Thank you.


Regards,
 Dael Maselli.



On 21/04/15 09:42, Aharon Canan wrote:


Hi

Did you try to copy the template to the new storage domain?
Under Template tab - Disks sub-tab - copy




Regards,
Aharon Canan




From: Dael Maselli <dael.mase...@lnf.infn.it>
To: users@ovirt.org
Sent: Monday, April 20, 2015 5:48:03 PM
Subject: [ovirt-users] Move/Migrate Storage Domain to new devices

Hi,

I have a data storage domain that uses one FC LUN. I need to move all
data to a new storage server.

I tried moving single disks to a new storage domain, but some cannot
be moved, I think because they are thin-cloned from a template.

When I worked with plain LVM I used to do a simple pvmove, leaving
the VG intact; is there something similar (online or in maintenance)
in oVirt? Can I just do a pvmove from the SPM host, or is it going
to destroy everything?
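
(For comparison, the plain-LVM version of such a migration is the
usual extend/move/reduce sequence, sketched below with placeholder
device names; note that oVirt also records the domain's LUN IDs in
the engine database, so doing this behind oVirt's back is probably
not safe:)

  pvcreate /dev/mapper/new-lun                     # prepare the new LUN as a PV
  vgextend storage-domain-vg /dev/mapper/new-lun   # add it to the domain's VG
  pvmove /dev/mapper/old-lun                       # migrate all extents online
  vgreduce storage-domain-vg /dev/mapper/old-lun   # then drop the old LUN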

Thank you very much.

Regards,
Dael Maselli.



Re: [Users] oVirt RPM for RHEL Co.

2012-06-08 Thread Dael Maselli

On 07/06/12 13.54, Andrew Cathrow wrote:

  Do you plan to officially release oVirt for Red Hat and derivatives?

We do want to package oVirt 3.1 for EL6; however, it comes behind Fedora
in terms of priority, but it's something we certainly want to make happen.
There's extra work to do in addition to compiling and packaging oVirt; for
example, we need JBoss AS7 packaged in RPMs for EL6.


This was the only answer I wanted. Thank you.

The rest is not so important now. I repeat that what I want is
something like 389ds. When I said "officially" it was not a legal
matter, obviously.

I thank all of you very much for your work. I never meant to start a
polemic; I just want to see oVirt in EPEL ;-)


Best regards,
Dael Maselli.



Re: [Users] oVirt RPM for RHEL Co.

2012-06-06 Thread Dael Maselli


Could you please answer this question?

Thank you.

Dael Maselli.


On 27/02/12 12.04, Dael Maselli wrote:

Hi,

I was impatiently waiting for the first release; when it came out, I
immediately downloaded the Installation Guide and read:

The packages provided via this mechanism are expected to work for
users of Fedora, Red Hat Enterprise Linux, and other Enterprise Linux
derivatives.

I have Scientific Linux and/or CentOS, but I can't find the RPMs for
these systems (version 6.2).


Do you plan to build and release for those OSes as well?

Thank you.

Dael Maselli.

