[ovirt-users] Re: Multiple "Active VM before the preview" snapshots

2019-03-31 Thread Bruckner, Simone
Dear all,

  does anyone have an idea how to address this?

Thank you and all the best,
Simone

-----Original Message-----
From: Bruckner, Simone 
Sent: Wednesday, 27 March 2019 13:25
To: users@ovirt.org
Subject: [ovirt-users] Multiple "Active VM before the preview" snapshots

Hi,

  we see some VMs that show an inconsistent view of snapshots. Checking the
database for one example VM shows the following result:

engine=# select snapshot_id, status, description from snapshots where vm_id = '40c0f334-dac5-42ad-8040-e2d2193c73c0';
             snapshot_id              |   status   |         description
--------------------------------------+------------+------------------------------
 b77f5752-f1a4-454f-bcde-6afd6897e047 | OK         | Active VM
 059d262a-6cc4-4d35-b1d4-62ef7fe28d67 | OK         | Active VM before the preview
 d22e4f74-6521-45d5-8e09-332c05194ec3 | OK         | Active VM before the preview
 87d39245-bedf-4cf1-a2a6-4228176091d3 | IN_PREVIEW | base
(4 rows)

We cannot perform any snapshot, clone, or copy operations on those VMs. Is there a
way to get this cleared?
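
For triage, a query in the same style (same snapshots table as above; just a sketch, not verified against your schema version) lists every VM carrying more than one such row:

engine=# select vm_id, count(*) from snapshots
         where description = 'Active VM before the preview'
         group by vm_id having count(*) > 1;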

All the best,
Simone
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/W5S3HOIIMB43A6KYK37BHQOM2IIDKZ5V/


[ovirt-users] Multiple "Active VM before the preview" snapshots

2019-03-27 Thread Bruckner, Simone
Hi,

  we see some VMs that show an inconsistent view of snapshots. Checking the
database for one example VM shows the following result:

engine=# select snapshot_id, status, description from snapshots where vm_id = '40c0f334-dac5-42ad-8040-e2d2193c73c0';
             snapshot_id              |   status   |         description
--------------------------------------+------------+------------------------------
 b77f5752-f1a4-454f-bcde-6afd6897e047 | OK         | Active VM
 059d262a-6cc4-4d35-b1d4-62ef7fe28d67 | OK         | Active VM before the preview
 d22e4f74-6521-45d5-8e09-332c05194ec3 | OK         | Active VM before the preview
 87d39245-bedf-4cf1-a2a6-4228176091d3 | IN_PREVIEW | base
(4 rows)

We cannot perform any snapshot, clone, or copy operations on those VMs. Is there a
way to get this cleared?

All the best,
Simone
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZE7ZQRXI5VPQYZFJK7DTQDCQIGPLKY5K/


[ovirt-users] Re: VM stuck in "Migrating to"

2018-07-15 Thread Bruckner, Simone
Hi,

 worked!

Thank you very much and all the best,
Simone

-----Original Message-----
From: Shmuel Melamud 
Sent: Sunday, 15 July 2018 13:31
To: Bruckner, Simone 
Cc: users@ovirt.org
Subject: Re: [ovirt-users] VM stuck in "Migrating to"

Hi!

As I understand, the VM is already down. You can fix this directly in the
database. Connect to it with the psql tool and do the following:

select vm_guid from vms where vm_name='Name of your VM';

You'll get the VM ID. And then:

update vm_dynamic set status=0 where vm_guid='VM ID';
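
A minimal end-to-end sketch of the same fix (assuming shell access on the engine host and the usual local postgres superuser; in the engine schema, status 0 corresponds to Down):

# connect to the engine database (connection details vary by setup)
su - postgres -c "psql engine"

engine=# BEGIN;
engine=# update vm_dynamic set status=0 where vm_guid='VM ID';  -- 0 = Down
engine=# -- expect "UPDATE 1" before making the change permanent
engine=# COMMIT;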

On Wed, Jul 11, 2018 at 11:17 PM, Bruckner, Simone 
 wrote:
> Hi all,
>
>
>
>   I have a VM stuck in state "Migrating to". I restarted ovirt-engine
> and rebooted all hosts, no success. I run ovirt 4.2.4.5-1.el7 on
> CentOS 7.5 hosts with vdsm-4.20.32-1.el7.x86_64. How can I clean this up?
>
>
>
> Thank you and all the best,
>
> Simone
>
>
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4EGZVVBOE7D7YYWYWYAWAKR52KIP5YIL/


[ovirt-users] Re: VM stuck in "Migrating to"

2018-07-15 Thread Bruckner, Simone
Hi,



  running engine-setup did not resolve the issue.



All the best,

Simone



From: Maton, Brett 
Sent: Sunday, 15 July 2018 13:59
To: Bruckner, Simone 
Cc: users@ovirt.org
Subject: Re: [ovirt-users] VM stuck in "Migrating to"



You could also run engine-setup on hosted-engine again



On 11 July 2018 at 21:17, Bruckner, Simone <simone.bruck...@fabasoft.com> wrote:

   Hi all,



 I have a VM stuck in state "Migrating to". I restarted ovirt-engine and
rebooted all hosts, no success. I run ovirt 4.2.4.5-1.el7 on CentOS 7.5 hosts 
with vdsm-4.20.32-1.el7.x86_64. How can I clean this up?



   Thank you and all the best,

   Simone







___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/55Q3MBEPBBRXHYQKYZBYBU55U7F3EI7G/


[ovirt-users] VM stuck in "Migrating to"

2018-07-11 Thread Bruckner, Simone
Hi all,



  I have a VM stuck in state "Migrating to". I restarted ovirt-engine and 
rebooted all hosts, no success. I run ovirt 4.2.4.5-1.el7 on CentOS 7.5 hosts 
with vdsm-4.20.32-1.el7.x86_64. How can I clean this up?



Thank you and all the best,

Simone



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4OPRY2GRYDZQJ724TLT5GHGGHB5NFU45/


[ovirt-users] Re: Failed to update VMs/Templates OVF data, cannot change SPM

2018-06-20 Thread Bruckner, Simone
Hi Nir,

  I identified the reason for the failing OVF updates on the initial VG – 
metadata was affected by blkdiscard tests in scope of 
https://bugzilla.redhat.com/show_bug.cgi?id=1562369

However, the OVF updates are failing on other installations as well (on 2 out 
of 40 storage domains). Here is the output of your commands:

# lvs -o vg_name,lv_name,tags | grep 3ad1987a-8b7d-426d-9d51-4a78cb0a888f
  3ad1987a-8b7d-426d-9d51-4a78cb0a888f 212d644c-0155-4999-9df9-72bacfc7f141 
IU_0ebefe5e-9053-4bf1-bdfd-fdb26579c179,MD_4,PU_----
  3ad1987a-8b7d-426d-9d51-4a78cb0a888f 94f519de-bc19-4557-82c4-86bbcfc5dd2f 
IU_60d9eec7-951f-4594-ae99-7d31318ee3a9,MD_5,PU_----
  3ad1987a-8b7d-426d-9d51-4a78cb0a888f ids
  3ad1987a-8b7d-426d-9d51-4a78cb0a888f inbox
  3ad1987a-8b7d-426d-9d51-4a78cb0a888f leases
  3ad1987a-8b7d-426d-9d51-4a78cb0a888f master
  3ad1987a-8b7d-426d-9d51-4a78cb0a888f metadata
  3ad1987a-8b7d-426d-9d51-4a78cb0a888f outbox
  3ad1987a-8b7d-426d-9d51-4a78cb0a888f xleases

# for i in 4 5; do
  dd if=/dev/3ad1987a-8b7d-426d-9d51-4a78cb0a888f/metadata bs=512 count=1 skip=$i of=metadata.$i
done
1+0 records in
1+0 records out
512 bytes (512 B) copied, 0.00121297 s, 422 kB/s
1+0 records in
1+0 records out
512 bytes (512 B) copied, 0.000735026 s, 697 kB/s

# file metadata.*
metadata.4: data
metadata.5: ASCII text

# cat metadata.5
DOMAIN=3ad1987a-8b7d-426d-9d51-4a78cb0a888f
CTIME=1520597691
FORMAT=RAW
DISKTYPE=OVFS
LEGALITY=LEGAL
SIZE=262144
VOLTYPE=LEAF
DESCRIPTION={"Updated":true,"Size":4669440,"Last Updated":"Fri Jun 08 09:51:18 
CEST 2018","Storage 
Domains":[{"uuid":"3ad1987a-8b7d-426d-9d51-4a78cb0a888f"}],"Disk 
Description":"OVF_STORE"}
IMAGE=60d9eec7-951f-4594-ae99-7d31318ee3a9
PUUID=----
MTIME=0
POOL_UUID=
TYPE=PREALLOCATED
GEN=0
EOF

# od -c metadata.4
000  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0
*
0001000
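
(The generic form of that slot read, as a sketch: the slot number N comes from the MD_<N> tag shown in the lvs output above, and each slot is one 512-byte block of the metadata LV.)

# read metadata slot N of the domain's metadata LV
N=4
dd if=/dev/3ad1987a-8b7d-426d-9d51-4a78cb0a888f/metadata bs=512 count=1 skip=$N 2>/dev/null | od -c | head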

vdsm.log from manual OVF update test:

2018-06-20 09:28:27,840+0200 INFO  (jsonrpc/7) [vdsm.api] START 
setVolumeDescription(sdUUID=u'3ad1987a-8b7d-426d-9d51-4a78cb0a888f', 
spUUID=u'5849b030-626e-47cb-ad90-3ce782d831b3', 
imgUUID=u'0ebefe5e-9053-4bf1-bdfd-fdb26579c179', 
volUUID=u'212d644c-0155-4999-9df9-72bacfc7f141', 
description=u'{"Updated":false,"Last Updated":"Fri Jun 08 09:51:18 CEST 
2018","Storage Domains":[{"uuid":"3ad1987a-8b7d-426d-9d51-4a78cb0a888f"}],"Disk 
Description":"OVF_STORE"}', options=None) from=:::,51790, 
flow_id=7e4edb74, task_id=5f1fda67-a073-419a-bba5-9bf680c0e5d5 (api:46)
2018-06-20 09:28:28,072+0200 WARN  (jsonrpc/7) [storage.ResourceManager] 
Resource factory failed to create resource 
'01_img_3ad1987a-8b7d-426d-9d51-4a78cb0a888f.0ebefe5e-9053-4bf1-bdfd-fdb26579c179'.
 Canceling request. (resourceManager:543)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/resourceManager.py", line 
539, in registerResource
obj = namespaceObj.factory.createResource(name, lockType)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/resourceFactories.py", 
line 193, in createResource
lockType)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/resourceFactories.py", 
line 122, in __getResourceCandidatesList
imgUUID=resourceName)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/image.py", line 206, in 
getChain
if len(uuidlist) == 1 and srcVol.isShared():
  File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line 1434, in 
isShared
return self._manifest.isShared()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line 141, in 
isShared
return self.getVolType() == sc.type2name(sc.SHARED_VOL)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line 134, in 
getVolType
self.voltype = self.getMetaParam(sc.VOLTYPE)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line 118, in 
getMetaParam
meta = self.getMetadata()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/blockVolume.py", line 
112, in getMetadata
md = VolumeMetadata.from_lines(lines)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/volumemetadata.py", line 
103, in from_lines
"Missing metadata key: %s: found: %s" % (e, md))
MetaDataKeyNotFoundError: Meta Data key not found error: ("Missing metadata 
key: 'DOMAIN': found: {}",)
2018-06-20 09:28:28,072+0200 WARN  (jsonrpc/7) 
[storage.ResourceManager.Request] 
(ResName='01_img_3ad1987a-8b7d-426d-9d51-4a78cb0a888f.0ebefe5e-9053-4bf1-bdfd-fdb26579c179',
 ReqID='10c95223-f349-4ac3-ab2f-7a5f3d1c7749') Tried to cancel a processed 
request (resourceManager:187)
2018-06-20 09:28:28,073+0200 INFO  (jsonrpc/7) [vdsm.api] FINISH 
setVolumeDescription error=Could not acquire resource. Probably resource 
factory threw an exception.: () from=:::,51790, flow_id=7e4edb74, 
task_id=5f1fda67-a073-419a-bba5-9bf680c0e5d5 (api:50)
2018-06-20 09:28:28,073+0200 ERROR (jsonrpc/7) [storage.TaskManager.Task] 

[ovirt-users] Re: Moving from thin to preallocated storage domains

2018-06-13 Thread Bruckner, Simone
Hi,

  I have defined thin LUNs on the array and presented them to the oVirt hosts. 
I will change the LUN from thin to preallocated on the array (which is 
transparent to the oVirt host).

Besides removing “discard after delete” from the storage domain flags, is there 
anything else I need to take care of on the oVirt side?

All the best,
Oliver

From: Benny Zlotnik 
Sent: Wednesday, 13 June 2018 17:32
To: Albl, Oliver 
Cc: users@ovirt.org
Subject: [ovirt-users] Re: Moving from thin to preallocated storage domains

Hi,

What do you mean by converting the LUN from thin to preallocated?
oVirt creates LVs on top of the LUNs you provide

On Wed, Jun 13, 2018 at 2:05 PM, Albl, Oliver <oliver.a...@fabasoft.com> wrote:
Hi all,

  I have to move some FC storage domains from thin to preallocated. I would set 
the storage domain to maintenance, convert the LUN from thin to preallocated on 
the array, remove “Discard After Delete” from the advanced settings of the 
storage domain and active it again. Is there anything else I need to take care 
of?

All the best,
Oliver



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DGBJCXJ6HYZ4LXNECE3RGLWGJ2CSU76V/


[ovirt-users] Rebooted host shows running vms

2018-04-18 Thread Bruckner, Simone
Hi all,

  we had an unexpected shutdown of one of our hypervisor nodes caused by a 
hardware problem. We ran "Confirm that the host has been rebooted" and as long
as the host is in maintenance mode, we see 0 VMs running. But when we activate
the host, it shows 14 VMs running. How can we get this cleaned up?

We run oVirt 4.2.1.7-1.el7.centos.

Thank you and all the best,
Simone Bruckner

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Cannot start VM after live storage migration - Bad volume specification

2018-03-19 Thread Bruckner, Simone
Hi,

  it seems that there is a broken chain - we see two "empty" parent_ids in the 
database:

engine=# SELECT b.disk_alias, s.description, s.snapshot_id, i.creation_date,
                s.status, i.imagestatus, i.size, i.parentid, i.image_group_id,
                i.vm_snapshot_id, i.image_guid, i.parentid, i.active
           FROM images AS i
           JOIN snapshots AS s ON (i.vm_snapshot_id = s.snapshot_id)
           LEFT JOIN vm_static AS v ON (s.vm_id = v.vm_guid)
           JOIN base_disks AS b ON (i.image_group_id = b.disk_id)
          WHERE v.vm_name = 'VMNAME' AND disk_alias = 'VMNAME_Disk2'
          ORDER BY creation_date, description, disk_alias;
-[ RECORD 1 ]--+--------------------------------------------------------
disk_alias     | VMNAME_Disk2
description    | tmp
snapshot_id    | 3920a4e1-fc3f-45e2-84d5-0d3f1b8ad608
creation_date  | 2018-01-28 10:09:37+01
status         | OK
imagestatus    | 1
size           | 1979979923456
parentid       | ----
image_group_id | c1a05108-90d7-421d-a9b4-d4cc65c48429
vm_snapshot_id | 3920a4e1-fc3f-45e2-84d5-0d3f1b8ad608
image_guid     | 946ee7b7-0770-49c9-ac76-0ce95a433d0f
parentid       | ----
active         | f
-[ RECORD 2 ]--+--------------------------------------------------------
disk_alias     | VMNAME_Disk2
description    | VMNAME_Disk2 Auto-generated for Live Storage Migration
snapshot_id    | 51f68304-e1a9-4400-aabc-8e3341d55fdc
creation_date  | 2018-03-16 15:07:35+01
status         | OK
imagestatus    | 1
size           | 1979979923456
parentid       | ----
image_group_id | c1a05108-90d7-421d-a9b4-d4cc65c48429
vm_snapshot_id | 51f68304-e1a9-4400-aabc-8e3341d55fdc
image_guid     | 4c6475b1-352a-4114-b647-505cccbe6663
parentid       | ----
active         | f
-[ RECORD 3 ]--+--------------------------------------------------------
disk_alias     | VMNAME_Disk2
description    | Active VM
snapshot_id    | d59a9f9d-f0dc-48ec-97e8-9e7a8b81d76d
creation_date  | 2018-03-18 20:54:23+01
status         | OK
imagestatus    | 1
size           | 1979979923456
parentid       | 946ee7b7-0770-49c9-ac76-0ce95a433d0f
image_group_id | c1a05108-90d7-421d-a9b4-d4cc65c48429
vm_snapshot_id | d59a9f9d-f0dc-48ec-97e8-9e7a8b81d76d
image_guid     | 4659b5e0-93c1-478d-97d0-ec1cf4052028
parentid       | 946ee7b7-0770-49c9-ac76-0ce95a433d0f
active         | t
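
A narrower query over the same images table (columns as in the query above; just a sketch) makes those parent links easier to eyeball:

engine=# SELECT creation_date, image_guid, parentid, active
           FROM images
          WHERE image_group_id = 'c1a05108-90d7-421d-a9b4-d4cc65c48429'
          ORDER BY creation_date;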

Is there a way to recover that disk?

All the best,
Simone

From: users-boun...@ovirt.org <users-boun...@ovirt.org> On behalf of Bruckner, Simone
Sent: Sunday, 18 March 2018 22:15
To: users@ovirt.org
Subject: [ovirt-users] Cannot start VM after live storage migration - Bad volume specification

Hi all,

  we did a live storage migration of one of three disks of a vm that failed 
because the vm became not responding when deleting the auto-snapshot:

2018-03-16 15:07:32.084+01 |0 | Snapshot 'VMNAME_Disk2 Auto-generated 
for Live Storage Migration' creation for VM 'VMNAME' was initiated by xxx
2018-03-16 15:07:32.097+01 |0 | User xxx moving disk VMNAME_Disk2 to 
domain VMHOST_LUN_211.
2018-03-16 15:08:56.304+01 |0 | Snapshot 'VMNAME_Disk2 Auto-generated 
for Live Storage Migration' creation for VM 'VMNAME' has been completed.
2018-03-16 16:40:48.89+01  |0 | Snapshot 'VMNAME_Disk2 Auto-generated 
for Live Storage Migration' deletion for VM 'VMNAME' was initiated by xxx.
2018-03-16 16:44:44.813+01 |1 | VM VMNAME is not responding.
2018-03-18 18:40:51.258+01 |2 | Failed to delete snapshot 'VMNAME_Disk2 
Auto-generated for Live Storage Migration' for VM 'VMNAME'.
2018-03-18 18:40:54.506+01 |1 | Possible failure while deleting 
VMNAME_Disk2 from the source Storage Domain VMHOST_LUN_211 during the move 
operation. The Storage Domain may be manually cleaned-up from possible leftovers (User:xxx).

Now we cannot start the vm anymore as long as this disk is online. Error 
message is "VM VMNAME is down with error. Exit message: Bad volume 
specification {'index': 2, 'domainID': 'ecc71a64-62c1-43f4-bf1f-3bc1b22c7a8a', 
'reqsize': '0', 'name': 'vdc', 'truesize': '2147483648', 'format': 'cow', 
'discard': False, 'volumeID': '4659b5e0-93c1-478d-97d0-ec1cf4052028', 
'apparentsize': '2147483648', 'imageID': 
'c1a05108-90d7-421d-a9b4-d4cc65c48429', 'specParams': {}, 'iface': 'virtio', 
'cache': 'none', 'propagateErrors': 'off', 'poolID': 
'5849b030-626e-47cb-ad90-3ce782d831b3', 'device': 'disk', 'path': 
'/rhev/data-center/mnt/blockSD/ecc71a64-62c1-43f4-bf1f-3bc1b22c7a8a/images/c1a05108-90d7-421d-a9b4-d4cc65c48429/4659b5e0-93c1-478d-97d0-ec1cf4052028',
 'serial': 'c1a05108-90d7-421d-a9b4-d4cc65c48429', 'diskType': 'block', 'type': 
'block'}."

vdsm.log:
2018-03-18 21:53:33,815+0100 ERROR (vm/7d05e511) [storage.TaskManager.Task] 
(Task='fc3bac16-64f3-4910-8bc4-6cfdd4d270da') 

[ovirt-users] Cannot start VM after live storage migration - Bad volume specification

2018-03-18 Thread Bruckner, Simone
Hi all,

  we did a live storage migration of one of three disks of a vm that failed 
because the vm became not responding when deleting the auto-snapshot:

2018-03-16 15:07:32.084+01 |0 | Snapshot 'VMNAME_Disk2 Auto-generated 
for Live Storage Migration' creation for VM 'VMNAME' was initiated by xxx
2018-03-16 15:07:32.097+01 |0 | User xxx moving disk VMNAME_Disk2 to 
domain VMHOST_LUN_211.
2018-03-16 15:08:56.304+01 |0 | Snapshot 'VMNAME_Disk2 Auto-generated 
for Live Storage Migration' creation for VM 'VMNAME' has been completed.
2018-03-16 16:40:48.89+01  |0 | Snapshot 'VMNAME_Disk2 Auto-generated 
for Live Storage Migration' deletion for VM 'VMNAME' was initiated by xxx.
2018-03-16 16:44:44.813+01 |1 | VM VMNAME is not responding.
2018-03-18 18:40:51.258+01 |2 | Failed to delete snapshot 'VMNAME_Disk2 
Auto-generated for Live Storage Migration' for VM 'VMNAME'.
2018-03-18 18:40:54.506+01 |1 | Possible failure while deleting 
VMNAME_Disk2 from the source Storage Domain VMHOST_LUN_211 during the move 
operation. The Storage Domain may be manually cleaned-up from possible leftovers (User:xxx).

Now we cannot start the vm anymore as long as this disk is online. Error 
message is "VM VMNAME is down with error. Exit message: Bad volume 
specification {'index': 2, 'domainID': 'ecc71a64-62c1-43f4-bf1f-3bc1b22c7a8a', 
'reqsize': '0', 'name': 'vdc', 'truesize': '2147483648', 'format': 'cow', 
'discard': False, 'volumeID': '4659b5e0-93c1-478d-97d0-ec1cf4052028', 
'apparentsize': '2147483648', 'imageID': 
'c1a05108-90d7-421d-a9b4-d4cc65c48429', 'specParams': {}, 'iface': 'virtio', 
'cache': 'none', 'propagateErrors': 'off', 'poolID': 
'5849b030-626e-47cb-ad90-3ce782d831b3', 'device': 'disk', 'path': 
'/rhev/data-center/mnt/blockSD/ecc71a64-62c1-43f4-bf1f-3bc1b22c7a8a/images/c1a05108-90d7-421d-a9b4-d4cc65c48429/4659b5e0-93c1-478d-97d0-ec1cf4052028',
 'serial': 'c1a05108-90d7-421d-a9b4-d4cc65c48429', 'diskType': 'block', 'type': 
'block'}."

vdsm.log:
2018-03-18 21:53:33,815+0100 ERROR (vm/7d05e511) [storage.TaskManager.Task] 
(Task='fc3bac16-64f3-4910-8bc4-6cfdd4d270da') Unexpected error (task:875)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in 
_run
return fn(*args, **kargs)
  File "", line 2, in prepareImage
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in method
ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 3179, in 
prepareImage
raise se.prepareIllegalVolumeError(volUUID)
prepareIllegalVolumeError: Cannot prepare illegal volume: 
('4c6475b1-352a-4114-b647-505cccbe6663',)
2018-03-18 21:53:33,816+0100 INFO  (vm/7d05e511) [storage.TaskManager.Task] 
(Task='fc3bac16-64f3-4910-8bc4-6cfdd4d270da') aborting: Task is aborted: 
"Cannot prepare illegal volume: ('4c6475b1-352a-4114-b647-505cccbe6663',)" - 
code 227 (task:1181)
2018-03-18 21:53:33,816+0100 ERROR (vm/7d05e511) [storage.Dispatcher] FINISH 
prepareImage error=Cannot prepare illegal volume: 
('4c6475b1-352a-4114-b647-505cccbe6663',) (dispatcher:82)
2018-03-18 21:53:33,816+0100 ERROR (vm/7d05e511) [virt.vm] 
(vmId='7d05e511-2e97-4002-bded-285ec4e30587') The vm start process failed 
(vm:927)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 856, in 
_startUnderlyingVm
self._run()
 File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2661, in _run
self._devices = self._make_devices()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2608, in 
_make_devices
self._preparePathsForDrives(disk_params)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 1001, in 
_preparePathsForDrives
drive['path'] = self.cif.prepareVolumePath(drive, self.id)
  File "/usr/lib/python2.7/site-packages/vdsm/clientIF.py", line 393, in 
prepareVolumePath
raise vm.VolumeError(drive)
VolumeError: Bad volume specification {'index': 2, 'domainID': 
'ecc71a64-62c1-43f4-bf1f-3bc1b22c7a8a', 'reqsize': '0', 'name': 'vdc', 
'truesize': '2147483648', 'format': 'cow', 'discard': False, 'volumeID': 
'4659b5e0-93c1-478d-97d0-ec1cf4052028', 'apparentsize': '2147483648', 
'imageID': 'c1a05108-90d7-421d-a9b4-d4cc65c48429', 'specParams': {}, 'iface': 
'virtio', 'cache': 'none', 'propagateErrors': 'off', 'poolID': 
'5849b030-626e-47cb-ad90-3ce782d831b3', 'device': 'disk', 'path': 
'/rhev/data-center/mnt/blockSD/ecc71a64-62c1-43f4-bf1f-3bc1b22c7a8a/images/c1a05108-90d7-421d-a9b4-d4cc65c48429/4659b5e0-93c1-478d-97d0-ec1cf4052028',
 'serial': 'c1a05108-90d7-421d-a9b4-d4cc65c48429', 'diskType': 'block', 'type': 
'block'}
2018-03-18 21:53:33,817+0100 INFO  (vm/7d05e511) [virt.vm] 
(vmId='7d05e511-2e97-4002-bded-285ec4e30587') Changed state to Down: Bad volume 
specification {'index': 2, 'domainID': 'ecc71a64-62c1-43f4-bf1f-3bc1b22c7a8a', 
'reqsize': '0', 'name': 'vdc', 'truesize': '2147483648', 

Re: [ovirt-users] Cannot activate storage domain

2018-03-11 Thread Bruckner, Simone
Shani,

  all storage domains are FC.

In the meantime I could track it down to corrupt vg metadata. I filed a bug 
(https://bugzilla.redhat.com/show_bug.cgi?id=1553133).

I ran vgcfgrestore, copied the VMs I could recover and recreated the LUNs. 
Would there have been another way of recovering?
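
(For reference, the vgcfgrestore route mentioned above usually goes through LVM's metadata archive; a sketch with placeholder VG names, assuming the archives under /etc/lvm/archive are intact:)

# list archived metadata versions for the VG, then restore a chosen one
vgcfgrestore --list <vg_name>
vgcfgrestore -f /etc/lvm/archive/<vg_name>_NNNNN.vg <vg_name>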

All the best,
Simone

From: Shani Leviim [mailto:slev...@redhat.com]
Sent: Sunday, 11 March 2018 14:09
To: Bruckner, Simone <simone.bruck...@fabasoft.com>
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Cannot activate storage domain

Hi Simone,
Sorry for the delay in replying to you.

Is the second storage domain you've mentioned also an FC domain?
If so, please execute the following command: /usr/libexec/vdsm/fc-scan -v
on one host of each inactive storage domain and share the results you've got.


Regards,
Shani Leviim

On Thu, Mar 8, 2018 at 9:42 AM, Bruckner, Simone <simone.bruck...@fabasoft.com> wrote:
Hi Shani,

  today I again lost access to a storage domain. So currently I have two 
storage domains that we cannot activate any more.

I uploaded the logfiles to our Cloud Service: logfiles.tar.gz
<https://at.cloud.fabasoft.com/folio/public/1zf8q45o1e9l8334agryb2crdd>
I lost access today, March 8th 2018 around 0.55am CET
I tried to activate the storage domain around 6.40am CET

Please let me know if there is anything I can do to get this addressed.

Thank you very much,
Simone



From: users-boun...@ovirt.org [users-boun...@ovirt.org] On behalf of Bruckner, Simone [simone.bruck...@fabasoft.com]
Sent: Tuesday, 06 March 2018 10:19
To: Shani Leviim
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Cannot activate storage domain

Hi Shani,

  please find the logs attached.

Thank you,
Simone

From: Shani Leviim [mailto:slev...@redhat.com]
Sent: Tuesday, 6 March 2018 09:48
To: Bruckner, Simone <simone.bruck...@fabasoft.com>
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Cannot activate storage domain

Hi Simone,
Can you please share your vdsm and engine logs?

Regards,
Shani Leviim

On Tue, Mar 6, 2018 at 7:34 AM, Bruckner, Simone <simone.bruck...@fabasoft.com> wrote:
Hello, I apologize for bringing this one up again, but does anybody know if
there is a chance to recover a storage domain that cannot be activated?

Thank you,
Simone

From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On behalf of Bruckner, Simone
Sent: Friday, 2 March 2018 17:03
To: users@ovirt.org
Subject: Re: [ovirt-users] Cannot activate storage domain

Hi all,

  I managed to get the inactive storage domain to maintenance by stopping all 
running VMs that were using it, but I am still not able to activate it.

Trying to activate results in the following events:

For each host:
VDSM  command GetVGInfoVDS failed: Volume Group does not exist: 
(u'vg_uuid: 813oRe-64r8-mloU-k9G2-LFsS-dXSG-hpN4kf',)

And finally:
VDSM command ActivateStorageDomainVDS failed: Storage domain does not exist: 
(u'b83c159c-4ad6-4613-ba16-bab95ccd10c0',)

Is there anything I can do to recover this storage domain?

Thank you and all the best,
Simone

From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On behalf of Bruckner, Simone
Sent: Thursday, 1 March 2018 17:57
To: users@ovirt.org
Subject: Re: [ovirt-users] Cannot activate storage domain

Hi,

  we are still struggling to get a storage domain online again. We tried to
put the storage domain in maintenance mode, which led to “Failed to update OVF
disks 809cc8d7-7687-46cf-a342-3be48674a9b3, OVF data isn't updated on those OVF 
stores”.

Trying again with ignoring OVF update failures put the storage domain in 
“preparing for maintenance”. We see the following message on all hosts: “Error 
releasing host id 26 for domain b83c159c-4ad6-4613-ba16-bab95ccd10c0 
(monitor:578)”.

Querying the storage domain using vdsm-client on the SPM resulted in
# vdsm-client StorageDomain getInfo "storagedomainID"="b83c159c-4ad6-4613-ba16-bab95ccd10c0"

Re: [ovirt-users] Faulty multipath only cleared with VDSM restart

2018-03-11 Thread Bruckner, Simone
Hi Fred,

  thank you for the explanation. I restarted VDSM and will monitor the 
behaviour.

Does that faulty multipath report have any side effects on stability and 
performance?

All the best,
Oliver

From: Fred Rolland [mailto:froll...@redhat.com]
Sent: Sunday, 11 March 2018 11:23
To: Bruckner, Simone <simone.bruck...@fabasoft.com>
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Faulty multipath only cleared with VDSM restart

Hi Simone,
The multipath health is built on VDSM start from the current multipath state, 
and after that it is maintained based on events sent by udev.
You can read about the implementation details in [1].
It seems that in your scenario, either udev did not send the needed clearing
events or Vdsm mishandled them.
Therefore only a restart of Vdsm will clear the report.
In order to be able to debug the issue, we will need Vdsm logs with debug level 
(on storage log) when the issue is happening.

Thanks,
Fred

[1] 
https://ovirt.org/develop/release-management/features/storage/multipath-events/

On Fri, Mar 9, 2018 at 1:07 PM, Bruckner, Simone <simone.bruck...@fabasoft.com> wrote:
Hi,

  after rebooting SAN switches we see faulty multipath entries in VDSM.

Running vdsm-client Host getStats shows multipathHealth entries

"multipathHealth": {
  "3600601603cc04500a2f9cd597080db0e": {
"valid_paths": 2,
"failed_paths": [
  "sdcl",
  "sdde"
]
  },
  …

Running multipath -ll does not show any errors.

After restarting VDSM, the multipathHealth entries from vdsm-client are empty
again.

Is there a way to clear those multipathHealth entries without restarting VDSM?

Thank you and all the best,
Simone



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Multiple "Active VM before the preview" snapshots

2018-03-10 Thread Bruckner, Simone
Hi,

  we see some VMs that show an inconsistent view of snapshots. Checking the
database for one example VM shows the following result:

select snapshot_id, status, description from snapshots where vm_id = '420a6445-df02-da6a-e4e3-ddc451b2914d';
             snapshot_id              |   status   |         description
--------------------------------------+------------+------------------------------
 602f6aa2-f9fa-4fc6-8349-a8afa1f55137 | OK         | Active VM before the preview
 12097e2a-7baf-497f-bd60-bfec7b111828 | IN_PREVIEW | base
 dd82844c-c46a-4010-9af3-bec836abea2c | OK         | Active VM
 e70a4715-0511-4ee4-b309-d24a54c56c2f | OK         | Active VM before the preview
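
For a pool-wide view, a query of the same shape (same snapshots table; just a sketch) surfaces every snapshot still stuck in preview:

select vm_id, snapshot_id, description from snapshots where status = 'IN_PREVIEW';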

We cannot perform any snapshot, clone, or copy operations on those VMs. Is there a
way to get this cleared?

All the best,
Simone

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Faulty multipath only cleared with VDSM restart

2018-03-09 Thread Bruckner, Simone
Hi,

  after rebooting SAN switches we see faulty multipath entries in VDSM.

Running vdsm-client Host getStats shows multipathHealth entries

"multipathHealth": {
  "3600601603cc04500a2f9cd597080db0e": {
"valid_paths": 2,
"failed_paths": [
  "sdcl",
  "sdde"
]
  },
  ...
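
A quick way to narrow that getStats output down to maps that still report failed paths (a sketch, assuming jq is installed and the stats print as plain JSON, as in the excerpt above):

vdsm-client Host getStats | jq '.multipathHealth | with_entries(select(.value.failed_paths | length > 0))'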

Running multipath -ll does not show any errors.

After restarting VDSM, the multipathHealth entries from vdsm-client are empty
again.

Is there a way to clear those multipathHealth entries without restarting VDSM?

Thank you and all the best,
Simone

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Cannot activate storage domain

2018-03-07 Thread Bruckner, Simone
Hi Shani,

  today I again lost access to a storage domain. So currently I have two 
storage domains that we cannot activate any more.

I uploaded the logfiles to our Cloud Service: logfiles.tar.gz
<https://at.cloud.fabasoft.com/folio/public/1zf8q45o1e9l8334agryb2crdd>
I lost access today, March 8th 2018 around 0.55am CET
I tried to activate the storage domain around 6.40am CET

Please let me know if there is anything I can do to get this addressed.

Thank you very much,
Simone



From: users-boun...@ovirt.org [users-boun...@ovirt.org] On behalf of Bruckner, Simone [simone.bruck...@fabasoft.com]
Sent: Tuesday, 06 March 2018 10:19
To: Shani Leviim
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Cannot activate storage domain

Hi Shani,

  please find the logs attached.

Thank you,
Simone

From: Shani Leviim [mailto:slev...@redhat.com]
Sent: Tuesday, 6 March 2018 09:48
To: Bruckner, Simone <simone.bruck...@fabasoft.com>
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Cannot activate storage domain

Hi Simone,
Can you please share your vdsm and engine logs?

Regards,
Shani Leviim

On Tue, Mar 6, 2018 at 7:34 AM, Bruckner, Simone <simone.bruck...@fabasoft.com> wrote:
Hello, I apologize for bringing this one up again, but does anybody know if
there is a chance to recover a storage domain that cannot be activated?

Thank you,
Simone

From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On behalf of Bruckner, Simone
Sent: Friday, 2 March 2018 17:03
To: users@ovirt.org
Subject: Re: [ovirt-users] Cannot activate storage domain

Hi all,

  I managed to get the inactive storage domain to maintenance by stopping all 
running VMs that were using it, but I am still not able to activate it.

Trying to activate results in the following events:

For each host:
VDSM  command GetVGInfoVDS failed: Volume Group does not exist: 
(u'vg_uuid: 813oRe-64r8-mloU-k9G2-LFsS-dXSG-hpN4kf',)

And finally:
VDSM command ActivateStorageDomainVDS failed: Storage domain does not exist: 
(u'b83c159c-4ad6-4613-ba16-bab95ccd10c0',)

Is there anything I can do to recover this storage domain?

Thank you and all the best,
Simone

From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On behalf of Bruckner, Simone
Sent: Thursday, 1 March 2018 17:57
To: users@ovirt.org
Subject: Re: [ovirt-users] Cannot activate storage domain

Hi,

  we are still struggling to get a storage domain online again. We tried to
put the storage domain in maintenance mode, which led to “Failed to update OVF
disks 809cc8d7-7687-46cf-a342-3be48674a9b3, OVF data isn't updated on those OVF 
stores”.

Trying again with ignoring OVF update failures put the storage domain in 
“preparing for maintenance”. We see the following message on all hosts: “Error 
releasing host id 26 for domain b83c159c-4ad6-4613-ba16-bab95ccd10c0 
(monitor:578)”.

Querying the storage domain using vdsm-client on the SPM resulted in
# vdsm-client StorageDomain getInfo 
"storagedomainID"="b83c159c-4ad6-4613-ba16-bab95ccd10c0"
vdsm-client: Command StorageDomain.getInfo with args {'storagedomainID': 
'b83c159c-4ad6-4613-ba16-bab95ccd10c0'} failed:
(code=358, message=Storage domain does not exist: 
(u'b83c159c-4ad6-4613-ba16-bab95ccd10c0',))
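
A useful cross-check on the SPM host is whether LVM still sees the domain's VG at all; on block domains vdsm names the VG after the storage domain UUID, so a sketch could be:

vgs --noheadings -o vg_name,vg_uuid | grep b83c159c-4ad6-4613-ba16-bab95ccd10c0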

Any ideas?

Thank you and all the best,
Simone

From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On behalf of Bruckner, Simone
Sent: Wednesday, 28 February 2018 15:52
To: users@ovirt.org
Subject: [ovirt-users] Cannot activate storage domain

Hi all,

  we run a small oVirt installation that we also use for automated testing
(automatically creating and dropping VMs).

We got an inactive FC storage domain that we cannot activate any more. We see 
several events at that time starting with:

VM perftest-c17 is down with error. Exit message: Unable to get volume size for 
domain b83c159c-4ad6-4613-ba16-bab95ccd10c0 volume 
686376c1-4be1-44c3-89a3-0a8addc8fdf2.

Trying to activate the storage domain results in the following alert event for
each host:

VDSM  command GetVGInfoVDS failed: Volume Group does not exist: 
(u'vg_uuid: 813oRe-64r8-mloU-k9G2-LFsS-dXSG-hpN4kf',)

And after those messages from all hosts we get:

VDSM command ActivateStorageDomainVDS failed: Storage domain does not exist: 
(u'b83c159c-4ad6-4613-ba16-bab95ccd10c0',)
Failed to activate Storage Domain VMHOST_LUN_205 (Data Center Production) by 

Invalid status on Data Center Production. Setting status to Non Responsive.
Storage Pool Manager runs on Host vmhost003.fabagl.fabasoft.com (Address:
vmhost003.fabagl.fabasoft.com), Data Center Production.

Re: [ovirt-users] Cannot activate storage domain

2018-03-05 Thread Bruckner, Simone
Hello, I apologize for bringing this one up again, but does anybody know if
there is a chance to recover a storage domain that cannot be activated?

Thank you,
Simone

From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On behalf of Bruckner, Simone
Sent: Friday, 2 March 2018 17:03
To: users@ovirt.org
Subject: Re: [ovirt-users] Cannot activate storage domain

Hi all,

  I managed to get the inactive storage domain to maintenance by stopping all 
running VMs that were using it, but I am still not able to activate it.

Trying to activate results in the following events:

For each host:
VDSM  command GetVGInfoVDS failed: Volume Group does not exist: 
(u'vg_uuid: 813oRe-64r8-mloU-k9G2-LFsS-dXSG-hpN4kf',)

And finally:
VDSM command ActivateStorageDomainVDS failed: Storage domain does not exist: 
(u'b83c159c-4ad6-4613-ba16-bab95ccd10c0',)

Is there anything I can do to recover this storage domain?

Thank you and all the best,
Simone

From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On behalf of Bruckner, Simone
Sent: Thursday, 1 March 2018 17:57
To: users@ovirt.org
Subject: Re: [ovirt-users] Cannot activate storage domain

Hi,

  we are still struggling to get a storage domain online again. We tried to
put the storage domain in maintenance mode, which led to "Failed to update OVF
disks 809cc8d7-7687-46cf-a342-3be48674a9b3, OVF data isn't updated on those OVF 
stores".

Trying again with ignoring OVF update failures put the storage domain in 
"preparing for maintenance". We see the following message on all hosts: "Error 
releasing host id 26 for domain b83c159c-4ad6-4613-ba16-bab95ccd10c0 
(monitor:578)".

Querying the storage domain using vdsm-client on the SPM resulted in
# vdsm-client StorageDomain getInfo 
"storagedomainID"="b83c159c-4ad6-4613-ba16-bab95ccd10c0"
vdsm-client: Command StorageDomain.getInfo with args {'storagedomainID': 
'b83c159c-4ad6-4613-ba16-bab95ccd10c0'} failed:
(code=358, message=Storage domain does not exist: 
(u'b83c159c-4ad6-4613-ba16-bab95ccd10c0',))

Any ideas?

Thank you and all the best,
Simone

From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On behalf of Bruckner, Simone
Sent: Wednesday, 28 February 2018 15:52
To: users@ovirt.org
Subject: [ovirt-users] Cannot activate storage domain

Hi all,

  we run a small oVirt installation that we also use for automated testing
(automatically creating and dropping VMs).

We got an inactive FC storage domain that we cannot activate any more. We see 
several events at that time starting with:

VM perftest-c17 is down with error. Exit message: Unable to get volume size for 
domain b83c159c-4ad6-4613-ba16-bab95ccd10c0 volume 
686376c1-4be1-44c3-89a3-0a8addc8fdf2.

Trying to activate the storage domain results in the following alert event for
each host:

VDSM  command GetVGInfoVDS failed: Volume Group does not exist: 
(u'vg_uuid: 813oRe-64r8-mloU-k9G2-LFsS-dXSG-hpN4kf',)

And after those messages from all hosts we get:

VDSM command ActivateStorageDomainVDS failed: Storage domain does not exist: 
(u'b83c159c-4ad6-4613-ba16-bab95ccd10c0',)
Failed to activate Storage Domain VMHOST_LUN_205 (Data Center Production) by 

Invalid status on Data Center Production. Setting status to Non Responsive.
Storage Pool Manager runs on Host vmhost003.fabagl.fabasoft.com (Address: 
vmhost003.fabagl.fabasoft.com), Data Center Production.

Checking the hosts with multipath -ll we see the LUN without errors.

We run oVirt 4.2.1 on CentOS 7.4. Hosts are CentOS 7.4 hosts with oVirt 
installed using oVirt engine.
Hosts are connected to about 30 FC LUNs (8 TB each) on two all-flash storage 
arrays.

Thank you,
Simone Bruckner



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Cannot activate storage domain

2018-03-02 Thread Bruckner, Simone
Hi all,

  I managed to get the inactive storage domain to maintenance by stopping all 
running VMs that were using it, but I am still not able to activate it.

Trying to activate results in the following events:

For each host:
VDSM  command GetVGInfoVDS failed: Volume Group does not exist: 
(u'vg_uuid: 813oRe-64r8-mloU-k9G2-LFsS-dXSG-hpN4kf',)

And finally:
VDSM command ActivateStorageDomainVDS failed: Storage domain does not exist: 
(u'b83c159c-4ad6-4613-ba16-bab95ccd10c0',)

Is there anything I can do to recover this storage domain?

Thank you and all the best,
Simone

From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On behalf of Bruckner, Simone
Sent: Thursday, 1 March 2018 17:57
To: users@ovirt.org
Subject: Re: [ovirt-users] Cannot activate storage domain

Hi,

  we are still struggling to get a storage domain online again. We tried to
put the storage domain in maintenance mode, which led to "Failed to update OVF
disks 809cc8d7-7687-46cf-a342-3be48674a9b3, OVF data isn't updated on those OVF 
stores".

Trying again with ignoring OVF update failures put the storage domain in 
"preparing for maintenance". We see the following message on all hosts: "Error 
releasing host id 26 for domain b83c159c-4ad6-4613-ba16-bab95ccd10c0 
(monitor:578)".

Querying the storage domain using vdsm-client on the SPM resulted in
# vdsm-client StorageDomain getInfo 
"storagedomainID"="b83c159c-4ad6-4613-ba16-bab95ccd10c0"
vdsm-client: Command StorageDomain.getInfo with args {'storagedomainID': 
'b83c159c-4ad6-4613-ba16-bab95ccd10c0'} failed:
(code=358, message=Storage domain does not exist: 
(u'b83c159c-4ad6-4613-ba16-bab95ccd10c0',))

Any ideas?

Thank you and all the best,
Simone

From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On behalf of Bruckner, Simone
Sent: Wednesday, 28 February 2018 15:52
To: users@ovirt.org
Subject: [ovirt-users] Cannot activate storage domain

Hi all,

  we run a small oVirt installation that we also use for automated testing
(automatically creating and dropping VMs).

We got an inactive FC storage domain that we cannot activate any more. We see 
several events at that time starting with:

VM perftest-c17 is down with error. Exit message: Unable to get volume size for 
domain b83c159c-4ad6-4613-ba16-bab95ccd10c0 volume 
686376c1-4be1-44c3-89a3-0a8addc8fdf2.

Trying to activate the storage domain results in the following alert event for
each host:

VDSM  command GetVGInfoVDS failed: Volume Group does not exist: 
(u'vg_uuid: 813oRe-64r8-mloU-k9G2-LFsS-dXSG-hpN4kf',)

And after those messages from all hosts we get:

VDSM command ActivateStorageDomainVDS failed: Storage domain does not exist: 
(u'b83c159c-4ad6-4613-ba16-bab95ccd10c0',)
Failed to activate Storage Domain VMHOST_LUN_205 (Data Center Production) by 

Invalid status on Data Center Production. Setting status to Non Responsive.
Storage Pool Manager runs on Host vmhost003.fabagl.fabasoft.com (Address: 
vmhost003.fabagl.fabasoft.com), Data Center Production.

Checking the hosts with multipath -ll we see the LUN without errors.

We run oVirt 4.2.1 on CentOS 7.4. Hosts are CentOS 7.4 hosts with oVirt 
installed using oVirt engine.
Hosts are connected to about 30 FC LUNs (8 TB each) on two all-flash storage 
arrays.

Thank you,
Simone Bruckner



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Cannot activate storage domain

2018-03-01 Thread Bruckner, Simone
Hi,

  we are still struggling to get a storage domain online again. We tried to
put the storage domain in maintenance mode, which led to "Failed to update OVF
disks 809cc8d7-7687-46cf-a342-3be48674a9b3, OVF data isn't updated on those OVF 
stores".

Trying again with ignoring OVF update failures put the storage domain in 
"preparing for maintenance". We see the following message on all hosts: "Error 
releasing host id 26 for domain b83c159c-4ad6-4613-ba16-bab95ccd10c0 
(monitor:578)".

Querying the storage domain using vdsm-client on the SPM resulted in
# vdsm-client StorageDomain getInfo 
"storagedomainID"="b83c159c-4ad6-4613-ba16-bab95ccd10c0"
vdsm-client: Command StorageDomain.getInfo with args {'storagedomainID': 
'b83c159c-4ad6-4613-ba16-bab95ccd10c0'} failed:
(code=358, message=Storage domain does not exist: 
(u'b83c159c-4ad6-4613-ba16-bab95ccd10c0',))

Any ideas?

Thank you and all the best,
Simone

From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On behalf of Bruckner, Simone
Sent: Wednesday, 28 February 2018 15:52
To: users@ovirt.org
Subject: [ovirt-users] Cannot activate storage domain

Hi all,

  we run a small oVirt installation that we also use for automated testing
(automatically creating and dropping VMs).

We got an inactive FC storage domain that we cannot activate any more. We see 
several events at that time starting with:

VM perftest-c17 is down with error. Exit message: Unable to get volume size for 
domain b83c159c-4ad6-4613-ba16-bab95ccd10c0 volume 
686376c1-4be1-44c3-89a3-0a8addc8fdf2.

Trying to activate the storage domain results in the following alert event for
each host:

VDSM  command GetVGInfoVDS failed: Volume Group does not exist: 
(u'vg_uuid: 813oRe-64r8-mloU-k9G2-LFsS-dXSG-hpN4kf',)

And after those messages from all hosts we get:

VDSM command ActivateStorageDomainVDS failed: Storage domain does not exist: 
(u'b83c159c-4ad6-4613-ba16-bab95ccd10c0',)
Failed to activate Storage Domain VMHOST_LUN_205 (Data Center Production) by 

Invalid status on Data Center Production. Setting status to Non Responsive.
Storage Pool Manager runs on Host vmhost003.fabagl.fabasoft.com (Address: 
vmhost003.fabagl.fabasoft.com), Data Center Production.

Checking the hosts with multipath -ll we see the LUN without errors.

We run oVirt 4.2.1 on CentOS 7.4. Hosts are CentOS 7.4 hosts with oVirt 
installed using oVirt engine.
Hosts are connected to about 30 FC LUNs (8 TB each) on two all-flash storage 
arrays.

Thank you,
Simone Bruckner



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Recall: Cannot activate host from maintenance mode

2018-02-28 Thread Bruckner, Simone
Bruckner, Simone would like to recall the message "[ovirt-users] Cannot activate
host from maintenance mode".
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Cannot activate storage domain

2018-02-28 Thread Bruckner, Simone
Hi all,



  we run a small oVirt installation that we also use for automated testing
(automatically creating and dropping VMs).



We got an inactive FC storage domain that we cannot activate any more. We see 
several events at that time starting with:



VM perftest-c17 is down with error. Exit message: Unable to get volume size for 
domain b83c159c-4ad6-4613-ba16-bab95ccd10c0 volume 
686376c1-4be1-44c3-89a3-0a8addc8fdf2.



Trying to activate the storage domain results in the following alert event for
each host:



VDSM  command GetVGInfoVDS failed: Volume Group does not exist: 
(u'vg_uuid: 813oRe-64r8-mloU-k9G2-LFsS-dXSG-hpN4kf',)



And after those messages from all hosts we get:



VDSM command ActivateStorageDomainVDS failed: Storage domain does not exist: 
(u'b83c159c-4ad6-4613-ba16-bab95ccd10c0',)

Failed to activate Storage Domain VMHOST_LUN_205 (Data Center Production) by 


Invalid status on Data Center Production. Setting status to Non Responsive.

Storage Pool Manager runs on Host vmhost003.fabagl.fabasoft.com (Address: 
vmhost003.fabagl.fabasoft.com), Data Center Production.



Checking the hosts with multipath -ll we see the LUN without errors.



We run oVirt 4.2.1 on CentOS 7.4. Hosts are CentOS 7.4 hosts with oVirt 
installed using oVirt engine.

Hosts are connected to about 30 FC LUNs (8 TB each) on two all-flash storage 
arrays.



Thank you,

Simone Bruckner







___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] cbs.centos.org

2016-10-10 Thread Bruckner, Simone
Hi all,

  I'm trying to update my oVirt installation but cbs.centos.org (referenced by 
ovirt-4.0-dependencies.repo) seems to be down. Any ideas when it will be up 
again?

All the best,
Simone Bruckner

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Cannot change Cluster Compatibility Version when a VM is active

2016-09-23 Thread Bruckner, Simone
Michal,

  thanks!

All the best,
Simone

From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On behalf of Michal Skrivanek
Sent: Friday, 23 September 2016 17:22
To: Albl, Oliver <oliver.a...@fabasoft.com>
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Cannot change Cluster Compatibility Version when a
VM is active


On 23 Sep 2016, at 17:16, Albl, Oliver <oliver.a...@fabasoft.com> wrote:

Michal,

  thank you for your quick response! I am running 300+ VMs so any other (safe) 
option would be very welcome…

and are they all in a 3.5 cluster?
Your priority should be to get them to 3.6 compatibility prior to upgrading to
4.0; then, once on 4.0.3+, there is a safe-enough way to properly update 3.6
to a 4.0 cluster level.

Otherwise, any <3.6.7 would allow you to do that, or there is a manual postgres
workaround mentioned in that bug which you can try/risk - basically, as long as
you can reboot those VMs in a foreseeable future (prior to using any of the new
features) you can follow the manual workaround and upgrade to 3.6… just make
sure you stop and start them eventually.



All the best,
Simone

From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On behalf of Michal Skrivanek
Sent: Friday, 23 September 2016 16:57
To: Bruckner, Simone <simone.bruck...@fabasoft.com>
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Cannot change Cluster Compatibility Version when a
VM is active


On 23 Sep 2016, at 16:49, Bruckner, Simone <simone.bruck...@fabasoft.com> wrote:

Hi all,

  I am trying to upgrade an oVirt installation (3.6.7.5-1.el6) to 4.0. My 
datacenters and clusters have 3.5 compatibility settings.

I followed the instructions from 
http://www.ovirt.org/documentation/migration-engine-3.6-to-4.0/ but cannot 
proceed in engine-setup as 3.5 compatibility is not supported.

When trying to change cluster compatibility from 3.5 to 3.6 I receive “Cannot 
change Cluster Compatibility Version when a VM is active. Please shutdown all 
VMs in the Cluster.” According to
https://bugzilla.redhat.com/show_bug.cgi?id=1341023 this should be fixed in
3.6.7. Any ideas?

this bug is blocking it; later bugs
(linked from there) allow it, though there are other issues… so if you have an
option to shut them down, please do so.
Note those are RHEV bugs, not oVirt bugs, so the exact build may differ.

Thanks,
michal




Best Regards,
Simone Bruckner


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Cannot change Cluster Compatibility Version when a VM is active

2016-09-23 Thread Bruckner, Simone
Hi all,

  I am trying to upgrade an oVirt installation (3.6.7.5-1.el6) to 4.0. My 
datacenters and clusters have 3.5 compatibility settings.

I followed the instructions from 
http://www.ovirt.org/documentation/migration-engine-3.6-to-4.0/ but cannot 
proceed in engine-setup as 3.5 compatibility is not supported.

When trying to change cluster compatibility from 3.5 to 3.6 I receive "Cannot 
change Cluster Compatibility Version when a VM is active. Please shutdown all 
VMs in the Cluster." According to 
https://bugzilla.redhat.com/show_bug.cgi?id=1341023 this should be fixed in 
3.6.7. Any ideas?

Best Regards,
Simone Bruckner

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users