[ovirt-users] Re: Hyperconverged setup - storage architecture - scaling

2020-01-09 Thread Sahina Bose
On Thu, Jan 9, 2020 at 10:22 PM C Williams  wrote:

>   Hello,
>
> I did not see an answer to this ...
>
> "> 3. If the limit of hosts per datacenter is 250, then (in theory ) the
> recomended way in reaching this treshold would be to create 20 separated
> oVirt logical clusters with 12 nodes per each ( and datacenter managed from
> one ha-engine ) ?"
>
> I have an existing oVirt datacenter with its own engine, hypervisors, etc.
> Could I create hyperconverged clusters managed by my current datacenter?
> Ex. Cluster 1 -- 12 hyperconverged physical machines (storage/compute),
> Cluster 2 -- 12 hyperconverged physical machines, etc.
>

Yes, you can add multiple clusters to be managed by your existing engine.
The deployment flow would be different, though, as the installation via
Cockpit also deploys the engine for the selected servers.
You would need to create a custom Ansible playbook that sets up the Gluster
volumes and adds the hosts to the existing engine (or create the cluster
and Gluster volumes via the engine UI).
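If you go the scripted route, the Gluster side is plain CLI work; a minimal sketch for one additional replica 3 volume (host names, brick paths and the volume name below are only examples, not from this thread):

# on one of the new hyperconverged hosts, bricks already formatted and mounted
gluster volume create vmstore2 replica 3 \
    newhost1:/gluster_bricks/vmstore2/vmstore2 \
    newhost2:/gluster_bricks/vmstore2/vmstore2 \
    newhost3:/gluster_bricks/vmstore2/vmstore2
gluster volume set vmstore2 group virt    # apply the virt option group used for VM workloads
gluster volume start vmstore2

# then, in the engine UI: create the new cluster, add the three hosts to it,
# and attach a new GlusterFS storage domain pointing at newhost1:/vmstore2
# (mount option backup-volfile-servers=newhost2:newhost3).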


> Please let me know.
>
> Thank You
>
> C Williams
>
> On Tue, Jan 29, 2019 at 4:21 AM Sahina Bose  wrote:
>
>> On Mon, Jan 28, 2019 at 6:22 PM Leo David  wrote:
>> >
>> > Hello Everyone,
>> > Reading through the document:
>> > "Red Hat Hyperconverged Infrastructure for Virtualization 1.5
>> >  Automating RHHI for Virtualization deployment"
>> >
>> > Regarding storage scaling, I see the following statements:
>> >
>> > 2.7. SCALING
>> > Red Hat Hyperconverged Infrastructure for Virtualization is supported
>> for one node, and for clusters of 3, 6, 9, and 12 nodes.
>> > The initial deployment is either 1 or 3 nodes.
>> > There are two supported methods of horizontally scaling Red Hat
>> Hyperconverged Infrastructure for Virtualization:
>> >
>> > 1 Add new hyperconverged nodes to the cluster, in sets of three, up to
>> the maximum of 12 hyperconverged nodes.
>> >
>> > 2 Create new Gluster volumes using new disks on existing hyperconverged
>> nodes.
>> > You cannot create a volume that spans more than 3 nodes, or expand an
>> existing volume so that it spans across more than 3 nodes at a time
>> >
>> > 2.9.1. Prerequisites for geo-replication
>> > Be aware of the following requirements and limitations when configuring
>> geo-replication:
>> > One geo-replicated volume only
>> > Red Hat Hyperconverged Infrastructure for Virtualization (RHHI for
>> Virtualization) supports only one geo-replicated volume. Red Hat recommends
>> backing up the volume that stores the data of your virtual machines, as
>> this usually contains the most valuable data.
>> > --
>> >
>> > Also, in the oVirt Engine UI, when I add a brick to an existing volume I get
>> the following warning:
>> >
>> > "Expanding gluster volume in a hyper-converged setup is not recommended
>> as it could lead to degraded performance. To expand storage for cluster, it
>> is advised to add additional gluster volumes."
>> >
>> > Those things raise a couple of questions that may be easy for some of
>> you guys to answer, but for me they create a bit of confusion...
>> > I am also referring to Red Hat product documentation, because I treat
>> oVirt as being as production-ready as RHHI is.
>>
>> oVirt and RHHI, though as close to each other as possible, do differ in
>> the versions of the various components used and in the support
>> limitations imposed.
>> >
>> > 1. Is there any reason for not going to distributed-replicated volumes
>> (i.e. spread one volume across 6, 9, or 12 nodes)?
>> > - i.e. it is recommended that in a 9-node scenario I should have 3
>> separate volumes, but how should I deal with the following question
>>
>> The reason for this limitation was a bug encountered when scaling a
>> replica 3 volume to distributed-replicate. This has since been fixed in
>> the latest release of glusterfs.
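For reference, expanding a replica 3 volume into distributed-replicate is done by adding bricks in multiples of the replica count and then rebalancing; a short sketch (host and brick names are only examples):

# add one more replica set (3 bricks) to an existing replica 3 volume
gluster volume add-brick vmstore replica 3 \
    node4:/gluster_bricks/vmstore/vmstore \
    node5:/gluster_bricks/vmstore/vmstore \
    node6:/gluster_bricks/vmstore/vmstore

# spread the existing data over the new bricks
gluster volume rebalance vmstore start
gluster volume rebalance vmstore status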
>>
>> >
>> > 2. If only one geo-replicated volume can be configured, how should I
>> deal with 2nd and 3rd volume replication for disaster recovery?
>>
>> It is possible to have more than 1 geo-replicated volume as long as
>> your network and CPU resources support this.
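Setting up geo-replication for a second or third volume follows the same pattern as for the first one; a rough sketch, assuming the remote (slave) volume already exists and passwordless SSH to the remote host is in place (volume and host names are placeholders):

# generate/distribute the geo-replication ssh keys (one-time per cluster)
gluster system:: execute gsec_create

# create and start a session for the additional volume
gluster volume geo-replication vmstore2 remotehost::vmstore2-dr create push-pem
gluster volume geo-replication vmstore2 remotehost::vmstore2-dr start
gluster volume geo-replication vmstore2 remotehost::vmstore2-dr status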
>>
>> >
>> > 3. If the limit of hosts per datacenter is 250, then (in theory) the
>> recommended way of reaching this threshold would be to create 20 separate
>> oVirt logical clusters with 12 nodes each (and the datacenter managed from
>> one ha-engine)?
>> >
>> > 4. At present, I have the following 9-node cluster, all hosts
>> contributing 2 disks each to a single replica 3 distributed
>> replicated volume. They were added to the volume in the following order:
>> > node1 - disk1
>> > node2 - disk1
>> > ..
>> > node9 - disk1
>> > node1 - disk2
>> > node2 - disk2
>> > ..
>> > node9 - disk2
>> > At the moment, the volume is arbitrated, but I intend to go for full
>> distributed replica 3.
>> >
>> > Is this a bad setup? Why?
>> > It obviously breaks the Red Hat recommended rules...
>> >
>> > Is there anyone so kind as to discuss these things?
>> >
>> > 

[ovirt-users] Re: Setting up cockpit?

2020-01-09 Thread Strahil

On Jan 9, 2020 18:58, m.skrzetu...@gmail.com wrote:
>
> Hello everyone, 
>
> I'd like to get cockpit to work because currently when I click "Host Console" 
> on a host I just get "connection refused". I checked and after the engine 
> installation the cockpit service was not running. When I start it, it runs 
> and answers on port 9090, however the SSL certificate is broken. 
>
> - How do I auto enable cockpit on installation? 
systemctl enable --now cockpit.socket -H <destination server>
> - How do I supply my own SSL certification to cockpit? 
Put them in /etc/cockpit/ws-certs.d/

Source: https://www.redhat.com/en/blog/linux-system-administration-management-console-cockpit
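Put together, a minimal sketch of both steps (the certificate file name is only an example; Cockpit reads *.cert files, containing certificate plus key, from that directory):

# enable and start cockpit on the host
systemctl enable --now cockpit.socket

# install a custom certificate: concatenate server cert and key into one .cert file
cat server.crt server.key > /etc/cockpit/ws-certs.d/99-custom.cert
chmod 600 /etc/cockpit/ws-certs.d/99-custom.cert

# restart so cockpit-ws picks up the new certificate
systemctl restart cockpit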
>
> Kind regards 
> Skrzetuski


Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/X5O5L7MJOAXX44WK3ATIBZYAH6ENFAQ2/


[ovirt-users] Re: Setting up cockpit?

2020-01-09 Thread m . skrzetuski
OK, it seems that the SSL cert has to be placed in /etc/cockpit/ws-certs.d/.
It also seems you have to start cockpit.socket yourself. Why is it not started with the
engine?
And is there Single Sign On for cockpit? I don't want to give users passwords 
on my host.
And how do I connect cockpit to the engine? I can't configure it over the Web 
UI, I get the error: "Please provide valid oVirt engine fully qualified domain 
name (FQDN) and port (443 by default)", however I am providing correct values.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2YNV3IRFMLOCUVZZPXF2JQXP7SS3MAPD/


[ovirt-users] Setting up cockpit?

2020-01-09 Thread m . skrzetuski
Hello everyone,

I'd like to get cockpit to work because currently when I click "Host Console" 
on a host I just get "connection refused". I checked and after the engine 
installation the cockpit service was not running. When I start it, it runs and 
answers on port 9090, however the SSL certificate is broken.

- How do I auto enable cockpit on installation?
- How do I supply my own SSL certification to cockpit?

Kind regards
Skrzetuski
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6D4HAKXA4SCFIKCMWPLA3JZ5Z75HWDSG/


[ovirt-users] Re: Hyperconverged setup - storage architecture - scaling

2020-01-09 Thread C Williams
  Hello,

I did not see an answer to this ...

"> 3. If the limit of hosts per datacenter is 250, then (in theory ) the
recomended way in reaching this treshold would be to create 20 separated
oVirt logical clusters with 12 nodes per each ( and datacenter managed from
one ha-engine ) ?"

I have an existing oVirt datacenter with its own engine, hypervisors, etc.
Could I create hyperconverged clusters managed by my current datacenter?
Ex. Cluster 1 -- 12 hyperconverged physical machines (storage/compute),
Cluster 2 -- 12 hyperconverged physical machines, etc.

Please let me know.

Thank You

C Williams

On Tue, Jan 29, 2019 at 4:21 AM Sahina Bose  wrote:

> On Mon, Jan 28, 2019 at 6:22 PM Leo David  wrote:
> >
> > Hello Everyone,
> > Reading through the document:
> > "Red Hat Hyperconverged Infrastructure for Virtualization 1.5
> >  Automating RHHI for Virtualization deployment"
> >
> > Regarding storage scaling, I see the following statements:
> >
> > 2.7. SCALING
> > Red Hat Hyperconverged Infrastructure for Virtualization is supported
> for one node, and for clusters of 3, 6, 9, and 12 nodes.
> > The initial deployment is either 1 or 3 nodes.
> > There are two supported methods of horizontally scaling Red Hat
> Hyperconverged Infrastructure for Virtualization:
> >
> > 1 Add new hyperconverged nodes to the cluster, in sets of three, up to
> the maximum of 12 hyperconverged nodes.
> >
> > 2 Create new Gluster volumes using new disks on existing hyperconverged
> nodes.
> > You cannot create a volume that spans more than 3 nodes, or expand an
> existing volume so that it spans across more than 3 nodes at a time
> >
> > 2.9.1. Prerequisites for geo-replication
> > Be aware of the following requirements and limitations when configuring
> geo-replication:
> > One geo-replicated volume only
> > Red Hat Hyperconverged Infrastructure for Virtualization (RHHI for
> Virtualization) supports only one geo-replicated volume. Red Hat recommends
> backing up the volume that stores the data of your virtual machines, as
> this usually contains the most valuable data.
> > --
> >
> > Also, in the oVirt Engine UI, when I add a brick to an existing volume I get
> the following warning:
> >
> > "Expanding gluster volume in a hyper-converged setup is not recommended
> as it could lead to degraded performance. To expand storage for cluster, it
> is advised to add additional gluster volumes."
> >
> > Those things raise a couple of questions that may be easy for some of
> you guys to answer, but for me they create a bit of confusion...
> > I am also referring to Red Hat product documentation, because I treat
> oVirt as being as production-ready as RHHI is.
>
> oVirt and RHHI, though as close to each other as possible, do differ in
> the versions of the various components used and in the support
> limitations imposed.
> >
> > 1. Is there any reason for not going to distributed-replicated volumes
> (i.e. spread one volume across 6, 9, or 12 nodes)?
> > - i.e. it is recommended that in a 9-node scenario I should have 3 separate
> volumes, but how should I deal with the following question
>
> The reason for this limitation was a bug encountered when scaling a
> replica 3 volume to distributed-replicate. This has since been fixed in
> the latest release of glusterfs.
>
> >
> > 2. If only one geo-replicated volume can be configured, how should I
> deal with 2nd and 3rd volume replication for disaster recovery?
>
> It is possible to have more than 1 geo-replicated volume as long as
> your network and CPU resources support this.
>
> >
> > 3. If the limit of hosts per datacenter is 250, then (in theory) the
> recommended way of reaching this threshold would be to create 20 separate
> oVirt logical clusters with 12 nodes each (and the datacenter managed from
> one ha-engine)?
> >
> > 4. At present, I have the following 9-node cluster, all hosts
> contributing 2 disks each to a single replica 3 distributed
> replicated volume. They were added to the volume in the following order:
> > node1 - disk1
> > node2 - disk1
> > ..
> > node9 - disk1
> > node1 - disk2
> > node2 - disk2
> > ..
> > node9 - disk2
> > At the moment, the volume is arbitrated, but I intend to go for full
> distributed replica 3.
> >
> > Is this a bad setup? Why?
> > It obviously breaks the Red Hat recommended rules...
> >
> > Is there anyone so kind as to discuss these things?
> >
> > Thank you very much !
> >
> > Leo
> >
> >
> > --
> > Best regards, Leo David
> > ___
> > Users mailing list -- users@ovirt.org
> > To unsubscribe send an email to users-le...@ovirt.org
> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> > List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/QGZZJIT4JSLYSOVLVYZADXJTWVEM42KY/
> 

[ovirt-users] Re: After NAS crash, one VM will not start up, Cannot prepare illegal volume

2020-01-09 Thread David Johnson
 And we're up!

Regards,
David Johnson
Director of Development, Maxis Technology
844.696.2947 ext 702 (o)  |  479.531.3590 (c)
djohn...@maxistechnology.com


[image: Maxis Techncology] 
www.maxistechnology.com


*stay connected *


On Thu, Jan 9, 2020 at 7:24 AM David Johnson 
wrote:

> Never mind, I see that I have to repeat the process for other drives.
>
> Regards,
> David Johnson
> Director of Development, Maxis Technology
> 844.696.2947 ext 702 (o)  |  479.531.3590 (c)
> djohn...@maxistechnology.com
>
>
> [image: Maxis Techncology] 
> www.maxistechnology.com
>
>
> *stay connected *
>
>
> On Thu, Jan 9, 2020 at 7:17 AM David Johnson 
> wrote:
>
>> Thank you again.
>>
>> After updating legality to LEGAL,
>>
>> [root@mx-ovirt-host2 ~]# vdsm-client Volume getInfo
>> storagepoolID=25cd9bfc-bab6-11e8-90f3-78acc0b47b4d
>> storagedomainID=6e627364-5e0c-4250-ac95-7cd914d0175f
>> imageID=4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6
>> volumeID=f8066c56-6db1-4605-8d7c-0739335d30b8
>> {
>> "status": "OK",
>> "lease": {
>> "path": "/rhev/data-center/mnt/192.168.2.220:
>> _mnt_ovirt-freenas/6e627364-5e0c-4250-ac95-7cd914d0175f/images/4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6/f8066c56-6db1-4605-8d7c-0739335d30b8.lease",
>> "owners": [],
>> "version": null,
>> "offset": 0
>> },
>> "domain": "6e627364-5e0c-4250-ac95-7cd914d0175f",
>> "capacity": "1503238553600",
>> "voltype": "LEAF",
>> "description": "",
>> "parent": "a912e388-d80d-4f56-805b-ea5e2f35d741",
>> "format": "COW",
>> "generation": 0,
>> "image": "4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6",
>> "uuid": "f8066c56-6db1-4605-8d7c-0739335d30b8",
>> "disktype": "DATA",
>> "legality": "LEGAL",
>> "mtime": "0",
>> "apparentsize": "36440899584",
>> "truesize": "16916186624",
>> "type": "SPARSE",
>> "children": [],
>> "pool": "",
>> "ctime": "1571669201"
>> }
>>
>> Attempt to start the VM result are:
>>
>> Log excerpt:
>>
>> 2020-01-09 06:47:46,575-0600 INFO  (vm/c5d0a42f) [storage.StorageDomain]
>> Creating symlink from 
>> /rhev/data-center/mnt/192.168.2.220:_mnt_ovirt-freenas/6e627364-5e0c-4250-ac95-7cd914d0175f/images/4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6
>> to
>> /var/run/vdsm/storage/6e627364-5e0c-4250-ac95-7cd914d0175f/4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6
>> (fileSD:580)
>> 2020-01-09 06:47:46,581-0600 INFO  (vm/c5d0a42f) [vdsm.api] FINISH
>> prepareImage return={'info': {'path': 
>> u'/rhev/data-center/mnt/192.168.2.220:_mnt_ovirt-freenas/6e627364-5e0c-4250-ac95-7cd914d0175f/images/4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6/f8066c56-6db1-4605-8d7c-0739335d30b8',
>> 'type': 'file'}, 'path': 
>> u'/rhev/data-center/mnt/192.168.2.220:_mnt_ovirt-freenas/6e627364-5e0c-4250-ac95-7cd914d0175f/images/4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6/f8066c56-6db1-4605-8d7c-0739335d30b8',
>> 'imgVolumesInfo': [{'domainID': '6e627364-5e0c-4250-ac95-7cd914d0175f',
>> 'leaseOffset': 0, 'path': 
>> u'/rhev/data-center/mnt/192.168.2.220:_mnt_ovirt-freenas/6e627364-5e0c-4250-ac95-7cd914d0175f/images/4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6/a912e388-d80d-4f56-805b-ea5e2f35d741',
>> 'volumeID': u'a912e388-d80d-4f56-805b-ea5e2f35d741', 'leasePath':
>> u'/rhev/data-center/mnt/192.168.2.220:_mnt_ovirt-freenas/6e627364-5e0c-4250-ac95-7cd914d0175f/images/4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6/a912e388-d80d-4f56-805b-ea5e2f35d741.lease',
>> 'imageID': '4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6'}, {'domainID':
>> '6e627364-5e0c-4250-ac95-7cd914d0175f', 'leaseOffset': 0, 'path':
>> u'/rhev/data-center/mnt/192.168.2.220:_mnt_ovirt-freenas/6e627364-5e0c-4250-ac95-7cd914d0175f/images/4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6/f8066c56-6db1-4605-8d7c-0739335d30b8',
>> 'volumeID': u'f8066c56-6db1-4605-8d7c-0739335d30b8', 'leasePath':
>> u'/rhev/data-center/mnt/192.168.2.220:_mnt_ovirt-freenas/6e627364-5e0c-4250-ac95-7cd914d0175f/images/4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6/f8066c56-6db1-4605-8d7c-0739335d30b8.lease',
>> 'imageID': '4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6'}]} from=internal,
>> task_id=865d2ff4-4e63-44dc-b8f8-9d93cad9892f (api:52)
>> 2020-01-09 06:47:46,582-0600 INFO  (vm/c5d0a42f) [vds] prepared volume
>> path: 
>> /rhev/data-center/mnt/192.168.2.220:_mnt_ovirt-freenas/6e627364-5e0c-4250-ac95-7cd914d0175f/images/4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6/f8066c56-6db1-4605-8d7c-0739335d30b8
>> (clientIF:497)
>> 2020-01-09 06:47:46,583-0600 INFO  (vm/c5d0a42f) [vdsm.api] START
>> prepareImage(sdUUID='ec6ccb14-03c2-49cc-9cc0-b1a87d582ed7',
>> spUUID='25cd9bfc-bab6-11e8-90f3-78acc0b47b4d',
>> imgUUID='60077050-6f99-41db-b280-446f018b104b',
>> leafUUID='a67eb40c-e0a1-49cc-9179-bebb263d6e9c', allowIllegal=False)
>> from=internal, task_id=08830292-0f75-4c5b-a411-695894c66475 (api:46)
>> 2020-01-09 06:47:46,632-0600 INFO  (vm/c5d0a42f) [vdsm.api] FINISH
>> prepareImage error=Cannot prepare 

[ovirt-users] Re: After NAS crash, one VM will not start up, Cannot prepare illegal volume

2020-01-09 Thread David Johnson
Never mind, I see that I have to repeat the process for other drives.

Regards,
David Johnson
Director of Development, Maxis Technology
844.696.2947 ext 702 (o)  |  479.531.3590 (c)
djohn...@maxistechnology.com


[image: Maxis Techncology] 
www.maxistechnology.com


*stay connected *


On Thu, Jan 9, 2020 at 7:17 AM David Johnson 
wrote:

> Thank you again.
>
> After updating legality to LEGAL,
>
> [root@mx-ovirt-host2 ~]# vdsm-client Volume getInfo
> storagepoolID=25cd9bfc-bab6-11e8-90f3-78acc0b47b4d
> storagedomainID=6e627364-5e0c-4250-ac95-7cd914d0175f
> imageID=4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6
> volumeID=f8066c56-6db1-4605-8d7c-0739335d30b8
> {
> "status": "OK",
> "lease": {
> "path": "/rhev/data-center/mnt/192.168.2.220:
> _mnt_ovirt-freenas/6e627364-5e0c-4250-ac95-7cd914d0175f/images/4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6/f8066c56-6db1-4605-8d7c-0739335d30b8.lease",
> "owners": [],
> "version": null,
> "offset": 0
> },
> "domain": "6e627364-5e0c-4250-ac95-7cd914d0175f",
> "capacity": "1503238553600",
> "voltype": "LEAF",
> "description": "",
> "parent": "a912e388-d80d-4f56-805b-ea5e2f35d741",
> "format": "COW",
> "generation": 0,
> "image": "4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6",
> "uuid": "f8066c56-6db1-4605-8d7c-0739335d30b8",
> "disktype": "DATA",
> "legality": "LEGAL",
> "mtime": "0",
> "apparentsize": "36440899584",
> "truesize": "16916186624",
> "type": "SPARSE",
> "children": [],
> "pool": "",
> "ctime": "1571669201"
> }
>
> Attempt to start the VM result are:
>
> Log excerpt:
>
> 2020-01-09 06:47:46,575-0600 INFO  (vm/c5d0a42f) [storage.StorageDomain]
> Creating symlink from 
> /rhev/data-center/mnt/192.168.2.220:_mnt_ovirt-freenas/6e627364-5e0c-4250-ac95-7cd914d0175f/images/4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6
> to
> /var/run/vdsm/storage/6e627364-5e0c-4250-ac95-7cd914d0175f/4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6
> (fileSD:580)
> 2020-01-09 06:47:46,581-0600 INFO  (vm/c5d0a42f) [vdsm.api] FINISH
> prepareImage return={'info': {'path': 
> u'/rhev/data-center/mnt/192.168.2.220:_mnt_ovirt-freenas/6e627364-5e0c-4250-ac95-7cd914d0175f/images/4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6/f8066c56-6db1-4605-8d7c-0739335d30b8',
> 'type': 'file'}, 'path': 
> u'/rhev/data-center/mnt/192.168.2.220:_mnt_ovirt-freenas/6e627364-5e0c-4250-ac95-7cd914d0175f/images/4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6/f8066c56-6db1-4605-8d7c-0739335d30b8',
> 'imgVolumesInfo': [{'domainID': '6e627364-5e0c-4250-ac95-7cd914d0175f',
> 'leaseOffset': 0, 'path': 
> u'/rhev/data-center/mnt/192.168.2.220:_mnt_ovirt-freenas/6e627364-5e0c-4250-ac95-7cd914d0175f/images/4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6/a912e388-d80d-4f56-805b-ea5e2f35d741',
> 'volumeID': u'a912e388-d80d-4f56-805b-ea5e2f35d741', 'leasePath':
> u'/rhev/data-center/mnt/192.168.2.220:_mnt_ovirt-freenas/6e627364-5e0c-4250-ac95-7cd914d0175f/images/4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6/a912e388-d80d-4f56-805b-ea5e2f35d741.lease',
> 'imageID': '4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6'}, {'domainID':
> '6e627364-5e0c-4250-ac95-7cd914d0175f', 'leaseOffset': 0, 'path':
> u'/rhev/data-center/mnt/192.168.2.220:_mnt_ovirt-freenas/6e627364-5e0c-4250-ac95-7cd914d0175f/images/4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6/f8066c56-6db1-4605-8d7c-0739335d30b8',
> 'volumeID': u'f8066c56-6db1-4605-8d7c-0739335d30b8', 'leasePath':
> u'/rhev/data-center/mnt/192.168.2.220:_mnt_ovirt-freenas/6e627364-5e0c-4250-ac95-7cd914d0175f/images/4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6/f8066c56-6db1-4605-8d7c-0739335d30b8.lease',
> 'imageID': '4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6'}]} from=internal,
> task_id=865d2ff4-4e63-44dc-b8f8-9d93cad9892f (api:52)
> 2020-01-09 06:47:46,582-0600 INFO  (vm/c5d0a42f) [vds] prepared volume
> path: 
> /rhev/data-center/mnt/192.168.2.220:_mnt_ovirt-freenas/6e627364-5e0c-4250-ac95-7cd914d0175f/images/4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6/f8066c56-6db1-4605-8d7c-0739335d30b8
> (clientIF:497)
> 2020-01-09 06:47:46,583-0600 INFO  (vm/c5d0a42f) [vdsm.api] START
> prepareImage(sdUUID='ec6ccb14-03c2-49cc-9cc0-b1a87d582ed7',
> spUUID='25cd9bfc-bab6-11e8-90f3-78acc0b47b4d',
> imgUUID='60077050-6f99-41db-b280-446f018b104b',
> leafUUID='a67eb40c-e0a1-49cc-9179-bebb263d6e9c', allowIllegal=False)
> from=internal, task_id=08830292-0f75-4c5b-a411-695894c66475 (api:46)
> 2020-01-09 06:47:46,632-0600 INFO  (vm/c5d0a42f) [vdsm.api] FINISH
> prepareImage error=Cannot prepare illegal volume:
> (u'a67eb40c-e0a1-49cc-9179-bebb263d6e9c',) from=internal,
> task_id=08830292-0f75-4c5b-a411-695894c66475 (api:50)
> 2020-01-09 06:47:46,632-0600 ERROR (vm/c5d0a42f)
> [storage.TaskManager.Task] (Task='08830292-0f75-4c5b-a411-695894c66475')
> Unexpected error (task:875)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882,
> in _run
> return fn(*args, **kargs)
>   File "", line 2, 

[ovirt-users] Re: After NAS crash, one VM will not start up, Cannot prepare illegal volume

2020-01-09 Thread David Johnson
Thank you again.

After updating legality to LEGAL,

[root@mx-ovirt-host2 ~]# vdsm-client Volume getInfo
storagepoolID=25cd9bfc-bab6-11e8-90f3-78acc0b47b4d
storagedomainID=6e627364-5e0c-4250-ac95-7cd914d0175f
imageID=4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6
volumeID=f8066c56-6db1-4605-8d7c-0739335d30b8
{
"status": "OK",
"lease": {
"path": "/rhev/data-center/mnt/192.168.2.220:
_mnt_ovirt-freenas/6e627364-5e0c-4250-ac95-7cd914d0175f/images/4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6/f8066c56-6db1-4605-8d7c-0739335d30b8.lease",
"owners": [],
"version": null,
"offset": 0
},
"domain": "6e627364-5e0c-4250-ac95-7cd914d0175f",
"capacity": "1503238553600",
"voltype": "LEAF",
"description": "",
"parent": "a912e388-d80d-4f56-805b-ea5e2f35d741",
"format": "COW",
"generation": 0,
"image": "4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6",
"uuid": "f8066c56-6db1-4605-8d7c-0739335d30b8",
"disktype": "DATA",
"legality": "LEGAL",
"mtime": "0",
"apparentsize": "36440899584",
"truesize": "16916186624",
"type": "SPARSE",
"children": [],
"pool": "",
"ctime": "1571669201"
}

Attempt to start the VM result are:

Log excerpt:

2020-01-09 06:47:46,575-0600 INFO  (vm/c5d0a42f) [storage.StorageDomain]
Creating symlink from
/rhev/data-center/mnt/192.168.2.220:_mnt_ovirt-freenas/6e627364-5e0c-4250-ac95-7cd914d0175f/images/4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6
to
/var/run/vdsm/storage/6e627364-5e0c-4250-ac95-7cd914d0175f/4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6
(fileSD:580)
2020-01-09 06:47:46,581-0600 INFO  (vm/c5d0a42f) [vdsm.api] FINISH
prepareImage return={'info': {'path':
u'/rhev/data-center/mnt/192.168.2.220:_mnt_ovirt-freenas/6e627364-5e0c-4250-ac95-7cd914d0175f/images/4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6/f8066c56-6db1-4605-8d7c-0739335d30b8',
'type': 'file'}, 'path':
u'/rhev/data-center/mnt/192.168.2.220:_mnt_ovirt-freenas/6e627364-5e0c-4250-ac95-7cd914d0175f/images/4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6/f8066c56-6db1-4605-8d7c-0739335d30b8',
'imgVolumesInfo': [{'domainID': '6e627364-5e0c-4250-ac95-7cd914d0175f',
'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/192.168.2.220:_mnt_ovirt-freenas/6e627364-5e0c-4250-ac95-7cd914d0175f/images/4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6/a912e388-d80d-4f56-805b-ea5e2f35d741',
'volumeID': u'a912e388-d80d-4f56-805b-ea5e2f35d741', 'leasePath':
u'/rhev/data-center/mnt/192.168.2.220:_mnt_ovirt-freenas/6e627364-5e0c-4250-ac95-7cd914d0175f/images/4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6/a912e388-d80d-4f56-805b-ea5e2f35d741.lease',
'imageID': '4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6'}, {'domainID':
'6e627364-5e0c-4250-ac95-7cd914d0175f', 'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/192.168.2.220:_mnt_ovirt-freenas/6e627364-5e0c-4250-ac95-7cd914d0175f/images/4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6/f8066c56-6db1-4605-8d7c-0739335d30b8',
'volumeID': u'f8066c56-6db1-4605-8d7c-0739335d30b8', 'leasePath':
u'/rhev/data-center/mnt/192.168.2.220:_mnt_ovirt-freenas/6e627364-5e0c-4250-ac95-7cd914d0175f/images/4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6/f8066c56-6db1-4605-8d7c-0739335d30b8.lease',
'imageID': '4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6'}]} from=internal,
task_id=865d2ff4-4e63-44dc-b8f8-9d93cad9892f (api:52)
2020-01-09 06:47:46,582-0600 INFO  (vm/c5d0a42f) [vds] prepared volume
path: 
/rhev/data-center/mnt/192.168.2.220:_mnt_ovirt-freenas/6e627364-5e0c-4250-ac95-7cd914d0175f/images/4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6/f8066c56-6db1-4605-8d7c-0739335d30b8
(clientIF:497)
2020-01-09 06:47:46,583-0600 INFO  (vm/c5d0a42f) [vdsm.api] START
prepareImage(sdUUID='ec6ccb14-03c2-49cc-9cc0-b1a87d582ed7',
spUUID='25cd9bfc-bab6-11e8-90f3-78acc0b47b4d',
imgUUID='60077050-6f99-41db-b280-446f018b104b',
leafUUID='a67eb40c-e0a1-49cc-9179-bebb263d6e9c', allowIllegal=False)
from=internal, task_id=08830292-0f75-4c5b-a411-695894c66475 (api:46)
2020-01-09 06:47:46,632-0600 INFO  (vm/c5d0a42f) [vdsm.api] FINISH
prepareImage error=Cannot prepare illegal volume:
(u'a67eb40c-e0a1-49cc-9179-bebb263d6e9c',) from=internal,
task_id=08830292-0f75-4c5b-a411-695894c66475 (api:50)
2020-01-09 06:47:46,632-0600 ERROR (vm/c5d0a42f) [storage.TaskManager.Task]
(Task='08830292-0f75-4c5b-a411-695894c66475') Unexpected error (task:875)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882,
in _run
return fn(*args, **kargs)
  File "", line 2, in prepareImage
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in
method
ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 3187,
in prepareImage
raise se.prepareIllegalVolumeError(volUUID)
prepareIllegalVolumeError: Cannot prepare illegal volume:
(u'a67eb40c-e0a1-49cc-9179-bebb263d6e9c',)
2020-01-09 06:47:46,633-0600 INFO  (vm/c5d0a42f) [storage.TaskManager.Task]
(Task='08830292-0f75-4c5b-a411-695894c66475') aborting: Task is aborted:
"Cannot prepare illegal volume: 

[ovirt-users] Re: AWX and error using ovirt as an inventory source

2020-01-09 Thread Gianluca Cecchi
On Tue, Jan 7, 2020 at 4:42 PM Gianluca Cecchi 
wrote:

> On Wed, Dec 18, 2019 at 7:28 PM Nathanaël Blanchet 
> wrote:
>
>> Hello Ondra, what do you think about this question?
>> ovirt4.py may need some modifications to get the required IPs/hostnames when
>> a VM has multiple interfaces?
>> A personalized hack works for me, but I have to modify the file each time I
>> upgrade AWX.
>> --
>>
>>
[snip]

Any feedback on this and why the sync of the inventory from the GUI doesn't
> update the ansible_host variable?
> Nathanaël, can you try, if you have a dev environment, to change the
> ansible_host to another value and see if it is picked up or not?
> In the meantime I see that in my bug entry for AWX the status has been
> put to need_info, but I don't know why, and I have asked for more info about
> it
>
> Gianluca
>

I think I have found the culprit: during a source inventory sync it is the
ovirt4.py of the awx_task container that is executed, not the ovirt4.py of
the awx_web container.
In fact, when creating a new inventory based on another RHV environment I got
the same problem with the ansible_host variable value.
As soon as I modified
/var/lib/awx/venv/awx/lib/python3.6/site-packages/awx/plugins/inventory/ovirt4.py
on the awx_task container instead, I got correct values (vm.fqdn) both for
newly created inventories and when updating existing ones.
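For anyone hitting the same thing, a rough sketch of re-applying a patched ovirt4.py after an AWX upgrade (container names as used in this thread; this assumes a Docker-based AWX install, and the change is lost again whenever the images are rebuilt):

# pull the current script out of the task container for editing
docker cp awx_task:/var/lib/awx/venv/awx/lib/python3.6/site-packages/awx/plugins/inventory/ovirt4.py ./ovirt4.py

# ... apply the local modification to ./ovirt4.py, then push it into BOTH containers ...
docker cp ./ovirt4.py awx_task:/var/lib/awx/venv/awx/lib/python3.6/site-packages/awx/plugins/inventory/ovirt4.py
docker cp ./ovirt4.py awx_web:/var/lib/awx/venv/awx/lib/python3.6/site-packages/awx/plugins/inventory/ovirt4.py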

Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LRW4BLZPBHUVAXY36HJPYQ3XB66XQ24X/


[ovirt-users] Re: After NAS crash, one VM will not start up, Cannot prepare illegal volume

2020-01-09 Thread David Johnson
Additional info:

The failure appears to be from a simple legality check:

    def isLegal(self):
        try:
            legality = self.getMetaParam(sc.LEGALITY)
            return legality != sc.ILLEGAL_VOL
        except se.MetaDataKeyNotFoundError:
            return True

Looking at the metadata above, the legality is 'LEGAL'
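The volume the new error complains about (a67eb40c-e0a1-49cc-9179-bebb263d6e9c) belongs to a different disk, so its metadata is presumably still ILLEGAL. On a file-based (NFS) domain every volume has a companion .meta file next to it; a quick, hedged way to see which volumes are still marked illegal (the path is only an example, based on the mount shown in the logs):

# list the LEGALITY flag of every volume on the file storage domain;
# anything still ILLEGAL needs the same fix as the first disk
grep -H '^LEGALITY=' \
    /rhev/data-center/mnt/192.168.2.220:_mnt_ovirt-freenas/*/images/*/*.meta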

Regards,
David Johnson
Director of Development, Maxis Technology
844.696.2947 ext 702 (o)  |  479.531.3590 (c)
djohn...@maxistechnology.com


[image: Maxis Techncology] 
www.maxistechnology.com


*stay connected *


On Thu, Jan 9, 2020 at 6:15 AM David Johnson 
wrote:

> We had a drive in our NAS fail, but afterwards one of our VMs will not
> start.
>
> The boot drive on the VM is (so near as I can tell) the only drive
> affected.
>
> I confirmed that the disk images (active and snapshot) are both valid with
> qemu.
>
> I followed the instructions at
> https://www.canarytek.com/2017/07/02/Recover_oVirt_Illegal_Snapshots.html to
> identify the snapshot images that were marked "invalid" and marked them as
> valid.
>
> update images set imagestatus=1 where imagestatus=4;
>
>
>
> Log excerpt from attempt to start VM:
> 2020-01-09 02:18:44,908-0600 INFO  (vm/c5d0a42f) [vdsm.api] START
> prepareImage(sdUUID='6e627364-5e0c-4250-ac95-7cd914d0175f',
> spUUID='25cd9bfc-bab6-11e8-90f3-78acc0b47b4d',
> imgUUID='4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6',
> leafUUID='f8066c56-6db1-4605-8d7c-0739335d30b8', allowIllegal=False)
> from=internal, task_id=26053225-6569-4b73-abdd-7d6c7e15d1e9 (api:46)
> 2020-01-09 02:18:44,931-0600 INFO  (vm/c5d0a42f) [vdsm.api] FINISH
> prepareImage error=Cannot prepare illegal volume:
> (u'f8066c56-6db1-4605-8d7c-0739335d30b8',) from=internal,
> task_id=26053225-6569-4b73-abdd-7d6c7e15d1e9 (api:50)
> 2020-01-09 02:18:44,932-0600 ERROR (vm/c5d0a42f)
> [storage.TaskManager.Task] (Task='26053225-6569-4b73-abdd-7d6c7e15d1e9')
> Unexpected error (task:875)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882,
> in _run
> return fn(*args, **kargs)
>   File "", line 2, in prepareImage
>   File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in
> method
> ret = func(*args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 3187,
> in prepareImage
> raise se.prepareIllegalVolumeError(volUUID)
> prepareIllegalVolumeError: Cannot prepare illegal volume:
> (u'f8066c56-6db1-4605-8d7c-0739335d30b8',)
> 2020-01-09 02:18:44,932-0600 INFO  (vm/c5d0a42f)
> [storage.TaskManager.Task] (Task='26053225-6569-4b73-abdd-7d6c7e15d1e9')
> aborting: Task is aborted: "Cannot prepare illegal volume:
> (u'f8066c56-6db1-4605-8d7c-0739335d30b8',)" - code 227 (task:1181)
> 2020-01-09 02:18:44,933-0600 ERROR (vm/c5d0a42f) [storage.Dispatcher]
> FINISH prepareImage error=Cannot prepare illegal volume:
> (u'f8066c56-6db1-4605-8d7c-0739335d30b8',) (dispatcher:82)
> 2020-01-09 02:18:44,933-0600 ERROR (vm/c5d0a42f) [virt.vm]
> (vmId='c5d0a42f-3b1e-43ee-a567-7844654011f5') The vm start process failed
> (vm:949)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 878, in
> _startUnderlyingVm
> self._run()
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2798, in
> _run
> self._devices = self._make_devices()
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2639, in
> _make_devices
> disk_objs = self._perform_host_local_adjustment()
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2712, in
> _perform_host_local_adjustment
> self._preparePathsForDrives(disk_params)
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 1023, in
> _preparePathsForDrives
> drive['path'] = self.cif.prepareVolumePath(drive, self.id)
>   File "/usr/lib/python2.7/site-packages/vdsm/clientIF.py", line 417, in
> prepareVolumePath
> raise vm.VolumeError(drive)
> VolumeError: Bad volume specification {'address': {'bus': '0',
> 'controller': '0', 'type': 'drive', 'target': '0', 'unit': '0'}, 'serial':
> '4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6', 'index': 0, 'iface': 'scsi',
> 'apparentsize': '36440899584', 'specParams': {}, 'cache': 'writeback',
> 'imageID': '4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6', 'truesize':
> '16916186624', 'type': 'disk', 'domainID':
> '6e627364-5e0c-4250-ac95-7cd914d0175f', 'reqsize': '0', 'format': 'cow',
> 'poolID': '25cd9bfc-bab6-11e8-90f3-78acc0b47b4d', 'device': 'disk', 'path':
> '/rhev/data-center/25cd9bfc-bab6-11e8-90f3-78acc0b47b4d/6e627364-5e0c-4250-ac95-7cd914d0175f/images/4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6/f8066c56-6db1-4605-8d7c-0739335d30b8',
> 'propagateErrors': 'off', 'name': 'sda', 'bootOrder': '1', 'volumeID':
> 'f8066c56-6db1-4605-8d7c-0739335d30b8', 'diskType': 'file', 'alias':
> 'ua-4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6', 'discard': False}
> 2020-01-09 02:18:44,934-0600 INFO  

[ovirt-users] Re: Storage I/O problem.

2020-01-09 Thread Christian Reiss

I found these in the logs:

[2020-01-09 12:30:01.690759] W [MSGID: 114031] 
[client-rpc-fops_v2.c:2697:client4_0_readv_cbk] 0-vms-client-2: remote 
operation failed [Invalid argument]
[2020-01-09 12:30:01.691284] W [MSGID: 114031] 
[client-rpc-fops_v2.c:2697:client4_0_readv_cbk] 0-vms-client-0: remote 
operation failed [Invalid argument]
[2020-01-09 12:30:01.691469] W [MSGID: 114031] 
[client-rpc-fops_v2.c:2697:client4_0_readv_cbk] 0-vms-client-1: remote 
operation failed [Invalid argument]
[2020-01-09 12:30:01.691500] W [fuse-bridge.c:2830:fuse_readv_cbk] 
0-glusterfs-fuse: 2509: READ => -1 
gfid=dbec5303-64c8-4e56-ae27-455f34fdfccc fd=0x7fcd38007148 (Invalid 
argument)
[2020-01-09 12:30:01.694257] W [fuse-bridge.c:2830:fuse_readv_cbk] 
0-glusterfs-fuse: 2514: READ => -1 
gfid=dbec5303-64c8-4e56-ae27-455f34fdfccc fd=0x7fcd38007148 (Invalid 
argument)
[2020-01-09 12:30:02.036328] W [MSGID: 114031] 
[client-rpc-fops_v2.c:2634:client4_0_lookup_cbk] 0-vms-client-2: remote 
operation failed. Path: /.shard/dbec5303-64c8-4e56-ae27-455f34fdfccc.12 
(----) [Permission denied]
[2020-01-09 12:30:02.036544] W [MSGID: 114031] 
[client-rpc-fops_v2.c:2634:client4_0_lookup_cbk] 0-vms-client-0: remote 
operation failed. Path: /.shard/dbec5303-64c8-4e56-ae27-455f34fdfccc.12 
(----) [Permission denied]
[2020-01-09 12:30:02.037320] W [MSGID: 114031] 
[client-rpc-fops_v2.c:2634:client4_0_lookup_cbk] 0-vms-client-1: remote 
operation failed. Path: /.shard/dbec5303-64c8-4e56-ae27-455f34fdfccc.12 
(----) [Permission denied]
[2020-01-09 12:30:02.037338] E [MSGID: 133010] 
[shard.c:2327:shard_common_lookup_shards_cbk] 0-vms-shard: Lookup on 
shard 12 failed. Base file gfid = dbec5303-64c8-4e56-ae27-455f34fdfccc 
[Permission denied]
[2020-01-09 12:30:02.037371] W [fuse-bridge.c:2830:fuse_readv_cbk] 
0-glusterfs-fuse: 2543: READ => -1 
gfid=dbec5303-64c8-4e56-ae27-455f34fdfccc fd=0x7fcd38014788 (Permission 
denied)



Anyone? Help? :)

-Chris.

On 08/01/2020 17:10, Christian Reiss wrote:

Ugh,

After having rebooted the 3-way HCI cluster everything came back online:
all the gluster volumes are online, no split-brain detected, and they
are mounted on all nodes.


The Volumes, Domains and Disks are all marked green in the Engine.

Launching a VM fails with

"VM test01 has been paused due to storage I/O problem."

An md5 sum of an uploaded ISO image from all three mounted cluster
members yields the same md5, as does the source file on my PC. Creating a
new VM and attaching that ISO fails: the ISO is not readable (says the
VM console).


The disk image from the test01 VM seems sound (the file has the correct size,
and the file tool shows the correct magic header); other files (configs) are readable.


I did a complete cluster reboot, etc.

gluster> volume heal vms info split-brain
Brick node01:/gluster_bricks/vms/vms
Status: Connected
Number of entries in split-brain: 0

Brick node02:/gluster_bricks/vms/vms
Status: Connected
Number of entries in split-brain: 0

Brick node03:/gluster_bricks/vms/vms
Status: Connected
Number of entries in split-brain: 0


gluster> volume heal vms info summary
Brick node01:/gluster_bricks/vms/vms
Status: Connected
Total Number of entries: 0
Number of entries in heal pending: 0
Number of entries in split-brain: 0
Number of entries possibly healing: 0

Brick node02:/gluster_bricks/vms/vms
Status: Connected
Total Number of entries: 0
Number of entries in heal pending: 0
Number of entries in split-brain: 0
Number of entries possibly healing: 0

Brick node03:/gluster_bricks/vms/vms
Status: Connected
Total Number of entries: 0
Number of entries in heal pending: 0
Number of entries in split-brain: 0
Number of entries possibly healing: 0


How would I fix this issue?
Anyone got a clue on how to proceed?

-Chris.



--
with kind regards,
mit freundlichen Gruessen,

Christian Reiss
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4OOJAHBUKDM6KZ2IGJQ6KHYWAJZSORCO/


[ovirt-users] Re: After NAS crash, one VM will not start up, Cannot prepare illegal volume

2020-01-09 Thread David Johnson
Thank you for the quick response.

Where do I find that?

Regards,
David Johnson
Director of Development, Maxis Technology
844.696.2947 ext 702 (o)  |  479.531.3590 (c)
djohn...@maxistechnology.com


[image: Maxis Techncology] 
www.maxistechnology.com


*stay connected *


On Thu, Jan 9, 2020 at 6:24 AM Benny Zlotnik  wrote:

> Did you change the volume metadata to LEGAL on the storage as well?
>
>
> On Thu, Jan 9, 2020 at 2:19 PM David Johnson 
> wrote:
>
>> We had a drive in our NAS fail, but afterwards one of our VMs will not
>> start.
>>
>> The boot drive on the VM is (so near as I can tell) the only drive
>> affected.
>>
>> I confirmed that the disk images (active and snapshot) are both valid
>> with qemu.
>>
>> I followed the instructions at
>> https://www.canarytek.com/2017/07/02/Recover_oVirt_Illegal_Snapshots.html to
>> identify the snapshot images that were marked "invalid" and marked them as
>> valid.
>>
>> update images set imagestatus=1 where imagestatus=4;
>>
>>
>>
>> Log excerpt from attempt to start VM:
>> 2020-01-09 02:18:44,908-0600 INFO  (vm/c5d0a42f) [vdsm.api] START
>> prepareImage(sdUUID='6e627364-5e0c-4250-ac95-7cd914d0175f',
>> spUUID='25cd9bfc-bab6-11e8-90f3-78acc0b47b4d',
>> imgUUID='4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6',
>> leafUUID='f8066c56-6db1-4605-8d7c-0739335d30b8', allowIllegal=False)
>> from=internal, task_id=26053225-6569-4b73-abdd-7d6c7e15d1e9 (api:46)
>> 2020-01-09 02:18:44,931-0600 INFO  (vm/c5d0a42f) [vdsm.api] FINISH
>> prepareImage error=Cannot prepare illegal volume:
>> (u'f8066c56-6db1-4605-8d7c-0739335d30b8',) from=internal,
>> task_id=26053225-6569-4b73-abdd-7d6c7e15d1e9 (api:50)
>> 2020-01-09 02:18:44,932-0600 ERROR (vm/c5d0a42f)
>> [storage.TaskManager.Task] (Task='26053225-6569-4b73-abdd-7d6c7e15d1e9')
>> Unexpected error (task:875)
>> Traceback (most recent call last):
>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882,
>> in _run
>> return fn(*args, **kargs)
>>   File "", line 2, in prepareImage
>>   File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in
>> method
>> ret = func(*args, **kwargs)
>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 3187,
>> in prepareImage
>> raise se.prepareIllegalVolumeError(volUUID)
>> prepareIllegalVolumeError: Cannot prepare illegal volume:
>> (u'f8066c56-6db1-4605-8d7c-0739335d30b8',)
>> 2020-01-09 02:18:44,932-0600 INFO  (vm/c5d0a42f)
>> [storage.TaskManager.Task] (Task='26053225-6569-4b73-abdd-7d6c7e15d1e9')
>> aborting: Task is aborted: "Cannot prepare illegal volume:
>> (u'f8066c56-6db1-4605-8d7c-0739335d30b8',)" - code 227 (task:1181)
>> 2020-01-09 02:18:44,933-0600 ERROR (vm/c5d0a42f) [storage.Dispatcher]
>> FINISH prepareImage error=Cannot prepare illegal volume:
>> (u'f8066c56-6db1-4605-8d7c-0739335d30b8',) (dispatcher:82)
>> 2020-01-09 02:18:44,933-0600 ERROR (vm/c5d0a42f) [virt.vm]
>> (vmId='c5d0a42f-3b1e-43ee-a567-7844654011f5') The vm start process failed
>> (vm:949)
>> Traceback (most recent call last):
>>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 878, in
>> _startUnderlyingVm
>> self._run()
>>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2798, in
>> _run
>> self._devices = self._make_devices()
>>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2639, in
>> _make_devices
>> disk_objs = self._perform_host_local_adjustment()
>>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2712, in
>> _perform_host_local_adjustment
>> self._preparePathsForDrives(disk_params)
>>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 1023, in
>> _preparePathsForDrives
>> drive['path'] = self.cif.prepareVolumePath(drive, self.id)
>>   File "/usr/lib/python2.7/site-packages/vdsm/clientIF.py", line 417, in
>> prepareVolumePath
>> raise vm.VolumeError(drive)
>> VolumeError: Bad volume specification {'address': {'bus': '0',
>> 'controller': '0', 'type': 'drive', 'target': '0', 'unit': '0'}, 'serial':
>> '4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6', 'index': 0, 'iface': 'scsi',
>> 'apparentsize': '36440899584', 'specParams': {}, 'cache': 'writeback',
>> 'imageID': '4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6', 'truesize':
>> '16916186624', 'type': 'disk', 'domainID':
>> '6e627364-5e0c-4250-ac95-7cd914d0175f', 'reqsize': '0', 'format': 'cow',
>> 'poolID': '25cd9bfc-bab6-11e8-90f3-78acc0b47b4d', 'device': 'disk', 'path':
>> '/rhev/data-center/25cd9bfc-bab6-11e8-90f3-78acc0b47b4d/6e627364-5e0c-4250-ac95-7cd914d0175f/images/4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6/f8066c56-6db1-4605-8d7c-0739335d30b8',
>> 'propagateErrors': 'off', 'name': 'sda', 'bootOrder': '1', 'volumeID':
>> 'f8066c56-6db1-4605-8d7c-0739335d30b8', 'diskType': 'file', 'alias':
>> 'ua-4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6', 'discard': False}
>> 2020-01-09 02:18:44,934-0600 INFO  (vm/c5d0a42f) [virt.vm]
>> 

[ovirt-users] Re: After NAS crash, one VM will not start up, Cannot prepare illegal volume

2020-01-09 Thread Benny Zlotnik
Did you change the volume metadata to LEGAL on the storage as well?
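For a file-based (NFS) domain like the one in the logs, "on the storage" means the volume's companion .meta file; a hedged sketch of checking and fixing it (path reconstructed from the log excerpt, and the .meta should be backed up first):

META=/rhev/data-center/mnt/192.168.2.220:_mnt_ovirt-freenas/6e627364-5e0c-4250-ac95-7cd914d0175f/images/4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6/f8066c56-6db1-4605-8d7c-0739335d30b8.meta

cp "$META" "$META.bak"                                # keep a backup
grep '^LEGALITY=' "$META"                             # expect LEGALITY=ILLEGAL
sed -i 's/^LEGALITY=ILLEGAL/LEGALITY=LEGAL/' "$META"  # mark it legal again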


On Thu, Jan 9, 2020 at 2:19 PM David Johnson 
wrote:

> We had a drive in our NAS fail, but afterwards one of our VMs will not
> start.
>
> The boot drive on the VM is (so near as I can tell) the only drive
> affected.
>
> I confirmed that the disk images (active and snapshot) are both valid with
> qemu.
>
> I followed the instructions at
> https://www.canarytek.com/2017/07/02/Recover_oVirt_Illegal_Snapshots.html to
> identify the snapshot images that were marked "invalid" and marked them as
> valid.
>
> update images set imagestatus=1 where imagestatus=4;
>
>
>
> Log excerpt from attempt to start VM:
> 2020-01-09 02:18:44,908-0600 INFO  (vm/c5d0a42f) [vdsm.api] START
> prepareImage(sdUUID='6e627364-5e0c-4250-ac95-7cd914d0175f',
> spUUID='25cd9bfc-bab6-11e8-90f3-78acc0b47b4d',
> imgUUID='4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6',
> leafUUID='f8066c56-6db1-4605-8d7c-0739335d30b8', allowIllegal=False)
> from=internal, task_id=26053225-6569-4b73-abdd-7d6c7e15d1e9 (api:46)
> 2020-01-09 02:18:44,931-0600 INFO  (vm/c5d0a42f) [vdsm.api] FINISH
> prepareImage error=Cannot prepare illegal volume:
> (u'f8066c56-6db1-4605-8d7c-0739335d30b8',) from=internal,
> task_id=26053225-6569-4b73-abdd-7d6c7e15d1e9 (api:50)
> 2020-01-09 02:18:44,932-0600 ERROR (vm/c5d0a42f)
> [storage.TaskManager.Task] (Task='26053225-6569-4b73-abdd-7d6c7e15d1e9')
> Unexpected error (task:875)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882,
> in _run
> return fn(*args, **kargs)
>   File "", line 2, in prepareImage
>   File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in
> method
> ret = func(*args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 3187,
> in prepareImage
> raise se.prepareIllegalVolumeError(volUUID)
> prepareIllegalVolumeError: Cannot prepare illegal volume:
> (u'f8066c56-6db1-4605-8d7c-0739335d30b8',)
> 2020-01-09 02:18:44,932-0600 INFO  (vm/c5d0a42f)
> [storage.TaskManager.Task] (Task='26053225-6569-4b73-abdd-7d6c7e15d1e9')
> aborting: Task is aborted: "Cannot prepare illegal volume:
> (u'f8066c56-6db1-4605-8d7c-0739335d30b8',)" - code 227 (task:1181)
> 2020-01-09 02:18:44,933-0600 ERROR (vm/c5d0a42f) [storage.Dispatcher]
> FINISH prepareImage error=Cannot prepare illegal volume:
> (u'f8066c56-6db1-4605-8d7c-0739335d30b8',) (dispatcher:82)
> 2020-01-09 02:18:44,933-0600 ERROR (vm/c5d0a42f) [virt.vm]
> (vmId='c5d0a42f-3b1e-43ee-a567-7844654011f5') The vm start process failed
> (vm:949)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 878, in
> _startUnderlyingVm
> self._run()
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2798, in
> _run
> self._devices = self._make_devices()
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2639, in
> _make_devices
> disk_objs = self._perform_host_local_adjustment()
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2712, in
> _perform_host_local_adjustment
> self._preparePathsForDrives(disk_params)
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 1023, in
> _preparePathsForDrives
> drive['path'] = self.cif.prepareVolumePath(drive, self.id)
>   File "/usr/lib/python2.7/site-packages/vdsm/clientIF.py", line 417, in
> prepareVolumePath
> raise vm.VolumeError(drive)
> VolumeError: Bad volume specification {'address': {'bus': '0',
> 'controller': '0', 'type': 'drive', 'target': '0', 'unit': '0'}, 'serial':
> '4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6', 'index': 0, 'iface': 'scsi',
> 'apparentsize': '36440899584', 'specParams': {}, 'cache': 'writeback',
> 'imageID': '4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6', 'truesize':
> '16916186624', 'type': 'disk', 'domainID':
> '6e627364-5e0c-4250-ac95-7cd914d0175f', 'reqsize': '0', 'format': 'cow',
> 'poolID': '25cd9bfc-bab6-11e8-90f3-78acc0b47b4d', 'device': 'disk', 'path':
> '/rhev/data-center/25cd9bfc-bab6-11e8-90f3-78acc0b47b4d/6e627364-5e0c-4250-ac95-7cd914d0175f/images/4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6/f8066c56-6db1-4605-8d7c-0739335d30b8',
> 'propagateErrors': 'off', 'name': 'sda', 'bootOrder': '1', 'volumeID':
> 'f8066c56-6db1-4605-8d7c-0739335d30b8', 'diskType': 'file', 'alias':
> 'ua-4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6', 'discard': False}
> 2020-01-09 02:18:44,934-0600 INFO  (vm/c5d0a42f) [virt.vm]
> (vmId='c5d0a42f-3b1e-43ee-a567-7844654011f5') Changed state to Down: Bad
> volume specification {'address': {'bus': '0', 'controller': '0', 'type':
> 'drive', 'target': '0', 'unit': '0'}, 'serial':
> '4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6', 'index': 0, 'iface': 'scsi',
> 'apparentsize': '36440899584', 'specParams': {}, 'cache': 'writeback',
> 'imageID': '4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6', 'truesize':
> '16916186624', 'type': 'disk', 'domainID':
> '6e627364-5e0c-4250-ac95-7cd914d0175f', 'reqsize': '0', 'format': 'cow',

[ovirt-users] After NAS crash, one VM will not start up, Cannot prepare illegal volume

2020-01-09 Thread David Johnson
We had a drive in our NAS fail, but afterwards one of our VMs will not
start.

The boot drive on the VM is (so near as I can tell) the only drive affected.

I confirmed that the disk images (active and snapshot) are both valid with
qemu.

I followed the instructions at
https://www.canarytek.com/2017/07/02/Recover_oVirt_Illegal_Snapshots.html to
identify the snapshot images that were marked "invalid" and marked them as
valid.

update images set imagestatus=1 where imagestatus=4;
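Before flipping the status it is worth listing which images the engine actually has marked illegal; a hedged sketch against the engine database (default database name "engine"; imagestatus 4 = illegal, 1 = OK, column names as used in the linked post):

# on the engine host; depending on the oVirt version, psql may live in the
# bundled (SCL) PostgreSQL rather than the system one
sudo -u postgres psql engine -c \
    "SELECT image_guid, imagestatus FROM images WHERE imagestatus = 4;"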



Log excerpt from attempt to start VM:
2020-01-09 02:18:44,908-0600 INFO  (vm/c5d0a42f) [vdsm.api] START
prepareImage(sdUUID='6e627364-5e0c-4250-ac95-7cd914d0175f',
spUUID='25cd9bfc-bab6-11e8-90f3-78acc0b47b4d',
imgUUID='4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6',
leafUUID='f8066c56-6db1-4605-8d7c-0739335d30b8', allowIllegal=False)
from=internal, task_id=26053225-6569-4b73-abdd-7d6c7e15d1e9 (api:46)
2020-01-09 02:18:44,931-0600 INFO  (vm/c5d0a42f) [vdsm.api] FINISH
prepareImage error=Cannot prepare illegal volume:
(u'f8066c56-6db1-4605-8d7c-0739335d30b8',) from=internal,
task_id=26053225-6569-4b73-abdd-7d6c7e15d1e9 (api:50)
2020-01-09 02:18:44,932-0600 ERROR (vm/c5d0a42f) [storage.TaskManager.Task]
(Task='26053225-6569-4b73-abdd-7d6c7e15d1e9') Unexpected error (task:875)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882,
in _run
return fn(*args, **kargs)
  File "", line 2, in prepareImage
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in
method
ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 3187,
in prepareImage
raise se.prepareIllegalVolumeError(volUUID)
prepareIllegalVolumeError: Cannot prepare illegal volume:
(u'f8066c56-6db1-4605-8d7c-0739335d30b8',)
2020-01-09 02:18:44,932-0600 INFO  (vm/c5d0a42f) [storage.TaskManager.Task]
(Task='26053225-6569-4b73-abdd-7d6c7e15d1e9') aborting: Task is aborted:
"Cannot prepare illegal volume: (u'f8066c56-6db1-4605-8d7c-0739335d30b8',)"
- code 227 (task:1181)
2020-01-09 02:18:44,933-0600 ERROR (vm/c5d0a42f) [storage.Dispatcher]
FINISH prepareImage error=Cannot prepare illegal volume:
(u'f8066c56-6db1-4605-8d7c-0739335d30b8',) (dispatcher:82)
2020-01-09 02:18:44,933-0600 ERROR (vm/c5d0a42f) [virt.vm]
(vmId='c5d0a42f-3b1e-43ee-a567-7844654011f5') The vm start process failed
(vm:949)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 878, in
_startUnderlyingVm
self._run()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2798, in
_run
self._devices = self._make_devices()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2639, in
_make_devices
disk_objs = self._perform_host_local_adjustment()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2712, in
_perform_host_local_adjustment
self._preparePathsForDrives(disk_params)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 1023, in
_preparePathsForDrives
drive['path'] = self.cif.prepareVolumePath(drive, self.id)
  File "/usr/lib/python2.7/site-packages/vdsm/clientIF.py", line 417, in
prepareVolumePath
raise vm.VolumeError(drive)
VolumeError: Bad volume specification {'address': {'bus': '0',
'controller': '0', 'type': 'drive', 'target': '0', 'unit': '0'}, 'serial':
'4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6', 'index': 0, 'iface': 'scsi',
'apparentsize': '36440899584', 'specParams': {}, 'cache': 'writeback',
'imageID': '4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6', 'truesize':
'16916186624', 'type': 'disk', 'domainID':
'6e627364-5e0c-4250-ac95-7cd914d0175f', 'reqsize': '0', 'format': 'cow',
'poolID': '25cd9bfc-bab6-11e8-90f3-78acc0b47b4d', 'device': 'disk', 'path':
'/rhev/data-center/25cd9bfc-bab6-11e8-90f3-78acc0b47b4d/6e627364-5e0c-4250-ac95-7cd914d0175f/images/4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6/f8066c56-6db1-4605-8d7c-0739335d30b8',
'propagateErrors': 'off', 'name': 'sda', 'bootOrder': '1', 'volumeID':
'f8066c56-6db1-4605-8d7c-0739335d30b8', 'diskType': 'file', 'alias':
'ua-4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6', 'discard': False}
2020-01-09 02:18:44,934-0600 INFO  (vm/c5d0a42f) [virt.vm]
(vmId='c5d0a42f-3b1e-43ee-a567-7844654011f5') Changed state to Down: Bad
volume specification {'address': {'bus': '0', 'controller': '0', 'type':
'drive', 'target': '0', 'unit': '0'}, 'serial':
'4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6', 'index': 0, 'iface': 'scsi',
'apparentsize': '36440899584', 'specParams': {}, 'cache': 'writeback',
'imageID': '4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6', 'truesize':
'16916186624', 'type': 'disk', 'domainID':
'6e627364-5e0c-4250-ac95-7cd914d0175f', 'reqsize': '0', 'format': 'cow',
'poolID': '25cd9bfc-bab6-11e8-90f3-78acc0b47b4d', 'device': 'disk', 'path':
'/rhev/data-center/25cd9bfc-bab6-11e8-90f3-78acc0b47b4d/6e627364-5e0c-4250-ac95-7cd914d0175f/images/4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6/f8066c56-6db1-4605-8d7c-0739335d30b8',
'propagateErrors': 'off', 'name': 'sda', 'bootOrder': 

[ovirt-users] Re: Low disk space on Storage

2020-01-09 Thread Demeter Tibor
Dear Users, 

I have exactly the same problem.
When I try to migrate a big (16 TB) disk image to another storage domain,
I get this error message:

Error while executing action: Cannot move Virtual Disk. Low disk space on 
Storage Domain Backup. 

But I have a lot of space... 

Ovirt 4.3.7 

Please help me. What can I do?

Regards, 

Tibor 

- On 13 Nov 2019, at 11:11,  wrote:

> The update fixes the issue.

> Thanks

> De: "Benny Zlotnik" 
> Para: supo...@logicworks.pt
> Cc: "users" 
> Enviadas: Terça-feira, 12 De Novembro de 2019 11:48:47
> Assunto: Re: [ovirt-users] Low disk space on Storage

> This was fixed in 4.3.6, I suggest upgrading

> On Tue, Nov 12, 2019 at 12:45 PM  wrote:

> > Hi,

> > I'm running ovirt Version:4.3.4.3-1.el7
> > My filesystem disk has 30 GB free space.
> > Cannot start a VM due to a storage I/O error.
> > When trying to move the disk to another storage domain I get this error:
>> Error while executing action: Cannot move Virtual Disk. Low disk space on
> > Storage Domain DATA4.

> > The sum of the pre-allocated disks equals the total size of the storage domain.

> > Any idea what I can do to move a disk to another storage domain?

> > Many thanks

> > --
> > 
> > Jose Ferradeira
> > http://www.logicworks.pt
> > ___
> > Users mailing list -- users@ovirt.org
> > To unsubscribe send an email to users-le...@ovirt.org
> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
> > https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
> > https://lists.ovirt.org/archives/list/users@ovirt.org/message/MQFPGUPB43I7OO7FXEPLG4XSG5X2INLJ/

> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/M4SDOU7S2IFIFZFPHOKRNCXFQ3C25AO2/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PVVRGUMPMIEZKJYJVPEUP4WKAV72P6LD/