Re: [ovirt-users] Move a VM between 2 setups

2016-11-15 Thread ccox
Export domains exist to move VMs between oVirt environments.  As
already mentioned, you attach the export domain to one data center,
export the VM, put the domain into maintenance and detach it, attach
it to the other data center, and import the VM (then maintenance,
detach, and so on).

The phrase "export domains 'between' engines" is a bit
confusing...

- Original Message -
From: "Christophe TREFOIS"
To: "Beckman Daniel", "users"
Sent: Mon, 14 Nov 2016 20:10:16 +
Subject: Re: [ovirt-users] Move a VM between 2 setups

Hi Daniel,

 

Fantastic information. Thanks so much for this.

 

I was already using export domains inside my current engine, but I was
not sure whether it would also work “between” engines, so to speak.

 

Have a great evening!

Christophe

 

FROM: Beckman, Daniel [mailto:daniel.beck...@ingramcontent.com]
SENT: Monday, 14 November 2016 20:43
TO: Christophe TREFOIS ; users
SUBJECT: Re: [ovirt-users] Move a VM between 2 setups

 

Hi Christophe,

 

An “export domain” is made for just this purpose. Create an NFS
(version 3) share and make it accessible to the hypervisors of each
engine. (It should be a dedicated NFS share, not used for anything
else.) As I recall, it should be owned by vdsm:kvm (36:36). In one of
the engines (it doesn’t matter which), go to Storage in the web admin
page and add a new NFS-based export domain, using the NFS share you
created. Once it’s activated, test it out: try right-clicking on a
VM to “export” it.
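The share preparation Daniel describes can be sketched as below. The directory path, subnet, and exports line are placeholders of my own, not from the thread; only the 36:36 ownership comes from his description.

```shell
# Sketch: prepare a dedicated directory for an NFS export domain.
# EXPORT_DIR defaults to a local path for illustration; on a real
# storage server it would be something like /exports/ovirt-export.
EXPORT_DIR="${EXPORT_DIR:-$PWD/ovirt-export}"
mkdir -p "$EXPORT_DIR"
chmod 0755 "$EXPORT_DIR"
# vdsm expects uid/gid 36:36; chown needs root, so tolerate failure here.
chown 36:36 "$EXPORT_DIR" 2>/dev/null || true
# A matching /etc/exports entry (hypothetical subnet) would be:
#   /exports/ovirt-export 10.0.0.0/24(rw,sync,anonuid=36,anongid=36)
# followed by: exportfs -ra
echo "prepared $EXPORT_DIR"
```

The anonuid/anongid squash is optional; the important part is that vdsm (uid 36) can write to the share.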

 

Note that there can only be one engine connected to a given export
domain at any one time. When you’re done testing the export domain
on the first engine, you need to put it into “maintenance” and
ultimately “detach” it.

 

Then go to the other engine, and this time under Storage, instead of
“new domain”, click “import domain” and enter the same NFS
share information. It should recognize that an export domain is already
set up on that share.  Attach and activate it, and under
Storage /   / VM Import, try importing the
VM you previously exported.

 

This is covered (sparsely) in the oVirt documentation at
https://www.ovirt.org/documentation/admin-guide/administration-guide/
[1], and it’s covered more coherently in the commercial RHV
documentation at
https://access.redhat.com/documentation/en/red-hat-virtualization/4.0/single/administration-guide#Storage_properties
[2]. 

 

Best,

Daniel

 

FROM:  on behalf of Christophe TREFOIS
DATE: Monday, November 14, 2016 at 11:14 AM
TO: users
SUBJECT: [ovirt-users] Move a VM between 2 setups

 

Hi, 

 

We have a setup where we want to deploy 2 engines, as the network
between the 2 buildings is unreliable.

 

With 2 engines, we then want to be able to move VMs (one time) from
the current engine, where they are running, to the new engine in the
other building.

 

Is there a recommended workflow for doing this?

We have access to shared NFS for this task if required.

 

Thanks for any pointers,

Christophe

-- 

DR CHRISTOPHE TREFOIS, DIPL.-ING.  

Technical Specialist / Post-Doc

UNIVERSITÉ DU LUXEMBOURG

LUXEMBOURG CENTRE FOR SYSTEMS BIOMEDICINE

Campus Belval | House of Biomedicine  

6, avenue du Swing 

L-4367 Belvaux  

T: +352 46 66 44 6124
F: +352 46 66 44 6949

http://www.uni.lu/lcsb


This message is confidential and may contain privileged information.
It is intended for the named recipient only.
If you receive it in error please notify me and permanently delete the
original message and any copies.

Links:
--
[1] https://www.ovirt.org/documentation/admin-guide/administration-guide/
[2] https://access.redhat.com/documentation/en/red-hat-virtualization/4.0/single/administration-guide#Storage_properties

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] I don't need reports anymore. How do I get rid of it? ovirt 3.6.7

2016-09-29 Thread ccox
I'd like to get rid of the Jasper server on my mgmt host altogether
(not a migration; I just don't want it at all).

I disabled the dwh and reports services (CentOS 7), but I think the
engine needs to be configured not to try to send anything (at least
that's how it looks to me from the logs).

What are the steps to remove Reports?

I'm using oVirt 3.6.7.




[ovirt-users] Import and Exporting domains (to export/import across datacenters) no longer working

2016-04-26 Thread ccox
In the past we had a 3.4 and a 3.5 oVirt.  I was able to attach an Export
domain to one, export Templates and/or VMs, put it into Maint mode,
detach it from the datacenter, go to the other oVirt manager, import the
domain (sometimes forcing an attach, sometimes it would attach
automatically), and import my VM and/or template.  Afterwards, I placed
it back into Maint mode and detached it.

Often there would be a residual icon left under Storage for the
detached item, and the process was fairly repeatable.

Now it's repeatable in a bad way.  It won't import; I have to
manually delete two table records and make a REST call to remove the
connected storage, and then and only then can I successfully import.

The psql db mods look like this (from the oVirt manager box).
First, find the erroneous data:

  select * from storage_domain_static where storage_domain_type = 3;

Then locate the offending uuid (see the full one below in the REST call)
and delete the record out of storage_domain_dynamic and
storage_domain_static, like so:

  engine=# delete from storage_domain_dynamic where id =
  '9f00c1d9-3f2a-41b9-80c3-344900622b07';
  DELETE 1
  engine=# select
  Deletestorage_domain_static('9f00c1d9-3f2a-41b9-80c3-344900622b07');

Folks here talked about the latter command, but it would fail if the
reference in the storage_domain_dynamic table wasn't removed first.


The REST call looks something like this:

  curl -v -u "admin@internal:ourpassword" -X DELETE
  https://ovirt.example.com/ovirt-engine/api/storageconnections/9f00c1d9-3f2a-41b9-80c3-344900622b07

Any ideas about what is apparently broken on our site?

Is there any other way to do export/import across datacenters?




[ovirt-users] Need to clear domain Export already exists in an ovirt (3.5)

2015-12-03 Thread ccox
Our oVirt 3.5 host thinks it has an export domain, but it's not visible
anywhere, and it's keeping us from importing a domain from a different
datacenter.  What database update do I need to issue to clear the bad
state from the oVirt 3.5 setup we are trying to import into?



Re: [ovirt-users] Any way to correlate a VM disk (e.g. /dev/vda) to the vdsm ovirt disk?

2015-10-07 Thread ccox
> Hi ccox,
> you can see the disk id mapping to device if you execute 'ls -l
> /dev/disk/by-id/'.
> A second, easier way is to make sure you have the guest agent installed
> on your guest virtual machine; then, using the REST API, you can run a
> GET command:
> GET on .../api/vms/{vm_id}/disks
>
> You will see an attribute called "logical_name".
> I hope that helps

I should have said that I'm running 3.4.  I don't think there's a
logical_name in that version, and by-id or by-uuid doesn't seem to
match anything.

Maybe this can't be done in 3.4?
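When logical_name is unavailable, the by-id entries can often still be matched by prefix: as I understand it (this is not stated in the thread), QEMU truncates the virtio serial to the first 20 characters of the oVirt disk's image UUID. A sketch, using an illustrative UUID like the made-up example elsewhere in this archive:

```shell
# Sketch: match a guest virtio serial back to an engine disk UUID.
# /dev/disk/by-id/ shows entries like virtio-<serial>, where <serial>
# appears to be the disk UUID truncated to 20 characters.
serial="978e00a3-b4c9-4962-b"                      # from: ls -l /dev/disk/by-id/
disk_uuid="978e00a3-b4c9-4962-bc4f-ffc9267acdd8"   # from the engine's disk list
case "$disk_uuid" in
  "$serial"*) match="yes" ;;
  *)          match="no" ;;
esac
echo "match=$match"
```

Comparing each engine disk UUID against each guest serial this way narrows the mapping even without the guest agent, though it cannot distinguish two disks whose UUIDs share a 20-character prefix.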




[ovirt-users] Any way to correlate a VM disk (e.g. /dev/vda) to the vdsm ovirt disk?

2015-10-06 Thread ccox
I want to correlate virtual disks back to their originating storage under
ovirt. Is there any way to do this?

e.g. (made up example)

/dev/vda

maps to ovirt disk

disk1_vm serial 978e00a3-b4c9-4962-bc4f-ffc9267acdd8



[ovirt-users] Is a dedicated oVirt mgmt VLAN still needed for oVirt host nodes?

2015-09-14 Thread ccox
We have two oVirt environments that I inherited.  One is running 3.4.0
and the other is running 3.5.0.

It seems that in both cases the prior administrator stated that a
dedicated VLAN was necessary for oVirt mgmt; that is, that we could not
run multiple tagged VLANs on a nic for a given oVirt host node.

Does any of this make sense?  Is this true?  Is it still true for more
contemporary versions of oVirt?

My problem is that our nodes are blades, and I only have two physical
nics per blade.  For redundancy, the two nics need to carry the same
VLANs so that failover works, which means we have to share the oVirt
mgmt network on the same wire.  That's the ideal.

Currently we have a whole nic on the blade just for oVirt management.  Is
this a requirement?
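For what it's worth, oVirt does allow the management network to be a tagged VLAN riding a bond shared with other networks. A sketch of the host-side ifcfg files for that shape; the interface names, bond mode, and VLAN id are placeholders, not from the thread:

```
# /etc/sysconfig/network-scripts/ifcfg-bond0   (both blade nics enslaved)
DEVICE=bond0
BONDING_OPTS="mode=active-backup miimon=100"
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-bond0.100   (VLAN 100, tagged)
DEVICE=bond0.100
VLAN=yes
BRIDGE=ovirtmgmt
ONBOOT=yes
```

With this shape, additional tagged VLANs (bond0.101, ...) can carry VM networks on the same pair of nics. Whether the prior administrator's "dedicated VLAN" advice was a hard requirement of an older release or local policy, the thread doesn't say.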

