Re: [Users] virt-manager migration

2014-04-03 Thread Ryan Barry

 From: Jeremiah Jahn jerem...@goodinassociates.com
 To: users@ovirt.org
 Sent: Wednesday, April 2, 2014 8:38:02 PM
 Subject: [Users] virt-manager migration


 Anyway, long story short. I'm having a difficult time finding
 documentation on migrating from virt-manager to oVirt, as well as on
 installing ovirt-nodes manually. I'd love to find this perfect world
 where I can just install the ovirt-node RPMs on my already running
 hosts and begin to have them managed by the oVirt engine, without any
 serious downtime.

The usual way is to go through virt-v2v. Essentially, you'd install the 
engine somewhere and configure a storage domain (the properties of which 
vary, but it's UUIDed and the UUID must match the engine) to bring the 
datacenter up, then add an export domain (which is also UUIDed).
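
If it helps, the rough shape of the engine-side setup is something like
the following. This is only a sketch: the package and tool names are
what oVirt 3.x ships, but the export path and NFS options are
placeholders, not required values.

  # on the engine machine (EL6/Fedora), after adding the oVirt release repo
  yum install -y ovirt-engine
  engine-setup

  # prepare an NFS share to back the export domain
  mkdir -p /exports/export-domain
  chown 36:36 /exports/export-domain        # vdsm:kvm ownership
  echo '/exports/export-domain *(rw)' >> /etc/exports
  exportfs -ra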


Once an export domain is created, virt-v2v can move your VMs over, but 
with downtime.
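
A minimal sketch of that conversion, assuming the old EL6-era virt-v2v
and purely illustrative names (check the virt-v2v man page for your
version, since the options have changed over time):

  # convert a local libvirt guest and write it into the export domain;
  # --network maps the guest NIC onto an engine logical network
  virt-v2v -ic qemu:///system -o rhev \
      -os nfs.example.com:/exports/export-domain \
      --network ovirtmgmt myguest

The converted guest then shows up in the export domain and can be
imported into a data domain from the Admin Portal.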


As far as turning your existing hosts into nodes, adding them from the 
engine is the easiest way (there's a wizard for this). It's possible to 
install the ovirt-node RPMs directly, but they take over your system a 
bit, and it's probably not what you're looking for. The engine can 
manage regular EL6/fedora hosts.
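
For what it's worth, the New Host wizard also has a scriptable
equivalent in the 3.x REST API; this is just a sketch with placeholder
credentials and host details:

  curl -k -u 'admin@internal:password' \
       -H 'Content-Type: application/xml' \
       -d '<host>
             <name>host1</name>
             <address>host1.example.com</address>
             <root_password>hostrootpw</root_password>
             <cluster><name>Default</name></cluster>
           </host>' \
       https://engine.example.com/api/hosts

Either way, the engine deploys vdsm and the rest of the host packages
itself.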


But registering to the engine will reconfigure libvirt, so the general 
path is:


Install engine.
Live-migrate VMs off one of your hosts (a virsh sketch follows the list).
Add that host as a node.
virt-v2v the machines that can take downtime (can you get a maintenance window?).
Bring them up on the new node.
Repeat until your environment is converted.
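
A minimal sketch of the live-migration step while the hosts are still
plain libvirt/virt-manager machines; the guest and host names are
placeholders, and shared (FC) storage visible to both hosts is assumed:

  virsh -c qemu:///system migrate --live --persistent --undefinesource \
      myguest qemu+ssh://otherhost.example.com/system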


Re: [Users] virt-manager migration

2014-04-03 Thread Jeremiah Jahn
Well, that was fun. So I let the oVirt engine install to a running
host that already had kvm/libvirt running on it. Don't ask why, but it
did happen. After figuring out how to set up a SASL user/password and
adding qemu to the disk group, I could start up all of my guests again.
My host now shows up in the list of hosts, but has a "One of the
Logical Networks defined for this Cluster is Unreachable by the Host."
error sitting on it. ovirt-node-setup also tells me I should set up a
network. I currently have 6 bridges running on this thing, one for
each VLAN. I'm unsure how to meld the 'bondX' in ovirt-node-setup with
my current network configuration to resolve the error, especially given
that I don't actually want to bond any of my NICs together at this
point. I do realise I'm doing this the hard way. My goal at the moment
is to just get the host to fully report in the engine, at which point
I think I'll be able to use v2v to finish up the rest.
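
For reference, those two steps were essentially the following on EL6
(the username is a placeholder):

  saslpasswd2 -a libvirt someuser   # add a user to libvirt's SASL password db
  usermod -a -G disk qemu           # let the qemu user open the FC LUNs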

Thanks for any suggestions or pointers.


On Thu, Apr 3, 2014 at 10:29 AM, Ryan Barry rba...@redhat.com wrote:
 From: Jeremiah Jahn jerem...@goodinassociates.com
 To: users@ovirt.org
 Sent: Wednesday, April 2, 2014 8:38:02 PM
 Subject: [Users] virt-manager migration



 Anyway, long story short. I'm having a difficult time finding
 documentation on migrating from virt-manager to oVirt, as well as on
 installing ovirt-nodes manually. I'd love to find this perfect world
 where I can just install the ovirt-node RPMs on my already running
 hosts and begin to have them managed by the oVirt engine, without any
 serious downtime.

 The usual way is to go through virt-v2v. Essentially, you'd install the
 engine somewhere and configure a storage domain (the properties of which
 vary, but it's UUIDed and the UUID must match the engine) to bring the
 datacenter up, then add an export domain (which is also UUIDed).

 Once an export domain is created, virt-v2v can move your VMs over, but with
 downtime.

 As far as turning your existing hosts into nodes, adding them from the
 engine is the easiest way (there's a wizard for this). It's possible to
 install the ovirt-node RPMs directly, but they take over your system a bit,
 and it's probably not what you're looking for. The engine can manage regular
 EL6/fedora hosts.

 But registering to the engine will reconfigure libvirt, so the general path
 is:

 Install engine.
 Live-migrate VMs off one of your hosts.
 Add that host as a node.
 virt-v2v the machines that can take downtime (can you get a maintenance window?).
 Bring them up on the new node.
 Repeat until your environment is converted.


Re: [Users] virt-manager migration

2014-04-03 Thread Ryan Barry

On 04/03/2014 03:58 PM, Jeremiah Jahn wrote:

Well, that was fun. So I let the oVirt engine install to a running
host that already had kvm/libvirt running on it. Don't ask why, but it
did happen. After figuring out how to set up a SASL user/password and
adding qemu to the disk group, I could start up all of my guests again.
My host now shows up in the list of hosts, but has a "One of the
Logical Networks defined for this Cluster is Unreachable by the Host."
error sitting on it. ovirt-node-setup also tells me I should set up a
network. I currently have 6 bridges running on this thing, one for
each VLAN. I'm unsure how to meld the 'bondX' in ovirt-node-setup with
my current network configuration to resolve the error, especially given
that I don't actually want to bond any of my NICs together at this
point. I do realise I'm doing this the hard way. My goal at the moment
is to just get the host to fully report in the engine, at which point
I think I'll be able to use v2v to finish up the rest.

Thanks for any suggestions or pointers.

That's ok. I probably would have skipped ovirt-node-setup (which is 
intended to run on the oVirt Node ISO) and used the New wizard from 
the Engine to add it (which would install requisite RPMs, etc).


The "one of the networks is not reachable" error isn't really my area,
but it's probably looking for the host to be reachable by IP on a bridge
called ovirtmgmt.


An engine developer can verify this, but I'd guess that adding a virtual 
NIC to whatever VLAN has the IP the engine sees and bridging that to 
ovirtmgmt with the appropriate address would work. It may only need the 
right bridge (ovirtmgmt, probably) defined. I've never experimented with 
this aspect of it, to be honest.
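
In EL6 network-scripts terms, a sketch of that might look like the
following; device names, the VLAN ID, and addresses are placeholders,
and it assumes the engine-facing IP lives on a tagged VLAN:

  # /etc/sysconfig/network-scripts/ifcfg-eth0.100
  DEVICE=eth0.100
  VLAN=yes
  ONBOOT=yes
  BRIDGE=ovirtmgmt

  # /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt
  DEVICE=ovirtmgmt
  TYPE=Bridge
  ONBOOT=yes
  BOOTPROTO=none
  IPADDR=192.0.2.10
  NETMASK=255.255.255.0
  GATEWAY=192.0.2.1

followed by a network service restart. Setting BOOTPROTO=none keeps the
host from retrying DHCP on an interface that already has an address.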


On Thu, Apr 3, 2014 at 10:29 AM, Ryan Barry rba...@redhat.com wrote:

From: Jeremiah Jahn jerem...@goodinassociates.com
To: users@ovirt.org
Sent: Wednesday, April 2, 2014 8:38:02 PM
Subject: [Users] virt-manager migration







Anyway, long story short. I'm having a difficult time finding
documentation on migrating from virt-manager to oVirt, as well as on
installing ovirt-nodes manually. I'd love to find this perfect world
where I can just install the ovirt-node RPMs on my already running
hosts and begin to have them managed by the oVirt engine, without any
serious downtime.


The usual way is to go through virt-v2v. Essentially, you'd install the
engine somewhere and configure a storage domain (the properties of which
vary, but it's UUIDed and the UUID must match the engine) to bring the
datacenter up, then add an export domain (which is also UUIDed).

Once an export domain is created, virt-v2v can move your VMs over, but with
downtime.

As far as turning your existing hosts into nodes, adding them from the
engine is the easiest way (there's a wizard for this). It's possible to
install the ovirt-node RPMs directly, but they take over your system a bit,
and it's probably not what you're looking for. The engine can manage regular
EL6/fedora hosts.

But registering to the engine will reconfigure libvirt, so the general path
is:

Install engine.
Live-migrate VMs off one of your hosts.
Add that host as a node.
virt-v2v the machines that can take downtime (can you get a maintenance window?).
Bring them up on the new node.
Repeat until your environment is converted.




Re: [Users] virt-manager migration

2014-04-03 Thread Jeremiah Jahn
I was just using ovirt-node-setup to see what's going on. Also used it
to set up my resolv.conf. I did use the New wizard, but it seems to
have failed at the point where the ovirtmgmt magic was supposed to
happen. Dragging the ovirtmgmt network onto the proper interface in the
host then results in udev repeatedly trying to restart my VLAN device
after it fails with a DHCP client request. Since it's already got an IP
on that VLAN, all hell breaks loose and no DHCP response ever comes.
Thanks for the help though, I hope someone else on the list can give
me a pointer or some help.

-jj-

On Thu, Apr 3, 2014 at 3:04 PM, Ryan Barry rba...@redhat.com wrote:
 On 04/03/2014 03:58 PM, Jeremiah Jahn wrote:

 Well, that was fun. So I let the oVirt engine install to a running
 host that already had kvm/libvirt running on it. Don't ask why, but it
 did happen. After figuring out how to set up a SASL user/password and
 adding qemu to the disk group, I could start up all of my guests
 again. My host now shows up in the list of hosts, but has a "One of
 the Logical Networks defined for this Cluster is Unreachable by the
 Host." error sitting on it. ovirt-node-setup also tells me I should
 set up a network. I currently have 6 bridges running on this thing,
 one for each VLAN. I'm unsure how to meld the 'bondX' in
 ovirt-node-setup with my current network configuration to resolve the
 error, especially given that I don't actually want to bond any of my
 NICs together at this point. I do realise I'm doing this the hard way.
 My goal at the moment is to just get the host to fully report in the
 engine, at which point I think I'll be able to use v2v to finish up
 the rest.

 Thanks for any suggestions or pointers.

 That's ok. I probably would have skipped ovirt-node-setup (which is intended
 to run on the oVirt Node ISO) and used the New wizard from the Engine to
 add it (which would install requisite RPMs, etc).

 The "one of the networks is not reachable" error isn't really my area,
 but it's probably looking for the host to be reachable by IP on a
 bridge called ovirtmgmt.

 An engine developer can verify this, but I'd guess that adding a virtual NIC
 to whatever VLAN has the IP the engine sees and bridging that to ovirtmgmt
 with the appropriate address would work. It may only need the right bridge
 (ovirtmgmt, probably) defined. I've never experimented with this aspect of
 it, to be honest.


 On Thu, Apr 3, 2014 at 10:29 AM, Ryan Barry rba...@redhat.com wrote:

 From: Jeremiah Jahn jerem...@goodinassociates.com
 To: users@ovirt.org
 Sent: Wednesday, April 2, 2014 8:38:02 PM
 Subject: [Users] virt-manager migration




 Anyway, long story short. I'm having a difficult time finding
 documentation on migrating from virt-manager to oVirt, as well as on
 installing ovirt-nodes manually. I'd love to find this perfect world
 where I can just install the ovirt-node RPMs on my already running
 hosts and begin to have them managed by the oVirt engine, without any
 serious downtime.


 The usual way is to go through virt-v2v. Essentially, you'd install the
 engine somewhere and configure a storage domain (the properties of which
 vary, but it's UUIDed and the UUID must match the engine) to bring the
 datacenter up, then add an export domain (which is also UUIDed).

 Once an export domain is created, virt-v2v can move your VMs over, but
 with downtime.

 As far as turning your existing hosts into nodes, adding them from the
 engine is the easiest way (there's a wizard for this). It's possible to
 install the ovirt-node RPMs directly, but they take over your system a
 bit, and it's probably not what you're looking for. The engine can
 manage regular EL6/fedora hosts.

 But registering to the engine will reconfigure libvirt, so the general
 path is:

 Install engine.
 Live-migrate VMs off one of your hosts.
 Add that host as a node.
 virt-v2v the machines that can take downtime (can you get a maintenance
 window?).
 Bring them up on the new node.
 Repeat until your environment is converted.




[Users] virt-manager migration

2014-04-02 Thread Jeremiah Jahn
Quick background:

I have a number of hosts already running SL6.5 and have been using
virt-manager with them for some time. All of the guests are on by-id
Fibre Channel LUNs. I'm attempting to migrate from virt-manager to
oVirt. I'll be dropping my SAN array and migrating all of the LUNs to a
gluster system, which will be exporting the images to my blades over
SCST or LIO. FC is what I already have and won't saturate my network,
as opposed to trying to use the 1 Gb/s network cards. Gluster is being
used for redundancy and geo-replication, but not image hosting to the
hosts.

Anyway, long story short. I'm having a difficult time finding documentation
on migrating from virt-manager to oVirt, as well as on installing ovirt-nodes
manually. I'd love to find this perfect world where I can just install the
ovirt-node RPMs on my already running hosts and begin to have them managed
by the oVirt engine, without any serious downtime.

Any suggestions or pointers to documentation would be great. Even comments
as to making this transition smoother or approaching it differently are
appreciated.


thanks,
-jj-