Re: [Users] Can I move local_cluster in all-in-one setup?

2013-01-13 Thread Mike Kolesnik
- Original Message -

> On Thu, Jan 10, 2013 at 5:10 PM, René Koch (ovido) wrote:

> > If not, please do a resync.
> 
> It seems it is not in sync, if I understand the "two
> arrows" symbol correctly, in:
> https://docs.google.com/file/d/0BwoPbcrMv8mvUFFVaVl1TTlVVVE/edit

> My network page is instead this
> https://docs.google.com/file/d/0BwoPbcrMv8mveERiMUlKY094TVk/edit

> The problem is that I'm not able to get it back in sync; I tried both
> with and without selecting the "verify" checkbox at the bottom
Hi Gianluca, 

Can you please send engine + vdsm logs from when you tried to check the "Sync" 
checkbox and run the "Setup networks" command on this host? 

It is out of sync because the engine network is sitting directly on top of the 
interface instead of the VLAN, but it should be fixed once the "Setup networks" 
command completes. 
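
For reference, a quick way to confirm on the host which device the bridge 
actually sits on (a sketch using the device names from this thread; adjust 
them to your setup): 

# the enslaved port should be the VLAN device (em3.65), not the bare NIC (em3)
brctl show ovirtmgmt
# -d prints VLAN details, so you can confirm the VLAN id carried by the device
ip -d link show em3.65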

Regards, 
Mike 

> it seems like a dog chasing its own tail (at least that's how we put it
> in Italy in situations like this... ;-)



Re: [Users] Can I move local_cluster in all-in-one setup?

2013-01-11 Thread Gianluca Cecchi
On Thu, Jan 10, 2013 at 5:10 PM, René Koch (ovido) wrote:

> If not, please do a resync.


It seems it is not in sync, if I understand the "two arrows" symbol
correctly, in:
https://docs.google.com/file/d/0BwoPbcrMv8mvUFFVaVl1TTlVVVE/edit

My network page is instead this
https://docs.google.com/file/d/0BwoPbcrMv8mveERiMUlKY094TVk/edit

The problem is that I'm not able to get it back in sync; I tried both with
and without selecting the "verify" checkbox at the bottom.
It seems like a dog chasing its own tail (at least that's how we put it in
Italy in situations like this... ;-)


Re: [Users] Can I move local_cluster in all-in-one setup?

2013-01-10 Thread Koch (ovido)
Can you have a look at the network setup dialog in the oVirt webadmin to
see whether the network is in sync?

If not, please do a resync.


You're right - the IP address should be on the ovirtmgmt interface, not on
em3.


Regards,
René


On Thu, 2013-01-10 at 00:41 +0100, Gianluca Cecchi wrote:
> 
> On Wed, Jan 9, 2013 at 5:22 PM, René Koch (ovido)  wrote:
> I hope this will help you with oVirt.
> Maybe you should cleanup your all-in-one setup and recreate it
> using the
> above steps.
> 
> 
> I think I substantially followed your steps.
> 
> 
> I tried another test.
> The host also comes with another adapter (em4), which is on vlan66.
> This is unconfigured in oVirt.
> Then I created a new vlan named vlan66 with VM as its target.
> Then I ran another virt-v2v of a vm named zensrv, which is on vlan 66,
> from qemu on CentOS 6.3 to oVirt.
> 
> 
> # time virt-v2v -o rhev -osd 10.4.4.59:/EXPORT --network vlan66 zensrv
> zensrv_002: 100% []D 0h02m22s
> virt-v2v: WARNING: /etc/fstab references unknown device /dev/vda2.
> This entry must be manually fixed after conversion.
> virt-v2v: WARNING: /etc/fstab references unknown device /dev/vda1.
> This entry must be manually fixed after conversion.
> virt-v2v: WARNING: /boot/grub/device.map references unknown
> device /dev/vda. This entry must be manually fixed after conversion.
> virt-v2v: zensrv configured with virtio drivers.
> 
> 
> real 3m16.051s
> user 0m58.953s
> sys 0m46.729s
> 
> 
> NOTE: actually the disk in oVirt after import is marked as VirtIO (as
> it was on the source) and boots without any problem
> 
> 
> Well, this vm is perfectly configured in its vlan and reachable as it
> was on its original host.
> 
> 
> After configuring this new vlan on host, this is the situation
> 
> 
> [g.cecchi@f18aio ~]$ ip addr list
> 1: lo:  mtu 16436 qdisc noqueue state UNKNOWN 
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> inet 127.0.0.1/8 scope host lo
> inet6 ::1/128 scope host 
>valid_lft forever preferred_lft forever
> 2: em1:  mtu 1500 qdisc mq state UP
> qlen 1000
> link/ether 00:1e:0b:21:b8:c4 brd ff:ff:ff:ff:ff:ff
> inet6 fe80::21e:bff:fe21:b8c4/64 scope link 
>valid_lft forever preferred_lft forever
> 3: em3:  mtu 1500 qdisc mq master
> ovirtmgmt state UP qlen 1000
> link/ether 00:1c:c4:ab:3a:dd brd ff:ff:ff:ff:ff:ff
> inet6 fe80::21c:c4ff:feab:3add/64 scope link 
>valid_lft forever preferred_lft forever
> 4: em2:  mtu 1500 qdisc mq state UP
> qlen 1000
> link/ether 00:1e:0b:21:b8:c6 brd ff:ff:ff:ff:ff:ff
> inet6 fe80::21e:bff:fe21:b8c6/64 scope link 
>valid_lft forever preferred_lft forever
> 5: em4:  mtu 1500 qdisc mq state UP
> qlen 1000
> link/ether 00:1c:c4:ab:3a:de brd ff:ff:ff:ff:ff:ff
> inet6 fe80::21c:c4ff:feab:3ade/64 scope link 
>valid_lft forever preferred_lft forever
> 6: ovirtmgmt:  mtu 1500 qdisc noqueue
> state UP 
> link/ether 00:1c:c4:ab:3a:dd brd ff:ff:ff:ff:ff:ff
> inet6 fe80::21c:c4ff:feab:3add/64 scope link 
>valid_lft forever preferred_lft forever
> 7: em3.65@em3:  mtu 1500 qdisc
> noqueue state UP 
> link/ether 00:1c:c4:ab:3a:dd brd ff:ff:ff:ff:ff:ff
> inet 10.4.4.59/24 brd 10.4.4.255 scope global em3.65
> inet6 fe80::21c:c4ff:feab:3add/64 scope link 
>valid_lft forever preferred_lft forever
> 10: ;vdsmdummy;:  mtu 1500 qdisc noop state DOWN 
> link/ether ea:e8:c9:57:87:fb brd ff:ff:ff:ff:ff:ff
> 11: bond0:  mtu 1500 qdisc noop state
> DOWN 
> link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
> 12: bond4:  mtu 1500 qdisc noop state
> DOWN 
> link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
> 14: vnet0:  mtu 1500 qdisc pfifo_fast
> master ovirtmgmt state UNKNOWN qlen 500
> link/ether fe:54:00:d3:8f:a3 brd ff:ff:ff:ff:ff:ff
> inet6 fe80::fc54:ff:fed3:8fa3/64 scope link 
>valid_lft forever preferred_lft forever
> 15: em4.66@em4:  mtu 1500 qdisc
> noqueue master vlan66 state UP 
> link/ether 00:1c:c4:ab:3a:de brd ff:ff:ff:ff:ff:ff
> inet6 fe80::21c:c4ff:feab:3ade/64 scope link 
>valid_lft forever preferred_lft forever
> 16: vlan66:  mtu 1500 qdisc noqueue
> state UP 
> link/ether 00:1c:c4:ab:3a:de brd ff:ff:ff:ff:ff:ff
> inet6 fe80::21c:c4ff:feab:3ade/64 scope link 
>valid_lft forever preferred_lft forever
> 17: vnet1:  mtu 1500 qdisc pfifo_fast
> master vlan66 state UNKNOWN qlen 500
> link/ether fe:54:00:43:d9:df brd ff:ff:ff:ff:ff:ff
> inet6 fe80::fc54:ff:fe43:d9df/64 scope link 
>valid_lft forever preferred_lft forever
> 
> 
> and, from a bridge point of view
> 
> 
> [g.cecchi@f18aio ~]$ sudo brctl show
> bridge name bridge id STP enabled interfaces
> ;vdsmdummy; 8000. no 
> ovirtmgmt 8000.001cc4ab3add no em3
> vnet0
> vlan66 8000.001cc4ab3ade no em4.66
> vnet1
> 
> 
> vnet0 is interface of c56cr that should be in vlan65
> vnet1 is interface of zensrv that is correctly on vlan66

Re: [Users] Can I move local_cluster in all-in-one setup?

2013-01-09 Thread Gianluca Cecchi
On Thu, Jan 10, 2013 at 12:41 AM, Gianluca Cecchi  wrote:

> Please note that while on ovirtmgmt bridge there is em3 as physical
> interface, on vlan66 there is em3... while I think it should be em3.65


that sentence should have read:
Please note that while on vlan66 bridge there is em4.66 as physical
interface, on ovirtmgmt there is em3... while I think it should be em3.65


Re: [Users] Can I move local_cluster in all-in-one setup?

2013-01-09 Thread Gianluca Cecchi
On Wed, Jan 9, 2013 at 5:22 PM, René Koch (ovido)  wrote:

> I hope this will help you with oVirt.
> Maybe you should cleanup your all-in-one setup and recreate it using the
> above steps.
>

I think I substantially followed your steps.

I tried another test.
The host also comes with another adapter (em4), which is on vlan66.
This is unconfigured in oVirt.
Then I created a new vlan named vlan66 with VM as its target.
Then I ran another virt-v2v of a vm named zensrv, which is on vlan 66, from
qemu on CentOS 6.3 to oVirt.

# time virt-v2v -o rhev -osd 10.4.4.59:/EXPORT --network vlan66 zensrv
zensrv_002: 100% []D 0h02m22s
virt-v2v: WARNING: /etc/fstab references unknown device /dev/vda2. This
entry must be manually fixed after conversion.
virt-v2v: WARNING: /etc/fstab references unknown device /dev/vda1. This
entry must be manually fixed after conversion.
virt-v2v: WARNING: /boot/grub/device.map references unknown device
/dev/vda. This entry must be manually fixed after conversion.
virt-v2v: zensrv configured with virtio drivers.

real 3m16.051s
user 0m58.953s
sys 0m46.729s

NOTE: actually the disk in oVirt after import is marked as VirtIO (as it
was on the source) and boots without any problem
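
For the fstab and device.map warnings above, the usual post-conversion fix
is to point the entries at filesystem UUIDs instead of raw device names,
since UUIDs survive the device renaming. A rough sketch, run inside the
imported guest (the file contents mentioned are illustrative):

blkid                      # note the UUID= of each filesystem
vi /etc/fstab              # replace the /dev/vda1 and /dev/vda2 entries
                           # with matching UUID=... entries
vi /boot/grub/device.map   # point the (hd0) entry at the actual boot disk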

Well, this vm is perfectly configured in its vlan and reachable as it was
on its original host.

After configuring this new vlan on host, this is the situation

[g.cecchi@f18aio ~]$ ip addr list
1: lo:  mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
   valid_lft forever preferred_lft forever
2: em1:  mtu 1500 qdisc mq state UP qlen
1000
link/ether 00:1e:0b:21:b8:c4 brd ff:ff:ff:ff:ff:ff
inet6 fe80::21e:bff:fe21:b8c4/64 scope link
   valid_lft forever preferred_lft forever
3: em3:  mtu 1500 qdisc mq master
ovirtmgmt state UP qlen 1000
link/ether 00:1c:c4:ab:3a:dd brd ff:ff:ff:ff:ff:ff
inet6 fe80::21c:c4ff:feab:3add/64 scope link
   valid_lft forever preferred_lft forever
4: em2:  mtu 1500 qdisc mq state UP qlen
1000
link/ether 00:1e:0b:21:b8:c6 brd ff:ff:ff:ff:ff:ff
inet6 fe80::21e:bff:fe21:b8c6/64 scope link
   valid_lft forever preferred_lft forever
5: em4:  mtu 1500 qdisc mq state UP qlen
1000
link/ether 00:1c:c4:ab:3a:de brd ff:ff:ff:ff:ff:ff
inet6 fe80::21c:c4ff:feab:3ade/64 scope link
   valid_lft forever preferred_lft forever
6: ovirtmgmt:  mtu 1500 qdisc noqueue
state UP
link/ether 00:1c:c4:ab:3a:dd brd ff:ff:ff:ff:ff:ff
inet6 fe80::21c:c4ff:feab:3add/64 scope link
   valid_lft forever preferred_lft forever
7: em3.65@em3:  mtu 1500 qdisc noqueue
state UP
link/ether 00:1c:c4:ab:3a:dd brd ff:ff:ff:ff:ff:ff
inet 10.4.4.59/24 brd 10.4.4.255 scope global em3.65
inet6 fe80::21c:c4ff:feab:3add/64 scope link
   valid_lft forever preferred_lft forever
10: ;vdsmdummy;:  mtu 1500 qdisc noop state DOWN
link/ether ea:e8:c9:57:87:fb brd ff:ff:ff:ff:ff:ff
11: bond0:  mtu 1500 qdisc noop state DOWN
link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
12: bond4:  mtu 1500 qdisc noop state DOWN
link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
14: vnet0:  mtu 1500 qdisc pfifo_fast
master ovirtmgmt state UNKNOWN qlen 500
link/ether fe:54:00:d3:8f:a3 brd ff:ff:ff:ff:ff:ff
inet6 fe80::fc54:ff:fed3:8fa3/64 scope link
   valid_lft forever preferred_lft forever
15: em4.66@em4:  mtu 1500 qdisc noqueue
master vlan66 state UP
link/ether 00:1c:c4:ab:3a:de brd ff:ff:ff:ff:ff:ff
inet6 fe80::21c:c4ff:feab:3ade/64 scope link
   valid_lft forever preferred_lft forever
16: vlan66:  mtu 1500 qdisc noqueue state
UP
link/ether 00:1c:c4:ab:3a:de brd ff:ff:ff:ff:ff:ff
inet6 fe80::21c:c4ff:feab:3ade/64 scope link
   valid_lft forever preferred_lft forever
17: vnet1:  mtu 1500 qdisc pfifo_fast
master vlan66 state UNKNOWN qlen 500
link/ether fe:54:00:43:d9:df brd ff:ff:ff:ff:ff:ff
inet6 fe80::fc54:ff:fe43:d9df/64 scope link
   valid_lft forever preferred_lft forever

and, from a bridge point of view

[g.cecchi@f18aio ~]$ sudo brctl show
bridge name bridge id STP enabled interfaces
;vdsmdummy; 8000. no
ovirtmgmt 8000.001cc4ab3add no em3
vnet0
vlan66 8000.001cc4ab3ade no em4.66
vnet1

vnet0 is interface of c56cr that should be in vlan65
vnet1 is interface of zensrv that is correctly on vlan66

Please note that while on ovirtmgmt bridge there is em3 as physical
interface, on vlan66 there is em3... while I think it should be em3.65

Also, I noticed in similar Qemu+KVM configurations on CentOS that the
IP (10.4.4.59 in my case) should be on the bridge, if present.
So in my situation it should be on ovirtmgmt, while it is on em3.65.

I could tweak the configuration files in /etc/sysconfig/network-scripts.
They currently look like this:

ovirtmgmt/vlan65
[g.cecchi@f18aio network-scripts]$ cat ifcfg-em3.65
DEVICE=em3.65
ONBOOT=yes
VLAN=yes
NM_CONTROL
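
The quoted file is cut off by the archive; for the IP to land on the bridge
as discussed above, the pair of files would look roughly like this (a
sketch based on the addresses in this thread; normally the "Setup networks"
flow writes these for you):

[g.cecchi@f18aio network-scripts]$ cat ifcfg-em3.65
DEVICE=em3.65
VLAN=yes
ONBOOT=yes
BRIDGE=ovirtmgmt
NM_CONTROLLED=no

[g.cecchi@f18aio network-scripts]$ cat ifcfg-ovirtmgmt
DEVICE=ovirtmgmt
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none
IPADDR=10.4.4.59
NETMASK=255.255.255.0
NM_CONTROLLED=no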

Re: [Users] Can I move local_cluster in all-in-one setup?

2013-01-09 Thread Koch (ovido)
Hi Gianluca,

I just successfully installed RHEV 3.1 all-in-one with VLAN tagging on
rhevm (ovirtmgmt) network - steps should be the same on oVirt:

1. Configure VLAN on host (ifcfg-eth0.<vlanid>; see the sketch after this list)
2. Install oVirt environment with all-in-one setup
3. Login to webadmin
4. Set local storage to maintenance
5. Force remove datacenter
6. Create new datacenter (new name, type: local)
7. Create new cluster (new name)
8. Edit ovirtmgmt network on new datacenter (set VLAN)
9. Bring host to maintenance
10. Edit host
10.1 Change datacenter to new datacenter
10.2 Change cluster to new cluster
11. Create new local storage
12. Verify that datacenter and host are online and that network is in
sync including VLAN
13. Attach ISO domain (I used the one created with rhevm-setup)
14. Create vm
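
For step 1, a minimal VLAN ifcfg file would look roughly like this (a
sketch; the NIC name, VLAN id and addressing are placeholders, not values
from a real setup):

# /etc/sysconfig/network-scripts/ifcfg-eth0.65
DEVICE=eth0.65
VLAN=yes
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.1.10
NETMASK=255.255.255.0
NM_CONTROLLED=no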

I hope this will help you with oVirt.
Maybe you should cleanup your all-in-one setup and recreate it using the
above steps.


-- 
Best Regards

René Koch
Senior Solution Architect


ovido gmbh - "Das Linux Systemhaus"
Brünner Straße 163, A-1210 Wien

Phone:   +43 720 / 530 670
Mobile:  +43 660 / 512 21 31
E-Mail:  r.k...@ovido.at



On Wed, 2013-01-09 at 00:47 +0100, Gianluca Cecchi wrote:
> On Tue, Jan 8, 2013 at 12:38 PM, René Koch (ovido)  wrote:
> Hi Gianluca,
> 
> You could try the following:
> 
> 1) Create new datacenter and cluster
> 2) Set VLAN for ovirtmgmt network in new DC
> 3) Delete localhost from Default cluster
> 4) Configure VLAN tagging on host manually
> 5) Join localhost to new cluster using oVirt Admin Portal
> 
> Before joining localhost to the new cluster, comment out
> rebooting in
> vds_bootstrap_complete.py:
> 
> $ vi /usr/share/vdsm-bootstrap/vds_bootstrap_complete.py
> #deployUtil.reboot()
> 
> 
> Regards,
> René
> 
> 
> Hello,
> I worked on this and
> 1) ok
> 2) ok
> 3) ko
> Error while executing action: 
> Cannot remove Host, as it contains a local Storage Domain. 
> Please activate the Host and remove the Data Center first.
> - If Host cannot be activated, use the Force-Remove option on the Data
> Center object 
> (select the Data Center and right click on it with the mouse).
> - Please note that this action is destructive.
> 
> 
> ---> I put the host in maintenance but got the same result, so 
> ---> force remove datacenter local_datacenter
> got a warning about possible storage pool domain problems (in my case it
> was empty, so I then removed the directory and recreated it empty)
> 
> 
> 4) it was already ok with classic ifcfg-em3.65 file
> in /etc/sysconfig/network-scripts
> 5) ok
> 6) create a new data domain of type local_host --> ok
> 7) activate iso domain and upload some isos --> ok
> 
> 
> 8) test to create win 7 32 bit vm --> ok
> 
> 
> But when I try run once I get:
> 
> 
>  Error: 
> 
> 
> w7test:
> Cannot run VM. There are no available running Hosts in the Host
> Cluster.
> 
> 
> I attach images for 
> cluster view
> https://docs.google.com/open?id=0BwoPbcrMv8mvMkFoUExLOVM5c1U
> 
> 
> 
> storage view
> https://docs.google.com/open?id=0BwoPbcrMv8mvZFBwLXlhX0hISHc
> 
> 
> 
> network view
> https://docs.google.com/open?id=0BwoPbcrMv8mvV2tTQkRBVVgzT28
> 
> 
> 
> vm view
> https://docs.google.com/open?id=0BwoPbcrMv8mva0hjR18wa2JTN1E
> 
> 
> 
> It seems all ok to me...
> engine.log, cut around the vm creation, is here:
> https://docs.google.com/open?id=0BwoPbcrMv8mvWF96R2VYVUJoNDQ
> 
> 
> 
> Instead in vdsm.log I don't see any error...
> 
> 
> Thanks for your help debugging this
> 
> 
> Gianluca
> 
> 



Re: [Users] Can I move local_cluster in all-in-one setup?

2013-01-08 Thread Gianluca Cecchi
On Tue, Jan 8, 2013 at 12:38 PM, René Koch (ovido)  wrote:

> Hi Gianluca,
>
> You could try the following:
>
> 1) Create new datacenter and cluster
> 2) Set VLAN for ovirtmgmt network in new DC
> 3) Delete localhost from Default cluster
> 4) Configure VLAN tagging on host manually
> 5) Join localhost to new cluster using oVirt Admin Portal
>
> Before joining localhost to the new cluster, comment out rebooting in
> vds_bootstrap_complete.py:
>
> $ vi /usr/share/vdsm-bootstrap/vds_bootstrap_complete.py
> #deployUtil.reboot()
>
>
> Regards,
> René
>

Hello,
I worked on this and
1) ok
2) ok
3) ko
Error while executing action:
Cannot remove Host, as it contains a local Storage Domain.
Please activate the Host and remove the Data Center first.
- If Host cannot be activated, use the Force-Remove option on the Data
Center object
(select the Data Center and right click on it with the mouse).
- Please note that this action is destructive.

---> I put the host in maintenance but got the same result, so
---> force remove datacenter local_datacenter
got a warning about possible storage pool domain problems (in my case it was
empty, so I then removed the directory and recreated it empty)

4) it was already ok with classic ifcfg-em3.65 file in
/etc/sysconfig/network-scripts
5) ok
6) create a new data domain of type local_host --> ok
7) activate iso domain and upload some isos --> ok

8) test to create win 7 32 bit vm --> ok

But when I try run once I get:

 Error:

w7test:
Cannot run VM. There are no available running Hosts in the Host Cluster.

I attach images for
cluster view
https://docs.google.com/open?id=0BwoPbcrMv8mvMkFoUExLOVM5c1U

storage view
https://docs.google.com/open?id=0BwoPbcrMv8mvZFBwLXlhX0hISHc

network view
https://docs.google.com/open?id=0BwoPbcrMv8mvV2tTQkRBVVgzT28

vm view
https://docs.google.com/open?id=0BwoPbcrMv8mva0hjR18wa2JTN1E

It seems all ok to me...
engine.log, cut around the vm creation, is here:
https://docs.google.com/open?id=0BwoPbcrMv8mvWF96R2VYVUJoNDQ

In vdsm.log, on the other hand, I don't see any error...
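
For reference, the usual quick checks on the host at this point would be
something like the following (oVirt 3.x era commands; a sketch, not output
from this setup):

systemctl status vdsmd                      # is vdsm actually running?
vdsClient -s 0 getVdsCaps | head            # does the engine-facing API answer?
tail -n 100 /var/log/vdsm/vdsm.log          # recent vdsm activity and errors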

Thanks for your help debugging this

Gianluca


Re: [Users] Can I move local_cluster in all-in-one setup?

2013-01-08 Thread Koch (ovido)
Hi Gianluca,

You could try the following:

- Create new datacenter and cluster
- Set VLAN for ovirtmgmt network in new DC
- Delete localhost from Default cluster
- Configure VLAN tagging on host manually
- Join localhost to new cluster using oVirt Admin Portal

Before joining localhost to the new cluster, comment out rebooting in
vds_bootstrap_complete.py:

$ vi /usr/share/vdsm-bootstrap/vds_bootstrap_complete.py
#deployUtil.reboot()


Regards,
René



On Tue, 2013-01-08 at 12:21 +0100, Gianluca Cecchi wrote:
> 
> On Tue, Jan 8, 2013 at 10:45 AM, Itamar Heim wrote:
> 
> 
> 
> you can't move a cluster while it is associated to a DC; you need
> to delete the DC first.
> the AIO is intended to ramp you up. if you know what you are
> doing, you can change anything you want later on.
> 
> as for making the rhevm network vlan'd at setup time, adding a
> few installer/network folks.
> 
> 
> You say that I have to delete the DC first, but I think I can't delete
> a DC if it contains a cluster...
> 
> 
> If I understood correctly what was written in another thread
> (due to the problems related to archiving I don't have a link; it was
> started on 27/12/12 and its subject was
> [Users] tagged vs untagged and sharing the interface)
> with the goal of editing ovirtmgmt for a pre-existing DC, named DC1,
> containing a cluster CL1, a possible workflow was proposed (not
> tested, the OP said):
>  
> 1) create temporary datacenter DC2
> 2) put CL1 in DC2
> 3) edit ovirtmgmt in DC1
> 4) put CL1 in DC1 again
> 5) delete DC2
> 
> 
> If this is correct, how can I do it, in terms of operations related to
> hosts, storage, and clusters?
> I can't find a way because I don't understand how to complete step 2)
> Otherwise, if 2) is not possible, what can I do?
>  
> 
> Thanks and sorry if I was not clear before...
> 
> 
> Gianluca


Re: [Users] Can I move local_cluster in all-in-one setup?

2013-01-08 Thread Gianluca Cecchi
On Tue, Jan 8, 2013 at 10:45 AM, Itamar Heim wrote:

>
>>
> you can't move a cluster while it is associated to a DC; you need to delete
> the DC first.
> the AIO is intended to ramp you up. if you know what you are doing, you can
> change anything you want later on.
>
> as for making the rhevm network vlan'd at setup time, adding a few
> installer/network folks.
>

You say that I have to delete the DC first, but I think I can't delete a DC
if it contains a cluster...

If I understood correctly what was written in another thread
(due to the problems related to archiving I don't have a link; it was
started on 27/12/12 and its subject was
[Users] tagged vs untagged and sharing the interface)
with the goal of editing ovirtmgmt for a pre-existing DC, named DC1,
containing a cluster CL1, a possible workflow was proposed (not tested,
the OP said):

1) create temporary datacenter DC2
2) put CL1 in DC2
3) edit ovirtmgmt in DC1
4) put CL1 in DC1 again
5) delete DC2

If this is correct, how can I do it, in terms of operations related to
hosts, storage, and clusters?
I can't find a way because I don't understand how to complete step 2)
Otherwise, if 2) is not possible, what can I do?

Thanks and sorry if I was not clear before...

Gianluca


Re: [Users] Can I move local_cluster in all-in-one setup?

2013-01-08 Thread Itamar Heim

On 01/07/2013 06:22 PM, Gianluca Cecchi wrote:

Hello,
working on f18 and ovirt nightly as of
ovirt-engine-3.2.0-1.20130106.git0cb01e1.fc18.noarch

Can I configure, out of the box, an all-in-one setup with vlan tagging for
the to-be-created ovirtmgmt lan?
In that case, what should the host's network config be before
running engine-setup?
The classic eth0 + eth0.vlanid, or something else?

In case the answer is no, can I configure it after creation?

I saw in another thread that in general I can create a temporary DC and
move the clusters there so that I can then edit the ovirtmgmt network
making it vlan tagged.

Is this possible in all-in-one too?

I created another Datacenter named tempdc and tried to move my
local_cluster there, but I don't see how I can.

When I edit local_cluster, the option to change the DC is greyed out.


you can't move a cluster while it is associated to a DC; you need to delete
the DC first.
the AIO is intended to ramp you up. if you know what you are doing, you
can change anything you want later on.


as for making the rhevm network vlan'd at setup time, adding a few 
installer/network folks.



[Users] Can I move local_cluster in all-in-one setup?

2013-01-07 Thread Gianluca Cecchi
Hello,
working on f18 and ovirt nightly as of
ovirt-engine-3.2.0-1.20130106.git0cb01e1.fc18.noarch

Can I configure, out of the box, an all-in-one setup with vlan tagging for
the to-be-created ovirtmgmt lan?
In that case, what should the host's network config be before running
engine-setup?
The classic eth0 + eth0.vlanid, or something else?

In case the answer is no, can I configure it after creation?

I saw in another thread that in general I can create a temporary DC and
move the clusters there so that I can then edit the ovirtmgmt network
making it vlan tagged.

Is this possible in all-in-one too?

I created another Datacenter named tempdc and tried to move my
local_cluster there, but I don't see how I can.

When I edit local_cluster, the option to change the DC is greyed out.

Thanks,
Gianluca