[Users] Question about network card order

2012-10-03 Thread Kevin Maziere Aubry
Hello

My first email on the mailing list; I hope it is the first of a long series.

This email concerns a network card ordering issue that I would like to report
and get your advice on.

I've made a template that contains two network cards, each one bridged on a
different VLAN network.
When I create a VM from this template, the network card MAC address
assignment is random, so by default my Fedora guest assigns eth0 to the
smallest MAC address, and so on for eth1/eth2.

There is no way to define the network card order in the template, so sometimes
the smallest MAC address is on eth0 and sometimes on eth1 (in fact more often
on eth1); sometimes my VM works, and sometimes I have to destroy and recreate
the network interfaces.

Is there a workaround or anything else?

Thanks

Kévin

-- 

Kevin Mazière
Responsable Infrastructure
Alter Way – Hosting
 1 rue Royal - 227 Bureaux de la Colline
92213 Saint-Cloud Cedex
Tél : +33 (0)1 41 16 38 41
Mob : +33 (0)7 62 55 57 05
 http://www.alterway.fr
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] Migrating ovirt-engine to new server

2012-10-03 Thread Neil
Hi guys,

Please could someone assist me.

I'm trying to migrate my ovirt-engine (3.1, but a very early release) to
the latest 3.1 on a new machine (both CentOS 6.3, dreyou repo). After the
initial engine-setup I can log into webadmin without a problem.

I then imported my DB from the old server (psql -U postgres -d engine
-w < /root/dump.sql) and started ovirt-engine again, but now when I
try to browse to the webadmin I don't get anything at all; no web
page displays. I'm guessing that there are lots of additional
steps that I'm missing, so perhaps this is normal. I haven't attached
any logs etc. because I'm fairly certain that I'm missing some vital
steps before I can actually log into my system using the new DB.
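For anyone following along, the usual shape of such a migration is sketched below. This is an illustration, not an official procedure: the database name (engine), the postgres user and the dump path come from the message above, and the exact service name can differ between releases, so verify each step before running it.

```shell
# On the old server: dump the engine database (names/paths assumed).
pg_dump -U postgres -w engine > /root/dump.sql

# On the new server: stop the engine, swap in the old database, restart.
service ovirt-engine stop
dropdb -U postgres engine          # drop the freshly created empty DB
createdb -U postgres engine
psql -U postgres -d engine -w < /root/dump.sql
service ovirt-engine start
```

If the webadmin still serves nothing afterwards, /var/log/ovirt-engine/engine.log is the first place to look.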

I've changed the hostname of the machine -- the previous name was
node02.blabla.com and now it's backup.blabla.com. Does this matter? My
guess is the certificates won't work with the DB now. Is there any way
to get around this, or should I rather just use the same hostname on
the new server?

I also haven't copied any physical files or configs across yet, which
files/folders do I need?

I'm using an FC SAN, so I'm assuming I won't need to export my VMs; it
should just be a matter of detaching my main storage domain and then
re-attaching it on the new server? When I try to detach from
the old server it says it can't export while there are Templates/VMs on
the system. Must I hit the remove button on each VM on the old system
in order to detach and then attach on the new system? (I presume it
will only remove the VMs from the old system and not actually delete
the VM images?)

I'm using no templates, so I'm assuming these don't need to be
exported/imported?

Sorry for all the questions, any help is greatly appreciated!

Regards.

Neil Wilson.


[Users] Setup Networks Screencast

2012-10-03 Thread Muli Salem
Hi all,

You are welcome to watch the newest screencast on the oVirt channel, about the 
3.1 Setup Networks feature:

http://www.youtube.com/watch?v=3EMrqQR-f3w&feature=plcp

Regards,
Muli


Re: [Users] nested KVM

2012-10-03 Thread Nathanaël Blanchet

Hi,

Thanks for your answer. Everything is all right now; I've managed to 
resolve the issue thanks to your suggestion.

This is what I've done:

1. I recall that my processor is a Westmere one
2. first on the hypervisor :
   [root@node ~]# qemu-kvm -version
   QEMU emulator version 1.0.1 (qemu-kvm-devel), Copyright (c)
   2003-2008 Fabrice Bellard

   this qemu-kvm release is the one from the F17 update repository

3. still on the hypervisor
   [root@node ~]# qemu-kvm -cpu ?
   x86   Opteron_G3
   x86   Opteron_G2
   x86   Opteron_G1
   x86  SandyBridge
   x86  Nehalem
   x86   Penryn
   x86   Conroe
   x86   [n270]
   x86 [athlon]
   x86   [pentium3]
   x86   [pentium2]
   x86[pentium]
   x86[486]
   x86[coreduo]
   x86  [kvm32]
   x86 [qemu32]
   x86  [kvm64]
   x86   [core2duo]
   x86 [phenom]
   x86 [qemu64]



   So we can see there that neither Westmere nor Opteron_G4 is
   supported by qemu-kvm 1.0.1. The consequence is this message in the
   hypervisor logs: Preferred CPU model Westmere not allowed by
   hypervisor; closest supported model will be used

4. when doing ps aux | grep enable-kvm, we see the -cpu flag is
   kvm64, a generic KVM processor. But none of the items proposed in
   the CPU Name cluster field matches kvm64, which is why it is
   impossible to run a nested KVM there. A kvm64 item would
   successfully launch the VM, but it will never be implemented in
   the oVirt webadmin because nested KVM is only for testing purposes
   (and VM performance is very poor!)

5. So what to do when you have a Westmere processor and F17 and want
   to successfully run nested KVM: upgrade to qemu 1.2 via the
   virt-preview repository
   (http://fedoraproject.org/wiki/Virtualization_Preview_Repository).
   Running qemu-kvm -cpu ? now gives:
   [root@node ~]# qemu-kvm -cpu ?
   x86   Opteron_G4
   x86   Opteron_G3
   x86   Opteron_G2
   x86   Opteron_G1
   x86  SandyBridge
   x86 Westmere
   x86  Nehalem
   x86   Penryn
   x86   Conroe
   x86   [n270]
   x86 [athlon]
   x86   [pentium3]
   x86   [pentium2]
   x86[pentium]
   x86[486]
   x86[coreduo]
   x86  [kvm32]
   x86 [qemu32]
   x86  [kvm64]
   x86   [core2duo]
   x86 [phenom]
   x86 [qemu64]

   Now running the hypervisor with the Westmere CPU flag is
   successful and the kvm64 flag is not used anymore. Then choosing
   the Westmere family in the CPU family cluster field will launch
   the nested VM with the right Westmere flag!

Obviously, qemu-kvm 1.2 will be included by default in F18 and all of 
this will become obsolete, but until then, upgrading the qemu version can 
be very useful for running a test environment with many virtual 
hypervisors when the hardware is a Westmere-family one.
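To summarize the host-side steps as commands (a sketch under two assumptions: an Intel host using the kvm_intel module, and the virt-preview repo file still living at the URL published on the wiki page linked above):

```shell
# 1. Check/enable nested virtualization in the kvm_intel module
#    (prints Y or N; the modprobe.d entry makes it persistent).
cat /sys/module/kvm_intel/parameters/nested
echo "options kvm_intel nested=1" > /etc/modprobe.d/kvm-nested.conf

# 2. Enable the Fedora virt-preview repository (URL assumed, verify first)
#    and upgrade qemu, then confirm Westmere is now listed.
wget -O /etc/yum.repos.d/fedora-virt-preview.repo \
    http://fedorapeople.org/groups/virt/virt-preview/fedora-virt-preview.repo
yum update -y qemu-kvm
qemu-kvm -cpu '?' | grep -i westmere
```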


On 26/09/2012 04:06, Mark Wu wrote:

On 09/26/2012 12:58 AM, Nathanaël Blanchet wrote:

Hello,

I've tried many times to run a node as a guest in oVirt following 
http://wiki.ovirt.org/wiki/Vdsm_Developers#Running_Node_as_guest_-_Nested_KVM 
. The result is that I'm able to register such a host in the engine, but 
it is impossible to start any VM on it. If I boot from an ISO, I can 
see the first prompt and move between the items, but once I begin 
the installation I get a blank screen and the VM goes into pause 
mode in virt-manager. Then I have no choice but to reboot the 
hypervisor guest, because it is impossible to resume. When I check the 
logs on the real host, I find this:
warning : x86Decode:1306 : Preferred CPU model Westmere not allowed by 
hypervisor; closest supported model will be used
Sep 25 17:11:59 khamsin libvirtd[23053]: 2012-09-25 
15:11:59.384+: 23054: warning : x86Decode:1306 : Preferred CPU 
model Westmere not allowed by hypervisor; closest supported model 
will be used
Sep 25 17:12:19 khamsin libvirtd[23053]: 2012-09-25 
15:12:19.150+: 23055: warning : x86Decode:1306 : Preferred CPU 
model Westmere not allowed by hypervisor; closest supported model 
will be used
Sep 25 17:45:48 khamsin libvirtd[23053]: 2012-09-25 
15:45:48.342+: 23058: warning : x86Decode:1306 : Preferred CPU 
model Westmere not allowed by hypervisor; closest supported model 
will be used
Sep 25 18:07:05 khamsin libvirtd[23053]: 2012-09-25 
16:07:05.834+: 23058: warning : x86Decode:1306 : Preferred CPU 
model Nehalem not allowed by hypervisor; closest supported model will 
be used
Sep 25 18:44:47 khamsin libvirtd[23053]: 2012-09-25 
16:44:47.340+: 23057: warning : x86Decode:1306 : Preferred CPU 
model Penryn not allowed by hypervisor; closest supported model will 
be used


As you can see, I have tried many CPU types for launching this guest 
hypervisor, but none of them is accepted by the hypervisor. Plus, 
I've 

Re: [Users] Question about network card order

2012-10-03 Thread Igor Lvovsky


- Original Message -
 From: Kevin Maziere Aubry kevin.mazi...@alterway.fr
 To: users@ovirt.org
 Sent: Wednesday, October 3, 2012 9:58:44 AM
 Subject: [Users] Question about network card order
 
 
 Hello
 
 
 My first email on the mailing, I hope the first one of a longue
 serie.
 

Welcome to the community :)

 
 The email concerns a Network Card order issue for which I like to
 report a have your advises.
 
 
 I've made a template which contains 2 network cards, Each one bridge
 on different vlan network.
 When I create a VM from this template, the network card mac address
 assignation is random... so that by default my fedora assign eth0 to
 the smallest mac address, and so on for eth1/eth2 ...
 
 
 But no way to define network card order in template, so that sometime
 the smallest mac address is on eth0, sometime on eth1 (and in fact
 more often on eth1), sometime my VM works, sometime I have to
 destroy and recreate network interfaces.
 
 

Unfortunately it's a known issue that is related mostly to the guest kernel 
rather than to RHEV-M.
This is not related to the order of NICs during VM start, because RHEV-M 
keeps the MAC assignment, and even the PCI address assignment, the same 
over VM reboots.

The real reason is hidden somewhere deep in kernel/udev behaviour.
We need to understand why 2 NICs with the same MAC addresses and the same 
PCI addresses may get different names (eth0/eth1) over machine reboots.

Michael, any ideas?

 Is there a workaround or anything else ?

 
No, I am not aware of any workaround


Regards,
Igor

 


Re: [Users] Question about network card order

2012-10-03 Thread Kevin Maziere Aubry
Thanks for your reply.



Re: [Users] Question about network card order

2012-10-03 Thread Antoni Segura Puimedon


  Is there a workaround or anything else ?
  
 
 No, I am not aware of any workaround

Wouldn't it be possible to write some matching rules in
/etc/udev/rules.d/70-persistent-net.rules

for your template, so you would have it consistent? Mind that you should
not rename into the kernel eth* namespace.
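As a concrete sketch of this suggestion (the MAC addresses and the lan0/lan1 names below are made up; real rules would use the MACs oVirt shows for the VM, and the names deliberately stay out of the kernel's ethN namespace):

```shell
# Build a persistent-net rules file that pins interface names to MACs.
rules=$(mktemp)
cat > "$rules" <<'EOF'
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:1a:4a:00:00:01", NAME="lan0"
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:1a:4a:00:00:02", NAME="lan1"
EOF
# Install the file as /etc/udev/rules.d/70-persistent-net.rules in the guest.
grep -c 'NAME=' "$rules"   # -> 2 (one rule per NIC)
```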

Best,

Toni

 
 


Re: [Users] Question about network card order

2012-10-03 Thread Simon Grinberg
- Original Message -

 From: Kevin Maziere Aubry kevin.mazi...@alterway.fr
 To: Antoni Segura Puimedon asegu...@redhat.com
 Cc: Michael Tsirkin mtsir...@redhat.com, Dor Laor
 dl...@redhat.com, users@ovirt.org
 Sent: Wednesday, October 3, 2012 3:02:47 PM
 Subject: Re: [Users] Question about network card order

 The MAC addresses are unknown until the creation of the VM, so udev
 won't help here :(

But after the creation of the VM, and before the first run, you have the 
option to edit/view those MAC addresses. 
Give them whatever values you want before the first run. 
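This edit-before-first-run step can also be done over the oVirt 3.1 REST API rather than in the webadmin. A rough sketch; the engine host, the credentials and both UUIDs are placeholders you would look up first (GET /api/vms, then GET /api/vms/{vm}/nics), and the exact payload accepted can vary by version:

```shell
# Pin a NIC's MAC address before the VM's first run (all IDs are placeholders).
curl -k -u admin@internal:password -X PUT \
     -H 'Content-Type: application/xml' \
     -d '<nic><mac address="00:1a:4a:00:00:01"/></nic>' \
     'https://engine.example.com:8443/api/vms/VM_UUID/nics/NIC_UUID'
```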



Re: [Users] Question about network card order

2012-10-03 Thread Itamar Heim

On 10/03/2012 02:51 PM, Antoni Segura Puimedon wrote:
...



Wouldn't it be possible to write some matching rules in:
 /etc/udev/rules.d/70-persistent-net.rules


Maybe leverage cloud-init (or your own script) and a payload as a 
workaround for now.

Maybe it is worth solving this via cloud-init/payload as a built-in solution.
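As a sketch of the cloud-init idea (assuming a guest image with a recent cloud-init that supports write_files, and some channel for delivering the user-data payload, which stock oVirt 3.1 does not provide out of the box; the MACs are placeholders):

```yaml
#cloud-config
# Write udev rules on first boot so NIC names follow the assigned MACs.
write_files:
  - path: /etc/udev/rules.d/70-persistent-net.rules
    content: |
      SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:1a:4a:00:00:01", NAME="lan0"
      SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:1a:4a:00:00:02", NAME="lan1"
```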





Re: [Users] fw: Migrating ovirt-engine to new server

2012-10-03 Thread Itamar Heim

On 10/03/2012 03:28 PM, Neil wrote:

Sorry to repost (kind of), but I'm really battling here and I need to
get the VMs up and running again.

I've given up on the idea of migrating the database across due to all
sorts of problems encountered, and I've instead done a completely new,
fresh ovirt-engine install with one host added and running.

How do I go about detaching my storage datacentre from my old
ovirt-engine and re-attaching it to my new ovirt-engine running on my
new server? I'm running FC with multiple LUNs, all assigned in one
datacentre called linux, and I can't detach it because it says
Cannot detach Storage Domain while VMs/Templates reside on it. My
entire datacentre is in maintenance at the moment; it consists of
only 3 VMs (4 TB in total), and my host only has a 16 GB SSD in it, so
exporting to a local NFS mount is not an option.

Can I safely remove the VMs (as instructed above) by going to VMs
and clicking remove, and will I then be able to detach the datacentre
and re-attach it to the new ovirt-engine, bearing in mind that
I'm running a fresh install of ovirt-engine and I haven't copied the
database across? Is it a matter of re-attaching the LUNs that are
currently assigned? I can see the LUNs when I try to add a new
FC storage domain, however the LUNs currently attached to the old
ovirt-engine are greyed out and can't be selected.


Do you need to keep the VMs, or just move the LUNs to create a new domain?
If you just want to create a new one, you just need to clear the LUNs 
(dd over them) so the engine will let you use them (or remove them from 
the first engine, which will format them for the same end goal).
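For clarity, "dd over them" means wiping the start of each LUN so the engine stops recognizing the old storage domain on it. A destructive sketch; the device path is a placeholder, and this only applies when the data on the LUN is expendable:

```shell
# DESTRUCTIVE: zero the first 100 MiB of the LUN, which removes the old
# storage-domain metadata (device path is a placeholder).
dd if=/dev/zero of=/dev/mapper/LUN_WWID bs=1M count=100 oflag=direct
```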




Any assistance is greatly appreciated.

Thank you.

Regards.

Neil Wilson.


Re: [Users] fw: Migrating ovirt-engine to new server

2012-10-03 Thread Neil
On Wed, Oct 3, 2012 at 4:06 PM, Itamar Heim ih...@redhat.com wrote:
 On 10/03/2012 04:04 PM, Neil wrote:

 Thanks for coming back to me.

 On Wed, Oct 3, 2012 at 4:00 PM, Itamar Heim ih...@redhat.com wrote:

 do you need to keep the VMs, or just move the LUNs to create a new one?
 if you just want to create a new one, you just need to clear the LUNs (DD
 over them) so engine will let you use them (or remove them from first
 engine
 which will format them for same end goal.


 I need to keep the VM's unfortunately.
 Logically speaking all I need to do is detach the main data domain
 from one ovirt-engine and re-attach it to the new ovirt-engine.

 sadly, not that easy yet (though just discussed today the need to push this
 feature).

 easiest would be to export them to an nfs export domain, re-purpose the
 LUNs, and import to the new system.

 if not feasible, need to hack a bit probably.

Oh crumbs! I thought that was wishful thinking though :)

Exporting the VMs to NFS will take too long due to the total size
being 4 TB, and the VMs are mail, proxy and PDC servers, so getting
that much downtime won't be possible. Is attempting the upgrade
path (http://wiki.ovirt.org/wiki/OVirt_3.0_to_3.1_upgrade) again my
only option then?
Even if I manage to get the upgrade working, will I still need to
export/import the VMs via NFS, or will the datacentre move across once
it can be detached?

Thanks!

Neil


Re: [Users] Question about network card order

2012-10-03 Thread Kevin Maziere Aubry
Sure :)

2012/10/3 Simon Grinberg si...@redhat.com



 --

 *From: *Kevin Maziere Aubry kevin.mazi...@alterway.fr
 *To: *Antoni Segura Puimedon asegu...@redhat.com
 *Cc: *Michael Tsirkin mtsir...@redhat.com, Dor Laor 
 dl...@redhat.com, users@ovirt.org
 *Sent: *Wednesday, October 3, 2012 3:02:47 PM

 *Subject: *Re: [Users] Question about network card order

 The mac adress are unknown until the creation of the VM, so udev won't
 help here :(


 But after the creation of the VM and before the first run you have the
 option to edit/view those MAC addresses.
 Give whatever you want before the first run.









Re: [Users] fw: Migrating ovirt-engine to new server

2012-10-03 Thread Itamar Heim

On 10/03/2012 04:17 PM, Neil wrote:

On Wed, Oct 3, 2012 at 4:06 PM, Itamar Heim ih...@redhat.com wrote:

On 10/03/2012 04:04 PM, Neil wrote:


Thanks for coming back to me.

On Wed, Oct 3, 2012 at 4:00 PM, Itamar Heim ih...@redhat.com wrote:


do you need to keep the VMs, or just move the LUNs to create a new one?
if you just want to create a new one, you just need to clear the LUNs (DD
over them) so engine will let you use them (or remove them from first
engine
which will format them for same end goal.



I need to keep the VM's unfortunately.
Logically speaking all I need to do is detach the main data domain
from one ovirt-engine and re-attach it to the new ovirt-engine.


sadly, not that easy yet (though just discussed today the need to push this
feature).

easiest would be to export them to an nfs export domain, re-purpose the
LUNs, and import to the new system.

if not feasible, need to hack a bit probably.


Oh crumbs! I thought that was wishful thinking though :)

Exporting the VM's to NFS will take too long due to the total size
being 4TB and the VM's are a mail, proxy and pdc servers so getting
that much downtime won't be possible. Is attempting the upgrade
path(http://wiki.ovirt.org/wiki/OVirt_3.0_to_3.1_upgrade) again my
only option then?
Even if I manage to get the upgrade working will I still need to
export/import the VM's via NFS or will the datacentre move across once
it can be detached?



if you upgrade it the DC should be preserved.
juan - i remember there was a specific issue around upgrade to check, 
but don't remember if it was handled or not?



Re: [Users] fw: Migrating ovirt-engine to new server

2012-10-03 Thread Neil
On Wed, Oct 3, 2012 at 4:20 PM, Itamar Heim ih...@redhat.com wrote:
 On 10/03/2012 04:17 PM, Neil wrote:

 On Wed, Oct 3, 2012 at 4:06 PM, Itamar Heim ih...@redhat.com wrote:

 On 10/03/2012 04:04 PM, Neil wrote:


 Thanks for coming back to me.

 On Wed, Oct 3, 2012 at 4:00 PM, Itamar Heim ih...@redhat.com wrote:

 do you need to keep the VMs, or just move the LUNs to create a new one?
 if you just want to create a new one, you just need to clear the LUNs
 (DD
 over them) so engine will let you use them (or remove them from first
 engine
 which will format them for same end goal).



 I need to keep the VM's unfortunately.
 Logically speaking all I need to do is detach the main data domain
 from one ovirt-engine and re-attach it to the new ovirt-engine.


 sadly, not that easy yet (though just discussed today the need to push
 this
 feature).

 easiest would be to export them to an nfs export domain, re-purpose the
 LUNs, and import to the new system.

 if not feasible, need to hack a bit probably.


 Oh crumbs! I thought that was wishful thinking though :)

 Exporting the VM's to NFS will take too long due to the total size
 being 4TB and the VM's are a mail, proxy and pdc servers so getting
 that much downtime won't be possible. Is attempting the upgrade
 path(http://wiki.ovirt.org/wiki/OVirt_3.0_to_3.1_upgrade) again my
 only option then?
 Even if I manage to get the upgrade working will I still need to
 export/import the VM's via NFS or will the datacentre move across once
 it can be detached?


 if you upgrade it the DC should be preserved.
 juan - i remember there was a specific issue around upgrade to check, but
 don't remember if was handled or not?

Okay that is good news at least, very glad to hear!
I am upgrading from a very early 3.1 release to the latest 3.1 using
the dreyou repo, but encountered an issue: after importing my DB I
re-ran engine-setup, and it kept asking for the engine password when it
got to the point of upgrading the schema.

An idea I've just thought of which might work: if I allocate
additional LUNs (as I have spare drives inside the SAN) and mount one
locally on the new system, I can then share it via NFS to the old
system as an export domain, export the machines, then re-purpose
the old LUNs and add these as a new storage domain. Does this sound
like it might work? Logically it means the data is just copying from
one set of LUNs to the other but still remaining on the SAN.
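(For reference, the copy-within-the-SAN idea above might look roughly like this on the new engine's machine. The device path, mount point and export options are placeholders, and 36:36 is the vdsm:kvm uid/gid pair oVirt expects on storage domains — a sketch, not a tested procedure.)

```shell
# Rough sketch, run on the machine that sees the spare LUN.
# NEW_LUN and EXPORT_DIR are placeholders -- adjust to your setup.
NEW_LUN=/dev/mapper/example-wwid
EXPORT_DIR=/exports/ovirt-export

mkfs.ext4 "$NEW_LUN"               # put a filesystem on the spare LUN
mkdir -p "$EXPORT_DIR"
mount "$NEW_LUN" "$EXPORT_DIR"
chown 36:36 "$EXPORT_DIR"          # vdsm:kvm ownership expected by oVirt

# share it with the old datacentre's hosts as an NFS export domain
echo "$EXPORT_DIR *(rw,sync,no_root_squash)" >> /etc/exports
exportfs -ra
```

After attaching this share as an export domain on the old engine, the VMs can be exported to it, the domain detached, and then imported on the new engine before re-purposing the old LUNs.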

Thanks!

Neil


Re: [Users] fw: Migrating ovirt-engine to new server

2012-10-03 Thread Itamar Heim

On 10/03/2012 04:37 PM, Neil wrote:

On Wed, Oct 3, 2012 at 4:20 PM, Itamar Heim ih...@redhat.com wrote:

On 10/03/2012 04:17 PM, Neil wrote:


On Wed, Oct 3, 2012 at 4:06 PM, Itamar Heim ih...@redhat.com wrote:


On 10/03/2012 04:04 PM, Neil wrote:



Thanks for coming back to me.

On Wed, Oct 3, 2012 at 4:00 PM, Itamar Heim ih...@redhat.com wrote:


do you need to keep the VMs, or just move the LUNs to create a new one?
if you just want to create a new one, you just need to clear the LUNs
(DD
over them) so engine will let you use them (or remove them from first
engine
which will format them for same end goal).




I need to keep the VM's unfortunately.
Logically speaking all I need to do is detach the main data domain
from one ovirt-engine and re-attach it to the new ovirt-engine.



sadly, not that easy yet (though just discussed today the need to push
this
feature).

easiest would be to export them to an nfs export domain, re-purpose the
LUNs, and import to the new system.

if not feasible, need to hack a bit probably.



Oh crumbs! I thought that was wishful thinking though :)

Exporting the VM's to NFS will take too long due to the total size
being 4TB and the VM's are a mail, proxy and pdc servers so getting
that much downtime won't be possible. Is attempting the upgrade
path(http://wiki.ovirt.org/wiki/OVirt_3.0_to_3.1_upgrade) again my
only option then?
Even if I manage to get the upgrade working will I still need to
export/import the VM's via NFS or will the datacentre move across once
it can be detached?



if you upgrade it the DC should be preserved.
juan - i remember there was a specific issue around upgrade to check, but
don't remember if was handled or not?


Okay that is good news at least, very glad to hear!
I am upgrading from a very early 3.1 release to the latest 3.1 using
the dreyou repo, but encountered an issue after importing my DB I
re-ran engine-setup and it kept asking for the engine password when it
got to the point of upgrading schema.



oh, not sure - that depends on the various versions dreyou used.
so this is 3.1->3.1 (dreyou), not 3.0->3.1?


An idea I've just thought of which might work, is if I allocate
additional LUNS(as I have spare drives inside the SAN) and mount it
locally on the new system, and then share this via NFS to the old
system as an export domain, then export the machines, then re-purpose
the old LUNS and add these as a new storage domain. Does this sound
like it might work? Logically it means the data is just copying from
one set of LUNS to the other but still remaining on the SAN.


this should work.
though you are still copying all the data via the host, regardless of 
being on same SAN.



Re: [Users] fw: Migrating ovirt-engine to new server

2012-10-03 Thread Neil
On Wed, Oct 3, 2012 at 4:41 PM, Itamar Heim ih...@redhat.com wrote:
 On 10/03/2012 04:37 PM, Neil wrote:

 On Wed, Oct 3, 2012 at 4:20 PM, Itamar Heim ih...@redhat.com wrote:

 On 10/03/2012 04:17 PM, Neil wrote:


 On Wed, Oct 3, 2012 at 4:06 PM, Itamar Heim ih...@redhat.com wrote:


 On 10/03/2012 04:04 PM, Neil wrote:



 Thanks for coming back to me.

 On Wed, Oct 3, 2012 at 4:00 PM, Itamar Heim ih...@redhat.com wrote:

 do you need to keep the VMs, or just move the LUNs to create a new
 one?
 if you just want to create a new one, you just need to clear the LUNs
 (DD
 over them) so engine will let you use them (or remove them from first
 engine
 which will format them for same end goal).




 I need to keep the VM's unfortunately.
 Logically speaking all I need to do is detach the main data domain
 from one ovirt-engine and re-attach it to the new ovirt-engine.



 sadly, not that easy yet (though just discussed today the need to push
 this
 feature).

 easiest would be to export them to an nfs export domain, re-purpose the
 LUNs, and import to the new system.

 if not feasible, need to hack a bit probably.



 Oh crumbs! I thought that was wishful thinking though :)

 Exporting the VM's to NFS will take too long due to the total size
 being 4TB and the VM's are a mail, proxy and pdc servers so getting
 that much downtime won't be possible. Is attempting the upgrade
 path(http://wiki.ovirt.org/wiki/OVirt_3.0_to_3.1_upgrade) again my
 only option then?
 Even if I manage to get the upgrade working will I still need to
 export/import the VM's via NFS or will the datacentre move across once
 it can be detached?


 if you upgrade it the DC should be preserved.
 juan - i remember there was a specific issue around upgrade to check, but
 don't remember if was handled or not?


 Okay that is good news at least, very glad to hear!
 I am upgrading from a very early 3.1 release to the latest 3.1 using
 the dreyou repo, but encountered an issue after importing my DB I
 re-ran engine-setup and it kept asking for the engine password when it
 got to the point of upgrading schema.


 oh, not sure - that depends on the various versions dreyou used.
 so this is 3.1->3.1 (dreyou), not 3.0->3.1?

Correct, it's 3.1.0_0001-1.8.el6.x86_64 -> 3.1.0-3.19.el6.noarch,
and has no upgrade path. I'm also trying to separate my engine from
one of the hosts, as this was installed on one of the hosts as a test
and then we foolishly went live with it.

 An idea I've just thought of which might work, is if I allocate
 additional LUNS(as I have spare drives inside the SAN) and mount it
 locally on the new system, and then share this via NFS to the old
 system as an export domain, then export the machines, then re-purpose
 the old LUNS and add these as a new storage domain. Does this sound
 like it might work? Logically it means the data is just copying from
 one set of LUNS to the other but still remaining on the SAN.

 this should work.
 though you are still copying all the data via the host, regardless of being
 on same SAN.

True! Sounds like the upgrade path is the best route. I have mailed
the developer of dreyou as I see there is a
patch(http://www.dreyou.org/ovirt/engine31.patch) which looks like it
corrects the issues encountered, but not sure how to apply it.

The only guide I've got to work with is the
"OVirt_3.0_to_3.1_upgrade" page; not sure if this applies to me though,
considering I'm going from 3.1 to 3.1.

These are the steps I've tried.

1.) Install fresh ovirt-engine
2.) Run through engine-setup using same parameters as old server
3.) drop and import DB (http://wiki.ovirt.org/wiki/Backup_engine_db)
4.) Restore previous keystore and preserve .sh scripts per
"http://wiki.ovirt.org/wiki/OVirt_3.0_to_3.1_upgrade", but not removing
the new ovirt-engine because it's a new install on a separate system.
5.) Re-run engine-setup keeping same parameters as before, which is
where it stops and keeps asking for the engine password when it tries
to upgrade the DB schema, despite the passwords being correct.

Presumably once that works I can then shut down my old DC, but do I
need to remove the VMs once it's in maintenance? When I tried before
it said "Cannot detach Storage Domain while VMs/Templates reside on
it".

Thank you!

Neil


[Users] oVirt Weekly Sync Meeting Minutes -- 2012-10-03

2012-10-03 Thread Mike Burns
Minutes: http://ovirt.org/meetings/ovirt/2012/ovirt.2012-10-03-14.00.html
Minutes (text): http://ovirt.org/meetings/ovirt/2012/ovirt.2012-10-03-14.00.txt
Log:
http://ovirt.org/meetings/ovirt/2012/ovirt.2012-10-03-14.00.log.html

=
#ovirt: oVirt Weekly Sync
=


Meeting started by mburns at 14:00:12 UTC. The full logs are available
at http://ovirt.org/meetings/ovirt/2012/ovirt.2012-10-03-14.00.log.html
.



Meeting summary
---
* agenda and roll call  (mburns, 14:00:20)

* Release status  (mburns, 14:04:48)
  * targeting a mid november Beta and Mid December release for 3.2
(mburns, 14:05:21)
  * primarily a bug fix release  (mburns, 14:05:31)
  * feature list is still pending  (mburns, 14:05:59)
  * ACTION: mburns to follow up with maintainers to figure out feature
lists for 3.2 -- DUE by 10-Oct  (mburns, 14:07:50)
  * LINK: http://wiki.ovirt.org/wiki/Project_Proposal_-_MOM -- very old
(aglitke, 14:13:06)
  * LINK: http://wiki.ovirt.org/wiki/Category:SLA   (aglitke, 14:13:22)
  * UI plugins should be ready  (mburns, 14:13:48)
  * make network a main tab should be included  (mburns, 14:14:11)
  * import of existing gluster clusters  (mburns, 14:14:24)
  * bootstrap improvements  (mburns, 14:14:33)
  * SLA is a target for inclusion (MOM)  (mburns, 14:14:51)
  * cac support in user portal for spice  (mburns, 14:15:16)
  * vm creation base on pre-defined profiles (instance types)  (mburns,
14:16:54)
  * other small improvements, bug fixes  (mburns, 14:17:08)
  * (preview) libvdsm  (mburns, 14:17:28)
  * (abaron) storage live migration  (mburns, 14:17:41)
  * sync network, nwfilter already merged  (mburns, 14:18:17)
  * ACTION: mburns to pull all of these features into the 3.2 release
summary page  (mburns, 14:20:00)
  * libvdsm has ack from danken, but needs ack from smizrahi  (mburns,
14:23:34)
  * libvdsm also still needs upstream acceptance  (mburns, 14:29:09)
  * tentative Test day date:  2012-11-19  (mburns, 14:30:41)

* Sub Project Status -- infra  (mburns, 14:32:57)
  * proposal for hosting from Alter Way has been sent to the board for
review  (mburns, 14:35:49)
  * no real need for other sub-project reports, covered in release
status  (mburns, 14:36:52)

* workshops  (mburns, 14:36:57)

* workshops - Bangalore  (mburns, 14:37:09)
  * working through the attendee gift details  (mburns, 14:39:45)
  * 60 slots, all sold out  (mburns, 14:39:51)
  * expected gifts:  16GB USB keys with a bootable all-in-one oVirt
image ready to go that people can use in a hands-on workshop
(mburns, 14:41:01)
  * agenda being finalized now  (mburns, 14:41:57)

* workshop - Barcelona (KVM Forum/LinuxCon EU)  (mburns, 14:43:37)
  * booth setup is being finalized  (mburns, 14:43:55)
  * hardware for demo is being discussed with sponsor companies (no
commitments yet)  (mburns, 14:44:13)

* other topics  (mburns, 14:47:33)
  * looking at re-opening registration for day 2 in bangalore  (mburns,
14:59:17)
  * open items for barcelona:  booth layout, attendee gifts; announce
schedule  (mburns, 15:00:30)
  * ACTION: mburns to look into upgrade flow  (mburns, 15:07:35)

Meeting ended at 15:20:24 UTC.




Action Items

* mburns to follow up with maintainers to figure out feature lists for
  3.2 -- DUE by 10-Oct
* mburns to pull all of these features into the 3.2 release summary page
* mburns to look into upgrade flow




Action Items, by person
---
* mburns
  * mburns to follow up with maintainers to figure out feature lists for
3.2 -- DUE by 10-Oct
  * mburns to pull all of these features into the 3.2 release summary
page
  * mburns to look into upgrade flow
* **UNASSIGNED**
  * (none)




People Present (lines said)
---
* mburns (143)
* dneary (64)
* lh (22)
* ovirtbot (19)
* itamar (14)
* aglitke (13)
* oschreib (9)
* danken (7)
* mgoldboi (7)
* RobertM (5)
* jbrooks (5)
* quaid (2)
* sgordon (2)




Generated by `MeetBot`_ 0.1.4

.. _`MeetBot`: http://wiki.debian.org/MeetBot



Re: [Users] Authentication for REST APIs?

2012-10-03 Thread Brian Vetter
On Oct 3, 2012, at 12:55 AM, Itamar Heim wrote:

 snip ...
 
 So based upon what I see in this log file, it would seem that the
 connect API wants to make sure that I am an admin and not a regular user.
 
 Which gets me back to my original question: Do the REST API and the
 ovirt-shell require admin privileges or is there a separate uri
 namespace for regular users to make requests? Or perhaps more direct,
 should https://$ovirt-server/api/vms be accessible to non-admins or is
 there a different url a non-admin should use?
 
 Brian
 
 
 which version of the sdk are you using?
 michael - maybe user level api made it into upstream post ovirt 3.1 feature 
 freeze (brian, in that case, it will be in ovirt 3.2, slated for freeze in 
 november/release in december)
 


oVirt Engine version is 3.1.0-2.fc17
oVirt API/shell/tool version from yum is 3.1.0.6-1.fc17

Results from 'info' command in ovirt-shell:
[oVirt shell (connected)]# info

backend version: 3.1
sdk version: 3.1.0.4
cli version: 3.1.0.6
python version : 2.7.3.final.0

If the user level api isn't in 3.1, then I presume it would be in the nightly 
builds. Are there instructions for pulling the nightly builds and/or upgrading 
to them? I saw the build instructions, but was hoping to save some time while 
evaluating things.
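(For reference, the permission behaviour being asked about above is easy to probe with plain curl; the hostname and credentials below are placeholders, and the expected 401 for a non-admin user on 3.1 is an assumption based on this thread, not a documented guarantee.)

```shell
# Placeholder engine URL and credentials -- adjust to your setup.
ENGINE=https://ovirt.example.com

# An admin user should get the VM collection back as XML:
curl -k -u 'admin@internal:password' -H 'Accept: application/xml' "$ENGINE/api/vms"

# A regular (non-admin) user hitting the same URL on 3.1 would presumably
# be rejected, since the user-level API only landed after the 3.1 freeze:
curl -k -u 'someuser@internal:password' -H 'Accept: application/xml' "$ENGINE/api/vms"
```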

Brian



Re: [Users] Authentication for REST APIs?

2012-10-03 Thread Itamar Heim

On 10/03/2012 07:56 PM, Brian Vetter wrote:

On Oct 3, 2012, at 12:55 AM, Itamar Heim wrote:


snip ...

So based upon what I see in this log file, it would seem that the
connect API wants to make sure that I am an admin and not a regular user.

Which gets me back to my original question: Do the REST API and the
ovirt-shell require admin privileges or is there a separate uri
namespace for regular users to make requests? Or perhaps more direct,
should https://$ovirt-server/api/vms be accessible to non-admins or is
there a different url a non-admin should use?

Brian



which version of the sdk are you using?
michael - maybe user level api made it into upstream post ovirt 3.1
feature freeze (brian, in that case, it will be in ovirt 3.2, slated
for freeze in november/release in december)



oVirt Engine version is 3.1.0-2.fc17
oVirt API/shell/tool version from yum is 3.1.0.6-1.fc17

Results from 'info' command in ovirt-shell:

[oVirt shell (connected)]# info

backend version: 3.1
sdk version: 3.1.0.4
cli version: 3.1.0.6
python version : 2.7.3.final.0


If the user level api isn't in 3.1, then I presume it would be in the
nightly builds. Are there instructions for pulling the nightly builds
and/or upgrading them. I saw the build instructions, but was hoping to
save some time while evaluating things.

Brian



true, nightly builds should have them.
ofer - any wiki on how best to use the nightly builds?



Re: [Users] fw: Migrating ovirt-engine to new server

2012-10-03 Thread Juan Hernandez
On 10/03/2012 05:00 PM, Neil wrote:
 On Wed, Oct 3, 2012 at 4:41 PM, Itamar Heim ih...@redhat.com wrote:
 On 10/03/2012 04:37 PM, Neil wrote:

 On Wed, Oct 3, 2012 at 4:20 PM, Itamar Heim ih...@redhat.com wrote:

 On 10/03/2012 04:17 PM, Neil wrote:


 On Wed, Oct 3, 2012 at 4:06 PM, Itamar Heim ih...@redhat.com wrote:


 On 10/03/2012 04:04 PM, Neil wrote:



 Thanks for coming back to me.

 On Wed, Oct 3, 2012 at 4:00 PM, Itamar Heim ih...@redhat.com wrote:

 do you need to keep the VMs, or just move the LUNs to create a new
 one?
 if you just want to create a new one, you just need to clear the LUNs
 (DD
 over them) so engine will let you use them (or remove them from first
 engine
 which will format them for same end goal).




 I need to keep the VM's unfortunately.
 Logically speaking all I need to do is detach the main data domain
 from one ovirt-engine and re-attach it to the new ovirt-engine.



 sadly, not that easy yet (though just discussed today the need to push
 this
 feature).

 easiest would be to export them to an nfs export domain, re-purpose the
 LUNs, and import to the new system.

 if not feasible, need to hack a bit probably.



 Oh crumbs! I thought that was wishful thinking though :)

 Exporting the VM's to NFS will take too long due to the total size
 being 4TB and the VM's are a mail, proxy and pdc servers so getting
 that much downtime won't be possible. Is attempting the upgrade
 path(http://wiki.ovirt.org/wiki/OVirt_3.0_to_3.1_upgrade) again my
 only option then?
 Even if I manage to get the upgrade working will I still need to
 export/import the VM's via NFS or will the datacentre move across once
 it can be detached?


 if you upgrade it the DC should be preserved.
 juan - i remember there was a specific issue around upgrade to check, but
 don't remember if was handled or not?


 Okay that is good news at least, very glad to hear!
 I am upgrading from a very early 3.1 release to the latest 3.1 using
 the dreyou repo, but encountered an issue after importing my DB I
 re-ran engine-setup and it kept asking for the engine password when it
 got to the point of upgrading schema.


 oh, not sure - that depends on the various versions dreyou used.
 so this is 3.1->3.1 (dreyou), not 3.0->3.1?
 
 Correct, it's 3.1.0_0001-1.8.el6.x86_64 -> 3.1.0-3.19.el6.noarch,
 and has no upgrade path. I'm also trying to separate my engine from
 one of the hosts, as this was installed on one of the hosts as a test
 and then we foolishly went live with it.
 
 An idea I've just thought of which might work, is if I allocate
 additional LUNS(as I have spare drives inside the SAN) and mount it
 locally on the new system, and then share this via NFS to the old
 system as an export domain, then export the machines, then re-purpose
 the old LUNS and add these as a new storage domain. Does this sound
 like it might work? Logically it means the data is just copying from
 one set of LUNS to the other but still remaining on the SAN.

 this should work.
 though you are still copying all the data via the host, regardless of being
 on same SAN.
 
 True! Sounds like the upgrade path is the best route. I have mailed
 the developer of dreyou as I see there is a
 patch(http://www.dreyou.org/ovirt/engine31.patch) which looks like it
 corrects the issues encountered, but not sure how to apply it.
 
 The only guide I've got to work with is the
 OVirt_3.0_to_3.1_upgrade not sure if this applies to me though
 considering I'm going from 3.1 to 3.1.
 
 These are the steps I've tried.
 
 1.) Install fresh ovirt-engine
 2.) Run through engine-setup using same parameters as old server

Here make sure that the ovirt-engine service is stopped:

service ovirt-engine stop

 3.) drop and import DB (http://wiki.ovirt.org/wiki/Backup_engine_db)

Here, between step 3 and 4, you will need to update the database schema,
as there were probably a lot of changes between the two versions that
you are using. Go to the /usr/share/ovirt-engine/dbscripts directory and
try to run the following script:

./upgrade.sh -U postgres

Does that work?

 4.) Restore previous keystore and preserve .sh scripts
 http://wiki.ovirt.org/wiki/OVirt_3.0_to_3.1_upgrade; but not removing
 the new ovirt-engine because it's a new install on separate system.

Correct, that should work.

 5.) Re-run engine-setup keeping same parameters as before, which is
 where it stops and keeps asking for the engine password when it tries
 to upgrade the DB schema, despite the passwords being correct.

No, you should not run engine-setup again, just start the ovirt-engine
service again:

service ovirt-engine start

 Presumably once that works I can then shutdown my old DC, but do I
 need to remove the VM's once it's in maintenance? When I tried before
 it said Cannot detach Storage Domain while VMs/Templates reside on
 it.

Please report back your results or ping me in the #ovirt channel if you
have issues.
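(Pulling the corrections above together, the whole flow might look like this on the new machine. The dump path, database name and owner are assumptions carried over from earlier in the thread — a sketch, not a verified procedure.)

```shell
# 1-2) fresh install + engine-setup already done; stop the service first
service ovirt-engine stop

# 3) drop the freshly created DB and import the old dump
#    (DB name "engine" and dump path are placeholders)
su - postgres -c "dropdb engine && createdb -O engine engine"
su - postgres -c "psql -d engine -f /root/dump.sql"

# 3b) upgrade the imported schema to the new version
#     (this replaces re-running engine-setup)
cd /usr/share/ovirt-engine/dbscripts
./upgrade.sh -U postgres

# 4) restore the old keystore/certificates per the 3.0->3.1 upgrade page,
#    then just start the service -- do NOT re-run engine-setup
service ovirt-engine start
```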


-- 
Dirección Comercial: C/Jose Bardasano Baos, 9, Edif. Gorbea 3, planta

Re: [Users] oVirt Weekly Sync Meeting Minutes -- 2012-10-03

2012-10-03 Thread Itamar Heim

On 10/03/2012 05:23 PM, Mike Burns wrote:
...


* Release status  (mburns, 14:04:48)
   * targeting a mid november Beta and Mid December release for 3.2
 (mburns, 14:05:21)
   * primarily a bug fix release  (mburns, 14:05:31)
   * feature list is still pending  (mburns, 14:05:59)
   * ACTION: mburns to follow up with maintainers to figure out feature
 lists for 3.2 -- DUE by 10-Oct  (mburns, 14:07:50)
   * LINK: http://wiki.ovirt.org/wiki/Project_Proposal_-_MOM -- very old
 (aglitke, 14:13:06)
   * LINK: http://wiki.ovirt.org/wiki/Category:SLA   (aglitke, 14:13:22)
   * UI plugins should be ready  (mburns, 14:13:48)
   * make network a main tab should be included  (mburns, 14:14:11)
   * import of existing gluster clusters  (mburns, 14:14:24)
   * bootstrap improvements  (mburns, 14:14:33)
   * SLA is a target for inclusion (MOM)  (mburns, 14:14:51)
   * cac support in user portal for spice  (mburns, 14:15:16)
   * vm creation base on pre-defined profiles (instance types)  (mburns,
 14:16:54)
   * other small improvements, bug fixes  (mburns, 14:17:08)
   * (preview) libvdsm  (mburns, 14:17:28)
   * (abaron) storage live migration  (mburns, 14:17:41)
   * sync network, nwfilter already merged  (mburns, 14:18:17)
   * ACTION: mburns to pull all of these features into the 3.2 release
 summary page  (mburns, 14:20:00)
   * libvdsm has ack from danken, but needs ack from smizrahi  (mburns,
 14:23:34)
   * libvdsm also still needs upstream acceptance  (mburns, 14:29:09)
   * tentative Test day date:  2012-11-19  (mburns, 14:30:41)



i did a review of the ~1500(!) commits in engine since 3.1 was branched 
- these are items i thought worth noting wrt 3.2 till now[1], 
though obviously there are many other changes as well


- port mirroring
- user level api
- automatic storage domain upgrade
- japanese localization

the list below[2] is a fast filter by me, doing an injustice to the 
number of bugs and cleanups pushed.
anything i missed (it was a manual review filter), committers are free 
to add :)


michael - can you also outline new features in sdk/cli since 3.1 till 
now (we can update the gap with more changes closer to the release).
only core sdk/cli changes, no need to detail those which are just 
covering features added to engine.


[1] commit 1df1baa3366189043eae765bb4e61726e156892e
[2]
Alex Lourie (31):
  packaging: Updated default encryption key size to 2048

Alexey Chub (9):

Allon Mureinik (140):

Alon Bar-Lev (64):
  bootstrap: send complete bootstrap from engine
  bootstrap: introduce soft timeouts for operations
  bootstrap: allow retrieve ssh key fingerprint of server
  pki: use PKCS#12 format to store keys

Alona Kaplan (72):
  webadmin: Add sync network to setup networks
  webadmin: port mirroring should be enabled just for 3.1 and upper 
(#846025)


Amador Pahim (1):

Asaf Shakarchi (27):

Daniel Erez (88):
  webadmin: VM Snapshots sub-tab re-design
  webadmin: Disks main-tab and sub-tab redesign (#833101)
  webadmin: resizable dialogs support (#846327)
  webadmin: enable resizing of storage dialogs (#846327)

Dhandapani (4):
  engine: Added Enable / Disable CIFS option feature
  engine: Added new column 'sshKeyFingerprint' in the vds_static table
  engine: Get Gluster Servers query

Doron Fediuck (1):

Douglas Schilling Landgraf (1):

Einav Cohen (5):
  userportal,webadmin: Japanese locale support

Eli Mesika (36):
  core: adding host time-drift alert(#848862)
  core: adding host time-drift notification(#848862)

Eyal Edri (1):

Federico Simoncelli (8):
  core: expose the V3 domain format in dc 3.1

Gilad Chaplik (74):
  webadmin: quota fix-ups in webadmin
  webadmin: separate storage & cluster quota
  userportal: allow to set quota in UP
  webadmin: Import VM dialog redesign (#845947)
  webadmin: Import Template Redesign (#840280)

Greg Padgett (12):

Juan Hernandez (51):
  packaging: Run with OpenJDK 7 regardless of what is installed
  packaging: Disable HTTP and HTTPS when using Apache as proxy
  packaging: Move defaults out of service script
  core, packaging: Don't use JBoss default ports (#831193)
  packaging: Find and check JVM during setup (#834436)
  packaging, tools: Use JVM selected during setup (#834436)

Kanagaraj M (21):
  webadmin: Adding CIFS checkbox to Volume Popup
  webadmin: Gluster Volume - Optimize for Virt Store action
  engine: making gluster peer probe optional while adding Host
  webadmin: Import gluster cluster

Kiril Nesenko (12):

Laszlo Hornyak (60):

Liron Aravot (26):

Maor Lipchuk (26):

Matthias Schmitz (1):

Michael Kublin (48):
  core: Auto-Recovery should check whether getVdsStats returns 
'lastCheck>60' before it proclaims host as up (#844438)


Michael Pasternak (22):

Mike Kolesnik (69):
  engine: Add sync networks to Setup Networks (#838300)

Moti Asayag (63):

Muli Salem (32):


Re: [Users] oVirt Weekly Sync Meeting Minutes -- 2012-10-03

2012-10-03 Thread Mike Burns
On Wed, 2012-10-03 at 21:10 +0200, Itamar Heim wrote:
 On 10/03/2012 05:23 PM, Mike Burns wrote:
 ...
 
  * Release status  (mburns, 14:04:48)
 * targeting a mid november Beta and Mid December release for 3.2
   (mburns, 14:05:21)
 * primarily a bug fix release  (mburns, 14:05:31)
 * feature list is still pending  (mburns, 14:05:59)
 * ACTION: mburns to follow up with maintainers to figure out feature
   lists for 3.2 -- DUE by 10-Oct  (mburns, 14:07:50)
 * LINK: http://wiki.ovirt.org/wiki/Project_Proposal_-_MOM -- very old
   (aglitke, 14:13:06)
 * LINK: http://wiki.ovirt.org/wiki/Category:SLA   (aglitke, 14:13:22)
 * UI plugins should be ready  (mburns, 14:13:48)
 * make network a main tab should be included  (mburns, 14:14:11)
 * import of existing gluster clusters  (mburns, 14:14:24)
 * bootstrap improvements  (mburns, 14:14:33)
 * SLA is a target for inclusion (MOM)  (mburns, 14:14:51)
 * cac support in user portal for spice  (mburns, 14:15:16)
 * vm creation base on pre-defined profiles (instance types)  (mburns,
   14:16:54)
 * other small improvements, bug fixes  (mburns, 14:17:08)
 * (preview) libvdsm  (mburns, 14:17:28)
 * (abaron) storage live migration  (mburns, 14:17:41)
 * sync network, nwfilter already merged  (mburns, 14:18:17)
 * ACTION: mburns to pull all of these features into the 3.2 release
   summary page  (mburns, 14:20:00)
 * libvdsm has ack from danken, but needs ack from smizrahi  (mburns,
   14:23:34)
 * libvdsm also still needs upstream acceptance  (mburns, 14:29:09)
 * tentative Test day date:  2012-11-19  (mburns, 14:30:41)
 
 
 i did a review of the ~1500(!) commits in engine since 3.1 was branched 
 - these are items i thought worthy changes noting wrt 3.2 till now[1], 
 though obviously there are many other changes as well

Thanks!

 
 - port mirroring
 - user level api
 - automatic storage domain upgrade
 - japanese localization
 
 the list below[2] is a fast filter by me, doing an injustice to the 
 number of bugs and cleanups pushed.
 anything i missed (it was a manual review filter), committers are free 
 to add :)
 
 michael - can you also outline new features in sdk/cli since 3.1 till 
 now (we can update the gap with more changes closer to the release).
 only core sdk/cli changes, no need to detail those which are just 
 covering features added to engine.
 
 [1] commit 1df1baa3366189043eae765bb4e61726e156892e
 [2]
 Alex Lourie (31):
packaging: Updated default encryption key size to 2048
 
 Alexey Chub (9):
 
 Allon Mureinik (140):
 
 Alon Bar-Lev (64):
bootstrap: send complete bootstrap from engine
bootstrap: introduce soft timeouts for operations
bootstrap: allow retrieve ssh key fingerprint of server
pki: use PKCS#12 format to store keys
 
 Alona Kaplan (72):
webadmin: Add sync network to setup networks
webadmin: port mirroring should be enabled just for 3.1 and upper 
 (#846025)
 
 Amador Pahim (1):
 
 Asaf Shakarchi (27):
 
 Daniel Erez (88):
webadmin: VM Snapshots sub-tab re-design
webadmin: Disks main-tab and sub-tab redesign (#833101)
webadmin: resizable dialogs support (#846327)
webadmin: enable resizing of storage dialogs (#846327)
 
 Dhandapani (4):
engine: Added Enable / Disable CIFS option feature
engine: Added new column 'sshKeyFingerprint' in the vds_static table
engine: Get Gluster Servers query
 
 Doron Fediuck (1):
 
 Douglas Schilling Landgraf (1):
 
 Einav Cohen (5):
userportal,webadmin: Japanese locale support
 
 Eli Mesika (36):
core: adding host time-drift alert(#848862)
core: adding host time-drift notification(#848862)
 
 Eyal Edri (1):
 
 Federico Simoncelli (8):
core: expose the V3 domain format in dc 3.1
 
 Gilad Chaplik (74):
webadmin: quota fix-ups in webadmin
   webadmin: separate storage & cluster quota
userportal: allow to set quota in UP
webadmin: Import VM dialog redesign (#845947)
webadmin: Import Template Redesign (#840280)
 
 Greg Padgett (12):
 
 Juan Hernandez (51):
packaging: Run with OpenJDK 7 regardless of what is installed
packaging: Disable HTTP and HTTPS when using Apache as proxy
packaging: Move defaults out of service script
core, packaging: Don't use JBoss default ports (#831193)
packaging: Find and check JVM during setup (#834436)
packaging, tools: Use JVM selected during setup (#834436)
 
 Kanagaraj M (21):
webadmin: Adding CIFS checkbox to Volume Popup
webadmin: Gluster Volume - Optimize for Virt Store action
engine: making gluster peer probe optional while adding Host
webadmin: Import gluster cluster
 
 Kiril Nesenko (12):
 
 Laszlo Hornyak (60):
 
 Liron Aravot (26):
 
 Maor Lipchuk (26):
 
 Matthias Schmitz (1):
 
 Michael Kublin (48):
core:  

[Users] oVirt-engine and UserPortal cluster

2012-10-03 Thread Brian Vetter
I've been scouring through the install notes and the architecture documents but 
didn't find my answer. Is there a way to cluster or replicate the userportal 
app, or is it strictly a single instance? Any thoughts on the scale of a large 
VDI system with 10,000 desktops and their VMs, and how that impacts the 
ovirt-engine and the user-portal app?

I figure it has been discussed, but using the word cluster in a google search 
of the wiki results in a lot of hits, none of which (that I saw) have to do 
with clustering the server itself, just the virtual machine nodes.

Brian



[Users] [oVirt 3.1] Engine multipath

2012-10-03 Thread Andres Gonzalez
Hi,

I have an iSCSI LUN that is accessed from one NIC with an IP address. I would
like to have redundancy, but as the storage side cannot be set up for bonding,
is it possible to manage something like multipathing from the oVirt Engine
and access the same LUN from 2 different IP addresses (and NICs)?

Thanks!


-- 
AGD