[Users] Fwd: API usage - 3.1

2013-01-13 Thread Eli Mesika


- Forwarded Message -
From: Tom Brown t...@ng23.net
To: users users@ovirt.org
Sent: Friday, January 11, 2013 6:27:03 PM
Subject: [Users] API usage - 3.1

Trying to get going adding VMs via the API, and so far I have managed to get 
quite far - however I am facing this:

vm_template = """<vm>
<name>%s</name>
<cluster>
  <name>Default</name>
</cluster>
<template>
  <name>Blank</name>
</template>
<vm_type>server</vm_type>
<memory>536870912</memory>
<os>
  <boot dev="hd"/>
</os>
</vm>"""

The VM is created, but the type ends up being a desktop and not a server.

What did I do wrong?

Michael, can you take a look at this? Thanks

thanks
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Fwd: API usage - 3.1

2013-01-13 Thread Michael Pasternak

Hi Tom,

- Original Message -
 Subject: [Users] API usage - 3.1
 Date: Fri, 11 Jan 2013 16:27:03 +
 From: Tom Brown t...@ng23.net
 To: users users@ovirt.org
 
 Trying to get going adding VMs via the API, and so far I have managed to get 
 quite far - however I am facing this:
 
 vm_template = """<vm>
 <name>%s</name>
 <cluster>
   <name>Default</name>
 </cluster>
 <template>
   <name>Blank</name>
 </template>
 <vm_type>server</vm_type>
 <memory>536870912</memory>
 <os>
   <boot dev="hd"/>
 </os>
 </vm>"""
 
 The VM is created, but the type ends up being a desktop and not a server.
 
 What did I do wrong?

the name of the element is "type" (not "vm_type").
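
For reference, a minimal sketch of the corrected call (the engine host name, VM
name and password are placeholders; assumes the stock /api entry point of an
oVirt 3.1 engine):

cat > vm.xml <<'EOF'
<vm>
  <name>test-vm</name>
  <cluster><name>Default</name></cluster>
  <template><name>Blank</name></template>
  <type>server</type>
  <memory>536870912</memory>
  <os><boot dev="hd"/></os>
</vm>
EOF
# -k skips certificate checks; drop it once the engine CA cert is trusted
curl -k -u 'admin@internal:PASSWORD' -H 'Content-Type: application/xml' \
     -X POST -d @vm.xml 'https://ENGINE-HOST/api/vms'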

 


-- 

Michael Pasternak
Red Hat, ENG-Virtualization R&D
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Testing High Availability and Power outages

2013-01-13 Thread Alexandru Vladulescu
Dear Doron,

I haven't collected the logs from the tests, but I would gladly re-do the case 
and get back to you asap.

This feature is the main reason I chose to go with oVirt in the first place, 
over other virt environments.

Could you please tell me which logs I should focus on besides the engine log - 
vdsm, maybe, or other relevant logs?

Regards,
Alex


--
Sent from phone.

On 13.01.2013, at 09:56, Doron Fediuck dfedi...@redhat.com wrote:

 
 
 From: Alexandru Vladulescu avladule...@bfproject.ro
 To: users users@ovirt.org
 Sent: Friday, January 11, 2013 2:47:38 PM
 Subject: [Users] Testing High Availability and Power outages
 
 
 Hi,
 
 
 Today I started testing the High Availability features and the fence 
 mechanism on my oVirt 3.1 installation (from the dreyou repos) running on 
 3 x CentOS 6.3 hypervisors.
 
 As I reported yesterday in a previous email thread, the migration priority 
 queue cannot be increased (a bug) in the current version, so I decided to 
 test what the official documentation says about the High Availability cases.
 
 This would be a disaster scenario if one hypervisor suffered a power 
 outage/hardware problem and the VMs running on it did not migrate to other 
 spare resources.
 
 
 The official documentation on ovirt.org quotes the following:
 
 High availability
 
 Allows critical VMs to be restarted on another host in the event of hardware 
 failure with three levels of priority, taking into account resiliency policy.
 
 * Resiliency policy to control high availability VMs at the cluster level.
 * Supports application-level high availability with supported fencing agents.
 
 As well as in the Architecture description:
 
 High Availability - restart guest VMs from failed hosts automatically on 
 other hosts
 
 
 
 So the testing went like this: one VM running a Linux box, with the "High 
 Available" check box set and "Priority for Run/Migration queue" set to Low. 
 Under Host we have the check box set to "Any Host in Cluster", without "Allow 
 VM migration only upon Admin specific request" checked.
 
 
 
 My environment:
 
 
 Configuration: 2 x hypervisors (same cluster/hardware configuration); 1 x 
 hypervisor also acting as a NAS (NFS) server (different cluster/hardware 
 configuration)
 
 Actions: I cut off the power to one of the hypervisors in the 2-node cluster 
 while the VM was running on it. This translates to a power outage.
 
 Results: the hypervisor node that suffered the outage shows up in the Hosts 
 tab with status "Non Responsive", and the VM has a question mark and cannot 
 be powered off or anything else (it is stuck).
 
 In the Log console in the GUI, I get:
 
 Host Hyper01 is non-responsive.
 VM Web-Frontend01 was set to the Unknown status.
 
 There was nothing I could do besides clicking "Confirm Host has been 
 rebooted" on Hyper01; afterwards the VM started on Hyper02 with a cold 
 reboot of the VM.
 
 The Log console changes to:
 
 Vm Web-Frontend01 was shut down due to Hyper01 host reboot or manual fence
 All VMs' status on Non-Responsive Host Hyper01 were changed to 'Down' by 
 admin@internal
 Manual fencing for host Hyper01 was started.
 VM Web-Frontend01 was restarted on Host Hyper02
 
 
 I would like your take on this problem. Reading the documentation & 
 features pages on the official website, I supposed this would have been an 
 automatic mechanism based on some sort of vdsm & engine fencing action. Am 
 I missing something regarding it?
 
 
 Thank you for your patience reading this.
 
 
 Regards,
 Alex.
 
 
 
 
Hi Alex,
Can you share with us the engine's log from the relevant time period?

Doron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Testing High Availability and Power outages

2013-01-13 Thread Doron Fediuck
- Original Message -

 From: Alexandru Vladulescu avladule...@bfproject.ro
 To: Doron Fediuck dfedi...@redhat.com
 Cc: users users@ovirt.org
 Sent: Sunday, January 13, 2013 10:46:41 AM
 Subject: Re: [Users] Testing High Availability and Power outages

Hi Alex, 
The engine log is the important one, as it will show the decision-making 
process. The VDSM logs should be kept in case something is unclear, but I 
suggest we begin with engine.log. 
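
For reference, a minimal way to pull the relevant window out of the logs
(default paths for a stock install; adjust if yours differ):

# on the engine host
grep 'Hyper01' /var/log/ovirt-engine/engine.log > engine-extract.log
# on each hypervisor, keep the vdsm logs around in case they are needed later
ls -l /var/log/vdsm/vdsm.log*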
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] win 7 vm creation:

2013-01-13 Thread Livnat Peer
Hi Gianluca,
From looking at the logs, it looks like you had a DB upgrade issue or a
DB clean-install issue.

Caused by: org.postgresql.util.PSQLException: The column name
count_threads_as_cores was not found in this ResultSet

Adding Doron to take a look; I think Einav also had some DB upgrade issue 
last week...

After fixing your DB issues, can you tell me if you have a host in status
Up? I don't see any network-related issue in the logs.


Livnat

On 01/05/2013 03:07 AM, Gianluca Cecchi wrote:
 Current network config on host:
 
 # ip addr list
 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
     inet 127.0.0.1/8 scope host lo
     inet6 ::1/128 scope host
        valid_lft forever preferred_lft forever
 2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
     link/ether 00:1e:0b:21:b8:c4 brd ff:ff:ff:ff:ff:ff
     inet6 fe80::21e:bff:fe21:b8c4/64 scope link
        valid_lft forever preferred_lft forever
 3: em2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
     link/ether 00:1e:0b:21:b8:c6 brd ff:ff:ff:ff:ff:ff
     inet6 fe80::21e:bff:fe21:b8c6/64 scope link
        valid_lft forever preferred_lft forever
 4: em3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovirtmgmt state UP qlen 1000
     link/ether 00:1c:c4:ab:3a:dd brd ff:ff:ff:ff:ff:ff
 5: em4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
     link/ether 00:1c:c4:ab:3a:de brd ff:ff:ff:ff:ff:ff
     inet6 fe80::21c:c4ff:feab:3ade/64 scope link
        valid_lft forever preferred_lft forever
 6: em3.65@em3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
     link/ether 00:1c:c4:ab:3a:dd brd ff:ff:ff:ff:ff:ff
     inet 10.4.4.59/24 brd 10.4.4.255 scope global em3.65
     inet6 fe80::21c:c4ff:feab:3add/64 scope link
        valid_lft forever preferred_lft forever
 11: ;vdsmdummy;: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
     link/ether 96:14:65:da:bc:c5 brd ff:ff:ff:ff:ff:ff
 12: bond0: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop state DOWN
     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
 13: bond4: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop state DOWN
     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
 14: bond1: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop state DOWN
     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
 15: ovirtmgmt: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
     link/ether 00:1c:c4:ab:3a:dd brd ff:ff:ff:ff:ff:ff
     inet6 fe80::21c:c4ff:feab:3add/64 scope link
        valid_lft forever preferred_lft forever
 
 # brctl show
 bridge name     bridge id           STP enabled     interfaces
 ;vdsmdummy;     8000.               no
 ovirtmgmt       8000.001cc4ab3add   no              em3
 
 Do I perhaps have to configure ovirtmgmt as tagged in some way?
 
 
 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] What do you want to see in oVirt next?

2013-01-13 Thread Sigbjorn Lie

On 01/03/2013 05:08 PM, Itamar Heim wrote:

Hi Everyone,

as we wrap oVirt 3.2, I wanted to check with oVirt users on what they 
find good/useful in oVirt, and what they would like to see 
improved/added in coming versions?


Thanks,
   Itamar


I would also like to see single sign-on capabilities (Kerberos) in the 
WebAdmin and UserPortal when using IPA as the authentication source.



Regards,
Siggi

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Can I move local_cluster in all-in-one setup?

2013-01-13 Thread Mike Kolesnik
- Original Message -

 On Thu, Jan 10, 2013 at 5:10 PM, René Koch (ovido) wrote:

  If not, please do a resync.
 
 It seems it is not in sync, if I understand the two-arrows symbol 
 correctly in:
 https://docs.google.com/file/d/0BwoPbcrMv8mvUFFVaVl1TTlVVVE/edit

 My network page instead looks like this:
 https://docs.google.com/file/d/0BwoPbcrMv8mveERiMUlKY094TVk/edit

 The problem is that I'm not able to get it synced; I tried both with
 and without selecting the verify checkbox at the bottom.
Hi Gianluca, 

Can you please send the engine + vdsm logs from when you tried to check the 
Sync checkbox and ran the Setup Networks command on this host? 

It is out of sync because the engine network is sitting directly on top of the 
interface instead of the VLAN, but it should be fixed once the Setup Networks 
command is done. 
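
To see the mismatch from the host side, one option (a sketch assuming a stock
vdsm install; run on the host itself) is:

vdsClient -s 0 getVdsCaps | grep -iA2 'networks'   # what vdsm reports the nets sit on
cat /proc/net/vlan/config                          # kernel view of the VLAN devices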

Regards, 
Mike 

 it seems like a dog trying to bite its tail (at least that's what we say
 in Italy in situations like this... ;-)

 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] win 7 vm creation:

2013-01-13 Thread Doron Fediuck
Gianluca,
I do not see all the details, but based on Livnat's mail it seems
that your DB needs an upgrade; it will pick up this addition:

backend/manager/dbscripts/upgrade/03_02_0100_add_cpu_thread_columns.sql:
select fn_db_add_column('vds_groups', 'count_threads_as_cores',
'BOOLEAN NOT NULL DEFAULT FALSE');
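
A quick way to check whether that column actually landed (a sketch assuming
the default 'engine' database name; run on the engine host):

psql -U postgres engine -c "select column_name from information_schema.columns where table_name = 'vds_groups' and column_name = 'count_threads_as_cores';"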

Is your setup based on RPMs or built from source?


- Original Message -
 From: Livnat Peer lp...@redhat.com
 To: Gianluca Cecchi gianluca.cec...@gmail.com
 Cc: users users@ovirt.org, Doron Fediuck dfedi...@redhat.com, Einav 
 Cohen eco...@redhat.com
 Sent: Sunday, January 13, 2013 12:27:57 PM
 Subject: Re: [Users] win 7 vm creation:
 
 Hi Gianluca,
 From looking at the logs, it looks like you had a DB upgrade issue or a
 DB clean-install issue.
 
 Caused by: org.postgresql.util.PSQLException: The column name
 count_threads_as_cores was not found in this ResultSet
 
 Adding Doron to take a look; I think Einav also had some DB upgrade
 issue last week...
 
 After fixing your DB issues, can you tell me if you have a host in
 status Up? I don't see any network-related issue in the logs.
 
 
 Livnat
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] win 7 vm creation:

2013-01-13 Thread Gianluca Cecchi
On Sun, Jan 13, 2013 at 12:37 PM, Doron Fediuck wrote:


 Is your setup based on RPMs or built from source?


When I had the problem, it was based on ovirt-nightly for Fedora 18, version
3.2.0-1.20130101.git2184580, so I don't know what the contents of the
indicated file were at that time...
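
For what it's worth, a sketch for checking which upgrade scripts a given
engine DB has applied (assumes the stock schema_version bookkeeping table and
the default 'engine' DB name):

psql -U postgres engine -c "select * from schema_version order by id desc limit 5;"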
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] ovirt fails to attach gluster volume

2013-01-13 Thread Jithin Raju
Hi Vijay,

It's a fresh Fedora installation and I didn't change the SELinux mode, so it
should be enforcing.
Shall I change it to permissive? Or is there a particular SELinux config
for vdsm/gluster so that I can keep SELinux in enforcing mode?
I can update you with the results after setting SELinux to permissive tomorrow.
One quick question: is this storage mounting on the nodes run as the vdsm user?

Thanks,
Jithin
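
For reference, a minimal way to check whether SELinux is what is blocking the
mount (standard Fedora tooling; the setenforce change is temporary and does
not survive a reboot):

getenforce                                               # current mode
grep denied /var/log/audit/audit.log | grep -i gluster   # look for AVC denials
setenforce 0                                             # switch to permissive for a test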


On Fri, Jan 11, 2013 at 10:56 PM, Vijay Bellur vbel...@redhat.com wrote:

 On 01/11/2013 12:56 PM, Jithin Raju wrote:

 Traceback (most recent call last):
   File "/usr/share/vdsm/storage/hsm.py", line 1929, in connectStorageServer
     conObj.connect()
   File "/usr/share/vdsm/storage/storageServer.py", line 179, in connect
     self._mount.mount(self.options, self._vfsType)
   File "/usr/share/vdsm/storage/mount.py", line 190, in mount
     return self._runcmd(cmd, timeout)
   File "/usr/share/vdsm/storage/mount.py", line 206, in _runcmd
     raise MountError(rc, ";".join((out, err)))
 MountError: (1, 'Mount failed. Please check the log file for more
 details.\n;ERROR: failed to create logfile
 /var/log/glusterfs/rhev-data-center-mnt-fig:_vol1.log (Permission
 denied)\nERROR: failed to open logfile
 /var/log/glusterfs/rhev-data-center-mnt-fig:_vol1.log\n')



 Do you have selinux in enforcing mode on the host?

 Thanks,
 Vijay

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] Video from NetApp event?

2013-01-13 Thread Peter Styk
Are there any plans to shoot some video at the NetApp event? It would be
really cool to embed some YouTube videos on the wiki later on.

Best
polfilm
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users