Re: [Users] API read-only access / roles

2014-02-23 Thread Alon Bar-Lev


- Original Message -
 From: Yair Zaslavsky yzasl...@redhat.com
 To: Juan Hernandez jhern...@redhat.com
 Cc: Sven Kieske s.kie...@mittwald.de, Users@ovirt.org List 
 Users@ovirt.org, Itamar Heim ih...@redhat.com,
 Alon Bar-Lev alo...@redhat.com
 Sent: Sunday, February 23, 2014 8:55:07 AM
 Subject: Re: [Users] API read-only access / roles
 
 
 
 - Original Message -
  From: Juan Hernandez jhern...@redhat.com
  To: Sven Kieske s.kie...@mittwald.de, Users@ovirt.org List
  Users@ovirt.org
  Cc: Itamar Heim ih...@redhat.com, Yair Zaslavsky
  yzasl...@redhat.com
  Sent: Saturday, February 22, 2014 2:22:14 PM
  Subject: Re: [Users] API read-only access / roles
  
  On 02/20/2014 04:51 PM, Itamar Heim wrote:
   On 02/20/2014 05:24 PM, Sven Kieske wrote:
   Hi,
  
   is nobody interested in this feature at all?
    it would be a huge security gain, while lowering
    the bar for having a read-only user, if this could get shipped with 3.4:
   
    we are very interested, but we want to do this based on the
    authentication re-factoring, which, in itself, barely made the 3.4
    timeline.
    Yair - are we pluggable yet, so that someone could add such a user by
    dropping a jar somewhere, or is that still ongoing work towards 3.5?
 
 As Juan mentioned in his email, it should be possible to plug in at 3.4 as
 well.
 However, we're changing the configuration format in 3.5 as we move to the
 extensions mechanism - both Directory and Authenticator are extensions, so the
 configuration for the directory (authorization extension) and the authenticator
 (authentication extension) will look a bit different.

Hello,

Until we announce a public interface for aaa (authentication, authorization,
accounting), the implementation is internal and should not be used by external
projects.

We are heading for publishing the interface within 3.5 timeline.

Thanks,
Alon

 
 
 
 
   
  
  Pluggability of authentication already works in 3.4. By default it uses
  the previous mechanism, but the administrator can change this. In order
  to change you need to create the /etc/ovirt-engine/auth.conf.d directory
  and then create inside one or more authentication profiles
  configuration files. An authentication profile is a combination of an
  authenticator and a directory. The authenticator is used to check
  the credentials (the user name and password) and the directory is used
  to search users and their details. For example, if you want to use local
  authentication (the users, passwords, and groups of the OS) you can
  create a local.conf file with the following content:
  
#
# The name of the profile. This is what will be displayed in the
# combo box in the login page.
#
name=local
  
#
# Needed to enable the profile, by default all profiles are
# disabled.
#
enabled=true
  
#
# The configuration of the authenticator used by the profile. The
# type and the module are mandatory, the rest are optional and
# the default values are as shown below.
#
authenticator.type=ssh
authenticator.module=org.ovirt.engine.core.authentication.ssh
# authenticator.host=localhost
# authenticator.port=22
# authenticator.timeout=10
  
#
# The configuration of the directory:
#
directory.type=nss
directory.module=org.ovirt.engine.core.authentication.nss
  
  For this to work you need to install some additional modules, which
  aren't currently part of the engine. This is where pluggability comes into
  play. These modules can be built externally. I created modules for SSH
  authentication and NSS (Name Service Switch) directory. The source is
  available here:
  
  https://github.com/jhernand/ovirt-engine-ssh-authenticator
  https://github.com/jhernand/ovirt-engine-nss-directory
  
  The NSS directory also needs JNA (Java Native Access):
  
  https://github.com/jhernand/ovirt-engine-jna-module
  
  Installing these extensions is very easy, just build from source and
  uncompress the generated .zip files to /usr/share/ovirt-engine/modules.
  In case you don't want to build from source you can use the RPMs that I
  created. The source for the .spec files is here:
  
  https://github.com/jhernand/ovirt-engine-rpms
  
  If you don't want to build from source you can use a yum repository that
  I created with binaries for Fedora 20 (should work in CentOS as well):
  
  http://jhernand.fedorapeople.org/repo
  
  So, to summarize:
  
  # cat > /etc/yum.repos.d/my.repo <<.
  [my]
  name=my
  baseurl=http://jhernand.fedorapeople.org/repo
  enabled=1
  gpgcheck=0
  .
  
  # yum -y install \
  ovirt-engine-ssh-authenticator \
  ovirt-engine-nss-directory
  
  # mkdir -p /etc/ovirt-engine/auth.conf.d
  
  # cat > /etc/ovirt-engine/auth.conf.d/local.conf <<.
  name=local
  enabled=true
  authenticator.type=ssh
  authenticator.module=org.ovirt.engine.core.authentication.ssh
  directory.type=nss
  directory.module=org.ovirt.engine.core.authentication.nss
  .
  
  # 
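  Once the profile is enabled, users from it log in as user@profile (the same
  convention as admin@internal). A minimal sketch of checking the setup against
  the REST API - the OS user name "admin" and the engine URL are illustrative,
  not something defined above:

  # curl --insecure -u "admin@local:password" \
        -H "Accept: application/xml" \
        https://engine.example.com/api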

Re: [Users] Cannot delete storage connection

2014-02-23 Thread Meital Bourvine
Can you please open a bug?

- Original Message -
 From: sirin ar...@e-inet.ru
 To: Meital Bourvine mbour...@redhat.com
 Cc: users@ovirt.org
 Sent: Saturday, February 22, 2014 9:07:36 PM
 Subject: Re: [Users] Cannot delete storage connection
 
 Hi,
 
 in logs engine.log
 
 2014-02-22 23:02:17,446 ERROR
 [org.ovirt.engine.api.restapi.resource.AbstractBackendResource]
 (ajp-/127.0.0.1:8702-8) Operation Failed: Entity not found: null
 
 vdsm log attached
 
 
 
 
 Artem
 
 On Feb 22, 2014, at 22:38, Meital Bourvine mbour...@redhat.com wrote:
 
  Sounds like a bug to me.
  
  Can you please attach engine.log and vdsm.log?
  
  - Original Message -
  From: sirin ar...@e-inet.ru
  To: users@ovirt.org
  Sent: Saturday, February 22, 2014 8:28:54 PM
  Subject: [Users] Cannot delete storage connection
  
  Hi all,
  
  i have next connection
  
   <storage_connections>
   <storage_connection
   href="/api/storageconnections/d94d9341-6116-4b1a-94c5-5c3327aa1b09"
   id="d94d9341-6116-4b1a-94c5-5c3327aa1b09">
   <address>192.168.0.171</address>
   <type>nfs</type>
   <path>/srv/lstore/vm</path>
   </storage_connection>

   <storage_connection
   href="/api/storageconnections/67539ba5-9b6d-46df-8c96-4acd3f212f4a"
   id="67539ba5-9b6d-46df-8c96-4acd3f212f4a">
   <address>rhevm.cebra.lab</address>
   <type>nfs</type>
   <path>/var/lib/exports/iso</path>
   </storage_connection>

   <storage_connection
   href="/api/storageconnections/fdc92419-b278-4b11-9eba-f68fd4914132"
   id="fdc92419-b278-4b11-9eba-f68fd4914132">
   <address>192.168.0.171</address>
   <type>nfs</type>
   <path>/srv/store/vm</path>
   </storage_connection>

   <storage_connection
   href="/api/storageconnections/92fc6cf3-17b1-4b69-af80-5782270137ed"
   id="92fc6cf3-17b1-4b69-af80-5782270137ed">
   <address>192.168.0.171</address>
   <type>nfs</type>
   <path>/srv/bstore/vm</path>
   </storage_connection>
   </storage_connections>
  
  
  I want to remove this connection
   id="d94d9341-6116-4b1a-94c5-5c3327aa1b09"
  but… fail
  
  [RHEVM shell (connected)]# show storageconnection
  d94d9341-6116-4b1a-94c5-5c3327aa1b09
  
  id : d94d9341-6116-4b1a-94c5-5c3327aa1b09
  address: 192.168.0.171
  path   : /srv/lstore/vm
  type   : nfs
  
  [RHEVM shell (connected)]# remove storageconnection
  d94d9341-6116-4b1a-94c5-5c3327aa1b09
  
  error:
  status: 404
  reason: Not Found
  detail: Entity not found: null
  
  [RHEVM shell (connected)]# show storageconnection
  d94d9341-6116-4b1a-94c5-5c3327aa1b09
  
  id : d94d9341-6116-4b1a-94c5-5c3327aa1b09
  address: 192.168.0.171
  path   : /srv/lstore/vm
  type   : nfs
  
  okay… i use curl with DELETE
  
   [root@rhevhst ~]# curl -X GET -H "Accept: application/xml" -u
   admin@internal:pass
   https://192.168.0.170/api/storageconnections/d94d9341-6116-4b1a-94c5-5c3327aa1b09
   --insecure
   <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
   <storage_connection
   href="/api/storageconnections/d94d9341-6116-4b1a-94c5-5c3327aa1b09"
   id="d94d9341-6116-4b1a-94c5-5c3327aa1b09">
   <address>192.168.0.171</address>
   <type>nfs</type>
   <path>/srv/lstore/vm</path>
   </storage_connection>
  [root@rhevhst ~]#
  
   [root@rhevhst ~]# curl -X DELETE -H "Accept: application/xml" -u
   admin@internal:pass
   https://192.168.0.170/api/storageconnections/d94d9341-6116-4b1a-94c5-5c3327aa1b09
   --insecure
   <?xml version="1.0" encoding="UTF-8"
   standalone="yes"?><fault><reason>Operation Failed</reason><detail>Entity
   not found: null</detail></fault>
  [root@rhevhst ~]#
  
   How can I remove the connection? Is this a bug?
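   For reference, a hedged sketch of the same DELETE with a host passed in the
   request body - assuming the engine version accepts an optional host element
   on this resource; the host name "hv1" is illustrative:

   curl -X DELETE --insecure -u admin@internal:pass \
        -H "Content-Type: application/xml" -H "Accept: application/xml" \
        -d "<host><name>hv1</name></host>" \
        https://192.168.0.170/api/storageconnections/d94d9341-6116-4b1a-94c5-5c3327aa1b09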
  
  Artem
  
  
  ___
  Users mailing list
  Users@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/users
  
 
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Cancel Disk Migration

2014-02-23 Thread Meital Bourvine
You can try restarting the services and see if that helps. 
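For reference, a minimal sketch of what "restart the services" typically means
on an EL6 setup (service names assume the default packages):

# on the engine machine
service ovirt-engine restart
# on the hypervisor host performing the disk move
service vdsmd restart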

- Original Message -

 From: Maurice James midnightst...@msn.com
 To: users@ovirt.org
 Sent: Friday, February 21, 2014 7:38:07 PM
 Subject: [Users] Cancel Disk Migration

 How do I cancel a live disk migration? I have a 13GB disk that I was trying
 to move between storage domains and it looks like it's hung up

 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Opinions needed: 3 node gluster replica 3 | NFS async | snapshots for consistency

2014-02-23 Thread Ayal Baron


- Original Message -
 I'm looking for some opinions on this configuration in an effort to increase
 write performance:
 
 3 storage nodes using glusterfs in replica 3, quorum.

gluster doesn't support replica 3 yet, so I'm not sure how heavily I'd rely on 
this.

 Ovirt storage domain via NFS

why NFS and not gluster?

 Volume set nfs.trusted-sync on
 On Ovirt, taking snapshots often enough to recover from a storage crash

Note that this would have negative write performance impact

 Using CTDB to manage NFS storage domain IP, moving it to another storage node
 when necessary
 
 Something along the lines of EC2's data consistency model, where only
 snapshots can be considered reliable. The Ovirt added advantage would be
 memory consistency at time of snapshot as well.
 
 Feedback appreciated, including 'you are insane for thinking this is a good
 idea' (and some supported reasoning would be great).
 
 Thanks,
 
 
 
 Steve Dainard
 IT Infrastructure Manager
 Miovision | Rethink Traffic
 
 Blog | LinkedIn | Twitter | Facebook
 Miovision Technologies Inc. | 148 Manitou Drive, Suite 101, Kitchener, ON,
 Canada | N2C 1L3
 This e-mail may contain information that is privileged or confidential. If
 you are not the intended recipient, please delete the e-mail and any
 attachments and notify us immediately.
 
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] spice password

2014-02-23 Thread Koen Vanoppen
Thanx. I just received this from one of the developers so... I'll ask him
tomorrow. Thanx for the effort!!
On Feb 23, 2014 7:25 AM, Yedidyah Bar David d...@redhat.com wrote:

 What wasn't asked? A password? You supply it on the command line, example
 below.

 --

 From: Koen Vanoppen vanoppen.k...@gmail.com
 To: Yedidyah Bar David d...@redhat.com, users@ovirt.org
 Sent: Thursday, February 20, 2014 5:57:37 PM
 Subject: Re: spice password

 No, it wasn't asked...
 On Feb 20, 2014 4:17 PM, Yedidyah Bar David d...@redhat.com wrote:

  From: Koen Vanoppen vanoppen.k...@gmail.com
 To: Yedidyah Bar David d...@redhat.com, users@ovirt.org
 Sent: Thursday, February 20, 2014 5:06:37 PM
 Subject: Re: [Users] (no subject)

 Thanx for the answer. But he successfully created a ticket


 With setVmTicket? So he supplied some password there, right?

  and received a number, but when he then starts the client again, as
 asked ("Connect to the client again (again, r-v will ask for the password
 in a pop-up window):"), he has to give a password.


 Yes - the password set with setVmTicket...

 E.g.:
 vdsClient localhost setVmTicket $vmid topsecret 120
 then start the spice client and input as password: topsecret
 (and it will expire in 120 seconds).
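 The same ticket can also be set through the engine REST API - a sketch, with
 the engine address and VM id as placeholders (the vdsClient call above is the
 example actually given in this thread):

 curl --insecure -u "admin@internal:password" \
      -H "Content-Type: application/xml" -H "Accept: application/xml" \
      -d "<action><ticket><value>topsecret</value><expiry>120</expiry></ticket></action>" \
      https://ENGINE/api/vms/VMID/ticket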

  Maybe important. The username field is empty and can't be modified.


 Indeed.




 Kind regards,

 Koen


 2014-02-20 16:03 GMT+01:00 Yedidyah Bar David d...@redhat.com:

 From: Koen Vanoppen vanoppen.k...@gmail.com
 To: users@ovirt.org
 Sent: Thursday, February 20, 2014 4:56:10 PM
 Subject: [Users] (no subject)

 Hey Guys,

 I'm back ;-). This time I have a question from one of our programmers.
 He's trying this:

 http://www.ovirt.org/How_to_Connect_to_SPICE_Console_Without_Portal#Connecting_Using_REST_API

 But he bumps into this:

 Connect to the client again (again, r-v will ask for the password in a
 pop-up window):

  bash$ remote-viewer --spice-ca-file ${CA_FILE} --spice-host-subject "${SUBJECT}" \
        spice://${HOST}/?port=${PORT}\&tls-port=${SPORT}

 Now, the question is: what's the password? Or where can I find it?


 I think you need to set it with setVmTicket - see that page for an
 example.
 --
 Didi



 --
 Didi




 --
 Didi


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Problems with Scientific Linux and ovirt-release-11.0.0

2014-02-23 Thread Meital Bourvine
Hi Jimmy, 

I sent the patch for you: 
http://gerrit.ovirt.org/#/c/24869/2 

- Original Message -

 From: James James jre...@gmail.com
 To: Jimmy Dorff jdo...@phy.duke.edu
 Cc: Sandro Bonazzola sbona...@redhat.com, Meital Bourvine
 mbour...@redhat.com, users users@ovirt.org
 Sent: Friday, February 21, 2014 10:33:06 PM
 Subject: Re: [Users] Problems with Scientific Linux and ovirt-release-11.0.0

 Hi,
 I have been using oVirt on Scientific Linux for one year and everything works
 well.

 Regards.

 2014-02-21 19:10 GMT+01:00 Jimmy Dorff  jdo...@phy.duke.edu  :

  Hi Sandro,
 

  Dave Neary's comment is good. Here is a new patch:
 

   *** a/ovirt-release.spec 2014-02-21 10:10:00.0 -0500
   --- b/ovirt-release.spec 2014-02-21 13:01:35.856636466 -0500
   ***************
   *** 69,75 ****
     #Fedora is good for both Fedora and Generic (and probably other based on Fedora)
     #Handling EL exception only (for now)
   ! if grep -qFi 'CentOS' /etc/system-release; then
   ! DIST=EL
   ! elif grep -qFi 'Red Hat' /etc/system-release; then
     DIST=EL
     fi
   --- 69,73 ----
     #Fedora is good for both Fedora and Generic (and probably other based on Fedora)
     #Handling EL exception only (for now)
   ! if rpm --eval %dist | grep -qFi 'el'; then
     DIST=EL
     fi
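   For context, %dist is the macro the patched check keys on; a quick way to
   see what it expands to on a given box (the outputs shown are the typical
   values and may differ if the macro isn't defined on a minimal install):

   $ rpm --eval '%dist'
   .el6     (on CentOS / RHEL / Scientific Linux 6)
   .fc20    (on Fedora 20)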
 

   Might be faster for you to submit it since I'm not familiar with gerrit, but I
   can log in with my Fedora FAS account.
 

  Cheers,
 
  Jimmy
 

  On 2/21/14, 8:17 AM, Sandro Bonazzola wrote:
 

   On 21/02/2014 16:25, Jimmy Dorff wrote:
    On 2/21/14, 2:31 AM, Sandro Bonazzola wrote:
     On 21/02/2014 07:34, Meital Bourvine wrote:
      Hi Jimmy,

      As far as I know, scientific linux isn't supported by ovirt.

     IIUC it's based on CentOS / RHEL so it may work.
     Let us know if you've issues :-)

      But you can always try submitting a patch ;)

    SL works fine with ovirt. If you want to support it, here is a patch.

   http://gerrit.ovirt.org/24869
   If you've an account on gerrit you can review / verify it.

    *** a/ovirt-release.spec 2014-02-21 10:10:00.0 -0500
    --- b/ovirt-release.spec 2014-02-21 10:10:55.0 -0500
    ***************
    *** 73,76 ****
    --- 73,78 ----
      elif grep -qFi 'Red Hat' /etc/system-release; then
      DIST=EL
    + elif grep -qFi 'Scientific Linux' /etc/system-release; then
    + DIST=EL
      fi

    If you don't support Scientific Linux, then I would recommend not
    defaulting the DIST to Fedora and instead searching for the specific
    supported releases and error out otherwise.

    Future-wise, Scientific Linux *may* become a CentOS variant in Red Hat's
    CentOS.

    Cheers,
    Jimmy

  ___
 
  Users mailing list
 
  Users@ovirt.org
 
  http://lists.ovirt.org/mailman/listinfo/users
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Nodes lose storage at random

2014-02-23 Thread Nir Soffer
- Original Message -
 From: Johan Kooijman m...@johankooijman.com
 To: Nir Soffer nsof...@redhat.com
 Cc: users users@ovirt.org
 Sent: Sunday, February 23, 2014 3:48:25 AM
 Subject: Re: [Users] Nodes lose storage at random
 
 Been reinstalling to stock CentOS 6.5 last night, all successful. Until
 roughly midnight GMT, 2 out of 4 hosts were showing the same errors.
 
 Any more suggestions?

Let's see the logs from these hosts?
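A hedged sketch of what would be most useful to attach (default locations,
engine log included for completeness):

# on each affected host
/var/log/vdsm/vdsm.log
/var/log/sanlock.log
/var/log/messages
# on the engine machine
/var/log/ovirt-engine/engine.log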
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Cancel Disk Migration

2014-02-23 Thread Maurice James
Thanks

 

From: Meital Bourvine [mailto:mbour...@redhat.com] 
Sent: Sunday, February 23, 2014 4:02 AM
To: Maurice James
Cc: users@ovirt.org
Subject: Re: [Users] Cancel Disk Migration

 

You can try restarting the services and see if that helps.

 

 

  _  

From: Maurice James midnightst...@msn.com 
To: users@ovirt.org 
Sent: Friday, February 21, 2014 7:38:07 PM
Subject: [Users] Cancel Disk Migration

 

How do I cancel a live disk migration? I have a 13GB disk that I was trying to 
move between storage domains and it looks like it's hung up


___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users

 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] Stack trace caused by FreeBSD client

2014-02-23 Thread Johan Kooijman
Hi all,

Interesting thing I found out this afternoon. I have a FreeBSD 10 guest
with virtio drivers, both disk and net.

The VM works fine, but when I connect over SSH to the VM, I see this stack
trace in messages on the node:

Feb 23 19:19:42 hv3 kernel: [ cut here ]
Feb 23 19:19:42 hv3 kernel: WARNING: at net/core/dev.c:1907
skb_warn_bad_offload+0xc2/0xf0() (Tainted: GW  ---   )
Feb 23 19:19:42 hv3 kernel: Hardware name: X9DR3-F
Feb 23 19:19:42 hv3 kernel: igb: caps=(0x12114bb3, 0x0) len=5686
data_len=5620 ip_summed=0
Feb 23 19:19:42 hv3 kernel: Modules linked in: ebt_arp nfs lockd fscache
auth_rpcgss nfs_acl sunrpc bonding 8021q garp ebtable_nat ebtables bridge
stp llc xt_physdev ipt_REJECT nf_conntrack_ipv4 nf_defrag_ipv4 xt_multiport
iptable_filter ip_tables ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ipv6
xt_state nf_conntrack ip6table_filter ip6_tables ipv6 dm_round_robin
dm_multipath vhost_net macvtap macvlan tun kvm_intel kvm iTCO_wdt
iTCO_vendor_support sg ixgbe mdio sb_edac edac_core lpc_ich mfd_core
i2c_i801 ioatdma igb dca i2c_algo_bit i2c_core ptp pps_core ext4 jbd2
mbcache sd_mod crc_t10dif 3w_sas ahci isci libsas scsi_transport_sas
dm_mirror dm_region_hash dm_log dm_mod [last unloaded: scsi_wait_scan]
Feb 23 19:19:42 hv3 kernel: Pid: 15280, comm: vhost-15276 Tainted: G
 W  ---2.6.32-431.5.1.el6.x86_64 #1
Feb 23 19:19:42 hv3 kernel: Call Trace:
Feb 23 19:19:42 hv3 kernel: IRQ  [81071e27] ?
warn_slowpath_common+0x87/0xc0
Feb 23 19:19:42 hv3 kernel: [81071f16] ?
warn_slowpath_fmt+0x46/0x50
Feb 23 19:19:42 hv3 kernel: [a016c862] ?
igb_get_drvinfo+0x82/0xe0 [igb]
Feb 23 19:19:42 hv3 kernel: [8145b1d2] ?
skb_warn_bad_offload+0xc2/0xf0
Feb 23 19:19:42 hv3 kernel: [814602c1] ?
__skb_gso_segment+0x71/0xc0
Feb 23 19:19:42 hv3 kernel: [81460323] ? skb_gso_segment+0x13/0x20
Feb 23 19:19:42 hv3 kernel: [814603cb] ?
dev_hard_start_xmit+0x9b/0x480
Feb 23 19:19:42 hv3 kernel: [8147bf5a] ?
sch_direct_xmit+0x15a/0x1c0
Feb 23 19:19:42 hv3 kernel: [81460a58] ?
dev_queue_xmit+0x228/0x320
Feb 23 19:19:42 hv3 kernel: [a035a898] ?
br_dev_queue_push_xmit+0x88/0xc0 [bridge]
Feb 23 19:19:42 hv3 kernel: [a035a928] ?
br_forward_finish+0x58/0x60 [bridge]
Feb 23 19:19:42 hv3 kernel: [a035a9da] ? __br_forward+0xaa/0xd0
[bridge]
Feb 23 19:19:42 hv3 kernel: [814897b6] ? nf_hook_slow+0x76/0x120
Feb 23 19:19:42 hv3 kernel: [a035aa5d] ? br_forward+0x5d/0x70
[bridge]
Feb 23 19:19:42 hv3 kernel: [a035ba6b] ?
br_handle_frame_finish+0x17b/0x2a0 [bridge]
Feb 23 19:19:42 hv3 kernel: [a035bd3a] ?
br_handle_frame+0x1aa/0x250 [bridge]
Feb 23 19:19:42 hv3 kernel: [8145b7c9] ?
__netif_receive_skb+0x529/0x750
Feb 23 19:19:42 hv3 kernel: [8145ba8a] ?
process_backlog+0x9a/0x100
Feb 23 19:19:42 hv3 kernel: [81460d43] ? net_rx_action+0x103/0x2f0
Feb 23 19:19:42 hv3 kernel: [8107a8e1] ? __do_softirq+0xc1/0x1e0
Feb 23 19:19:42 hv3 kernel: [8100c30c] ? call_softirq+0x1c/0x30
Feb 23 19:19:42 hv3 kernel: EOI  [8100fa75] ?
do_softirq+0x65/0xa0
Feb 23 19:19:42 hv3 kernel: [814611c8] ? netif_rx_ni+0x28/0x30
Feb 23 19:19:42 hv3 kernel: [a01a0749] ? tun_sendmsg+0x229/0x4ec
[tun]
Feb 23 19:19:42 hv3 kernel: [a027bcf5] ? handle_tx+0x275/0x5e0
[vhost_net]
Feb 23 19:19:42 hv3 kernel: [a027c095] ? handle_tx_kick+0x15/0x20
[vhost_net]
Feb 23 19:19:42 hv3 kernel: [a027955c] ? vhost_worker+0xbc/0x140
[vhost_net]
Feb 23 19:19:42 hv3 kernel: [a02794a0] ? vhost_worker+0x0/0x140
[vhost_net]
Feb 23 19:19:42 hv3 kernel: [8109aee6] ? kthread+0x96/0xa0
Feb 23 19:19:42 hv3 kernel: [8100c20a] ? child_rip+0xa/0x20
Feb 23 19:19:42 hv3 kernel: [8109ae50] ? kthread+0x0/0xa0
Feb 23 19:19:42 hv3 kernel: [8100c200] ? child_rip+0x0/0x20
Feb 23 19:19:42 hv3 kernel: ---[ end trace e93142595d6ecfc7 ]---

This is 100% reproducible, every time. The login itself works just fine.
Some more info:

[root@hv3 ~]# uname -a
Linux hv3.ovirt.gs.cloud.lan 2.6.32-431.5.1.el6.x86_64 #1 SMP Wed Feb 12
00:41:43 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
[root@hv3 ~]# rpm -qa | grep vdsm
vdsm-4.13.3-3.el6.x86_64
vdsm-xmlrpc-4.13.3-3.el6.noarch
vdsm-python-4.13.3-3.el6.x86_64
vdsm-cli-4.13.3-3.el6.noarch

-- 
Met vriendelijke groeten / With kind regards,
Johan Kooijman

E m...@johankooijman.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Opinions needed: 3 node gluster replica 3 | NFS async | snapshots for consistency

2014-02-23 Thread Steve Dainard
On Sun, Feb 23, 2014 at 4:27 AM, Ayal Baron aba...@redhat.com wrote:



 - Original Message -
  I'm looking for some opinions on this configuration in an effort to
 increase
  write performance:
 
  3 storage nodes using glusterfs in replica 3, quorum.

 gluster doesn't support replica 3 yet, so I'm not sure how heavily I'd
 rely on this.


Glusterfs or RHSS doesn't support rep 3? How could I create a quorum
without 3+ hosts?



  Ovirt storage domain via NFS

 why NFS and not gluster?


Gluster via posix SD doesn't have any performance gains over NFS, maybe the
opposite.

Gluster 'native' SD's are broken on EL6.5 so I have been unable to test
performance. I have heard performance can be upwards of 3x NFS for raw
write.

Gluster doesn't have an async write option, so it's doubtful it will ever be
close to NFS async speeds.



  Volume set nfs.trusted-sync on
  On Ovirt, taking snapshots often enough to recover from a storage crash

 Note that this would have negative write performance impact


The difference between NFS sync (50MB/s) and async (300MB/s on 10g) write
speeds should more than compensate for the performance hit of taking
snapshots more often. And that's just raw speed. If we take into
consideration IOPS (guest small writes) async is leaps and bounds ahead.
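For reference, raw sequential write figures like the above are typically
measured with something along these lines - a sketch, not necessarily how the
numbers quoted here were produced, and the mount point is illustrative:

dd if=/dev/zero of=/mnt/nfs-test/ddfile bs=1M count=4096 conv=fdatasync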


If we assume the site has backup UPS and generator power and we can build a
highly available storage system with 3 nodes in quorum, are there any
potential issues other than a write performance hit?

The issue I thought might be most prevalent is if an ovirt host goes down
and the VM's are automatically brought back up on another host, they could
incur disk corruption and need to be brought back down and restored to the
last snapshot state. This basically means the HA feature should be disabled.

Even worse, if the gluster node with CTDB NFS IP goes down, it may not have
written out and replicated to its peers.  -- I think I may have just
answered my own question.


Thanks,
Steve
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Stack trace caused by FreeBSD client

2014-02-23 Thread Nir Soffer
- Original Message -
 From: Johan Kooijman m...@johankooijman.com
 To: users users@ovirt.org
 Sent: Sunday, February 23, 2014 8:22:41 PM
 Subject: [Users] Stack trace caused by FreeBSD client
 
 Hi all,
 
 Interesting thing I found out this afternoon. I have a FreeBSD 10 guest with
 virtio drivers, both disk and net.
 
 The VM works fine, but when I connect over SSH to the VM, I see this stack
 trace in messages on the node:

This warning may be interesting to qemu/kvm/kernel developers, ccing Ronen.

 
 Feb 23 19:19:42 hv3 kernel: [ cut here ]
 Feb 23 19:19:42 hv3 kernel: WARNING: at net/core/dev.c:1907
 skb_warn_bad_offload+0xc2/0xf0() (Tainted: G W --- )
 Feb 23 19:19:42 hv3 kernel: Hardware name: X9DR3-F
 Feb 23 19:19:42 hv3 kernel: igb: caps=(0x12114bb3, 0x0) len=5686
 data_len=5620 ip_summed=0
 Feb 23 19:19:42 hv3 kernel: Modules linked in: ebt_arp nfs lockd fscache
 auth_rpcgss nfs_acl sunrpc bonding 8021q garp ebtable_nat ebtables bridge
 stp llc xt_physdev ipt_REJECT nf_conntrack_ipv4 nf_defrag_ipv4 xt_multiport
 iptable_filter ip_tables ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ipv6
 xt_state nf_conntrack ip6table_filter ip6_tables ipv6 dm_round_robin
 dm_multipath vhost_net macvtap macvlan tun kvm_intel kvm iTCO_wdt
 iTCO_vendor_support sg ixgbe mdio sb_edac edac_core lpc_ich mfd_core
 i2c_i801 ioatdma igb dca i2c_algo_bit i2c_core ptp pps_core ext4 jbd2
 mbcache sd_mod crc_t10dif 3w_sas ahci isci libsas scsi_transport_sas
 dm_mirror dm_region_hash dm_log dm_mod [last unloaded: scsi_wait_scan]
 Feb 23 19:19:42 hv3 kernel: Pid: 15280, comm: vhost-15276 Tainted: G W
 --- 2.6.32-431.5.1.el6.x86_64 #1
 Feb 23 19:19:42 hv3 kernel: Call Trace:
 Feb 23 19:19:42 hv3 kernel: IRQ [81071e27] ?
 warn_slowpath_common+0x87/0xc0
 Feb 23 19:19:42 hv3 kernel: [81071f16] ?
 warn_slowpath_fmt+0x46/0x50
 Feb 23 19:19:42 hv3 kernel: [a016c862] ? igb_get_drvinfo+0x82/0xe0
 [igb]
 Feb 23 19:19:42 hv3 kernel: [8145b1d2] ?
 skb_warn_bad_offload+0xc2/0xf0
 Feb 23 19:19:42 hv3 kernel: [814602c1] ?
 __skb_gso_segment+0x71/0xc0
 Feb 23 19:19:42 hv3 kernel: [81460323] ? skb_gso_segment+0x13/0x20
 Feb 23 19:19:42 hv3 kernel: [814603cb] ?
 dev_hard_start_xmit+0x9b/0x480
 Feb 23 19:19:42 hv3 kernel: [8147bf5a] ?
 sch_direct_xmit+0x15a/0x1c0
 Feb 23 19:19:42 hv3 kernel: [81460a58] ? dev_queue_xmit+0x228/0x320
 Feb 23 19:19:42 hv3 kernel: [a035a898] ?
 br_dev_queue_push_xmit+0x88/0xc0 [bridge]
 Feb 23 19:19:42 hv3 kernel: [a035a928] ?
 br_forward_finish+0x58/0x60 [bridge]
 Feb 23 19:19:42 hv3 kernel: [a035a9da] ? __br_forward+0xaa/0xd0
 [bridge]
 Feb 23 19:19:42 hv3 kernel: [814897b6] ? nf_hook_slow+0x76/0x120
 Feb 23 19:19:42 hv3 kernel: [a035aa5d] ? br_forward+0x5d/0x70
 [bridge]
 Feb 23 19:19:42 hv3 kernel: [a035ba6b] ?
 br_handle_frame_finish+0x17b/0x2a0 [bridge]
 Feb 23 19:19:42 hv3 kernel: [a035bd3a] ?
 br_handle_frame+0x1aa/0x250 [bridge]
 Feb 23 19:19:42 hv3 kernel: [8145b7c9] ?
 __netif_receive_skb+0x529/0x750
 Feb 23 19:19:42 hv3 kernel: [8145ba8a] ? process_backlog+0x9a/0x100
 Feb 23 19:19:42 hv3 kernel: [81460d43] ? net_rx_action+0x103/0x2f0
 Feb 23 19:19:42 hv3 kernel: [8107a8e1] ? __do_softirq+0xc1/0x1e0
 Feb 23 19:19:42 hv3 kernel: [8100c30c] ? call_softirq+0x1c/0x30
 Feb 23 19:19:42 hv3 kernel: EOI [8100fa75] ? do_softirq+0x65/0xa0
 Feb 23 19:19:42 hv3 kernel: [814611c8] ? netif_rx_ni+0x28/0x30
 Feb 23 19:19:42 hv3 kernel: [a01a0749] ? tun_sendmsg+0x229/0x4ec
 [tun]
 Feb 23 19:19:42 hv3 kernel: [a027bcf5] ? handle_tx+0x275/0x5e0
 [vhost_net]
 Feb 23 19:19:42 hv3 kernel: [a027c095] ? handle_tx_kick+0x15/0x20
 [vhost_net]
 Feb 23 19:19:42 hv3 kernel: [a027955c] ? vhost_worker+0xbc/0x140
 [vhost_net]
 Feb 23 19:19:42 hv3 kernel: [a02794a0] ? vhost_worker+0x0/0x140
 [vhost_net]
 Feb 23 19:19:42 hv3 kernel: [8109aee6] ? kthread+0x96/0xa0
 Feb 23 19:19:42 hv3 kernel: [8100c20a] ? child_rip+0xa/0x20
 Feb 23 19:19:42 hv3 kernel: [8109ae50] ? kthread+0x0/0xa0
 Feb 23 19:19:42 hv3 kernel: [8100c200] ? child_rip+0x0/0x20
 Feb 23 19:19:42 hv3 kernel: ---[ end trace e93142595d6ecfc7 ]---
 
 This is 100% reproducible, every time. The login itself works just fine. Some
 more info:
 
 [root@hv3 ~]# uname -a
 Linux hv3.ovirt.gs.cloud.lan 2.6.32-431.5.1.el6.x86_64 #1 SMP Wed Feb 12
 00:41:43 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
 [root@hv3 ~]# rpm -qa | grep vdsm
 vdsm-4.13.3-3.el6.x86_64
 vdsm-xmlrpc-4.13.3-3.el6.noarch
 vdsm-python-4.13.3-3.el6.x86_64
 vdsm-cli-4.13.3-3.el6.noarch
 
 --
 Met vriendelijke groeten / With kind regards,
 Johan Kooijman
 
 E m...@johankooijman.com
 
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 

Re: [Users] Opinions needed: 3 node gluster replica 3 | NFS async | snapshots for consistency

2014-02-23 Thread Ayal Baron


- Original Message -
 On Sun, Feb 23, 2014 at 4:27 AM, Ayal Baron aba...@redhat.com wrote:
 
 
 
  - Original Message -
   I'm looking for some opinions on this configuration in an effort to
  increase
   write performance:
  
   3 storage nodes using glusterfs in replica 3, quorum.
 
  gluster doesn't support replica 3 yet, so I'm not sure how heavily I'd
  rely on this.
 
 
 Glusterfs or RHSS doesn't support rep 3? How could I create a quorum
 without 3+ hosts?

glusterfs has the capability but it hasn't been widely tested with oVirt yet 
and we've already found a couple of issues there.
afaiu gluster has the ability to define a tie breaker (a third node which is 
part of the quorum but does not provide a third replica of the data).

 
 
 
   Ovirt storage domain via NFS
 
  why NFS and not gluster?
 
 
 Gluster via posix SD doesn't have any performance gains over NFS, maybe the
 opposite.

gluster via posix mounts it using the gluster fuse client, which should 
provide better performance + availability than NFS.

 
 Gluster 'native' SD's are broken on EL6.5 so I have been unable to test
 performance. I have heard performance can be upwards of 3x NFS for raw
 write.

Broken how?

 
 Gluster doesn't have an async write option, so it's doubtful it will ever be
 close to NFS async speeds.
 
 
 
   Volume set nfs.trusted-sync on
   On Ovirt, taking snapshots often enough to recover from a storage crash
 
  Note that this would have negative write performance impact
 
 
 The difference between NFS sync (50MB/s) and async (300MB/s on 10g) write
 speeds should more than compensate for the performance hit of taking
 snapshots more often. And that's just raw speed. If we take into
 consideration IOPS (guest small writes) async is leaps and bounds ahead.

I would test this, since qemu is already doing async I/O (using threads when 
native AIO is not supported) and oVirt runs it with cache=none (direct I/O), so 
sync ops should not happen that often (depends on the guest).  You may still be 
enjoying a performance boost, but I've seen UPS systems fail before, bringing down 
multiple nodes at once.
In addition, if you do not guarantee your data is safe when you create a 
snapshot (and it doesn't seem like you are) then I see no reason to think your 
snapshots are any better off than latest state on disk.

 
 
 If we assume the site has backup UPS and generator power and we can build a
 highly available storage system with 3 nodes in quorum, are there any
 potential issues other than a write performance hit?
 
 The issue I thought might be most prevalent is if an ovirt host goes down
 and the VM's are automatically brought back up on another host, they could
 incur disk corruption and need to be brought back down and restored to the
 last snapshot state. This basically means the HA feature should be disabled.

I'm not sure I understand what your concern is here, what would cause the data 
corruption? if your node crashed then there is no I/O in flight.  So starting 
up the VM should be perfectly safe.

 
 Even worse, if the gluster node with CTDB NFS IP goes down, it may not have
 written out and replicated to its peers.  -- I think I may have just
 answered my own question.

If 'trusted-sync' means that the CTDB NFS node acks the I/O before it reached 
quorum then I'd say that's a gluster bug.  It should ack the I/O before data 
hits the disc, but it should not ack it before it has quorum.
However, the configuration we feel comfortable using gluster with is both 
server and client quorum enabled (gluster has 2 different configs and you need to 
configure both to work safely).
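For reference, the two quorum settings referred to above - a minimal sketch,
with the volume name "vmstore" as a placeholder; check the gluster
documentation for your version before applying:

gluster volume set vmstore cluster.server-quorum-type server
gluster volume set vmstore cluster.quorum-type auto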


 
 
 Thanks,
 Steve
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Permissions

2014-02-23 Thread Eli Mesika


- Original Message -
 From: Maurice James midnightst...@msn.com
 To: users@ovirt.org
 Sent: Friday, February 21, 2014 9:25:12 PM
 Subject: [Users] Permissions
 
 I have an LDAP user with Power User and Super User permissions at the Data
 Center level. Why don't I have permission to migrate disks between storage
 domains?

Hi Maurice 

Can you elaborate please and attach a screen-shot of the error you got and the 
relevant engine.log 

 
 oVirt Engine Version: 3.3.3-2.el6
 
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Opinions needed: 3 node gluster replica 3 | NFS async | snapshots for consistency

2014-02-23 Thread Steve Dainard
On Sun, Feb 23, 2014 at 3:20 PM, Ayal Baron aba...@redhat.com wrote:



 - Original Message -
  On Sun, Feb 23, 2014 at 4:27 AM, Ayal Baron aba...@redhat.com wrote:
 
  
  
   - Original Message -
I'm looking for some opinions on this configuration in an effort to
   increase
write performance:
   
3 storage nodes using glusterfs in replica 3, quorum.
  
   gluster doesn't support replica 3 yet, so I'm not sure how heavily I'd
   rely on this.
  
 
  Glusterfs or RHSS doesn't support rep 3? How could I create a quorum
  without 3+ hosts?

 glusterfs has the capability but it hasn't been widely tested with oVirt
 yet and we've already found a couple of issues there.
 afaiu gluster has the ability to define a tie breaker (a third node which
 is part of the quorum but does not provide a third replica of the data).


Good to know, I'll dig into this.



 
 
  
Ovirt storage domain via NFS
  
   why NFS and not gluster?
  
 
  Gluster via posix SD doesn't have any performance gains over NFS, maybe
 the
  opposite.

 gluster via posix is mounting it using the gluster fuse client which
 should provide better performance + availability than NFS.


Availability for sure, but performance is seriously questionable. I've run
in both scenarios and haven't seen a performance improvement; the general
consensus seems to be that FUSE adds overhead and therefore decreases
performance vs. NFS.



 
  Gluster 'native' SD's are broken on EL6.5 so I have been unable to test
  performance. I have heard performance can be upwards of 3x NFS for raw
  write.

 Broken how?


Ongoing issues: libgfapi support wasn't available, and then it was disabled
because snapshot support wasn't built into the kvm packages, which was a
dependency. There are a few threads in reference to this, and some effort
to get CentOS builds to enable snapshot support in kvm.

I have installed rebuilt qemu packages with the RHEV snapshot flag enabled,
and was just able to create a native gluster SD; maybe I missed something
during a previous attempt. I'll test performance and see if it's close to
what I'm looking for.



 
  Gluster doesn't have an async write option, so its doubtful it will ever
 be
  close to NFS async speeds.t
 
 
  
Volume set nfs.trusted-sync on
On Ovirt, taking snapshots often enough to recover from a storage
 crash
  
   Note that this would have negative write performance impact
  
 
  The difference between NFS sync (50MB/s) and async (300MB/s on 10g)
 write
  speeds should more than compensate for the performance hit of taking
  snapshots more often. And that's just raw speed. If we take into
  consideration IOPS (guest small writes) async is leaps and bounds ahead.

  I would test this, since qemu is already doing async I/O (using threads
  when native AIO is not supported) and oVirt runs it with cache=none (direct
  I/O), so sync ops should not happen that often (depends on the guest).  You may
  still be enjoying a performance boost, but I've seen UPS systems fail before,
  bringing down multiple nodes at once.
 In addition, if you do not guarantee your data is safe when you create a
 snapshot (and it doesn't seem like you are) then I see no reason to think
 your snapshots are any better off than latest state on disk.


My logic here was that if a snapshot is run, then the disk and system state
should be consistent at the time of the snapshot once it's been written to storage.
If the host failed during the snapshot then the snapshot would be incomplete,
and the last complete snapshot would need to be used for recovery.



 
 
  If we assume the site has backup UPS and generator power and we can
 build a
  highly available storage system with 3 nodes in quorum, are there any
  potential issues other than a write performance hit?
 
  The issue I thought might be most prevalent is if an ovirt host goes down
  and the VM's are automatically brought back up on another host, they
 could
  incur disk corruption and need to be brought back down and restored to
 the
  last snapshot state. This basically means the HA feature should be
 disabled.

 I'm not sure I understand what your concern is here, what would cause the
 data corruption? if your node crashed then there is no I/O in flight.  So
 starting up the VM should be perfectly safe.


Good point, that makes sense.



 
  Even worse, if the gluster node with CTDB NFS IP goes down, it may not
 have
  written out and replicated to its peers.  -- I think I may have just
  answered my own question.

 If 'trusted-sync' means that the CTDB NFS node acks the I/O before it
 reached quorum then I'd say that's a gluster bug.


http://gluster.org/community/documentation/index.php/Gluster_3.2:_Setting_Volume_Options#nfs.trusted-sync
specifically mentions that data won't be guaranteed to be on disk, but doesn't
mention whether data would be replicated in memory between gluster nodes.
Technically async breaks the NFS protocol standard anyway, but this seems
like a question for the gluster guys, I'll 

Re: [Users] Problems with Scientific Linux and ovirt-release-11.0.0

2014-02-23 Thread Sandro Bonazzola
On 21/02/2014 19:10, Jimmy Dorff wrote:
 Hi Sandro,
 
 Dave Neary's comment is good. Here is a new patch:
 
 *** a/ovirt-release.spec 2014-02-21 10:10:00.0 -0500
 --- b/ovirt-release.spec 2014-02-21 13:01:35.856636466 -0500
 ***************
 *** 69,75 ****
   #Fedora is good for both Fedora and Generic (and probably other based on Fedora)
   #Handling EL exception only (for now)
 ! if grep -qFi 'CentOS' /etc/system-release; then
 ! DIST=EL
 ! elif grep -qFi 'Red Hat' /etc/system-release; then
   DIST=EL
   fi
 --- 69,73 ----
   #Fedora is good for both Fedora and Generic (and probably other based on Fedora)
   #Handling EL exception only (for now)
 ! if rpm --eval %dist | grep -qFi 'el'; then

Missing escape: %%dist
Pushed a new patchset, please review it on gerrit: http://gerrit.ovirt.org/24869
and verify it on Scientific Linux:
 
http://jenkins.ovirt.org/job/ovirt-release_gerrit/30/artifact/exported-artifacts/ovirt-release-11.0.1-1.noarch.rpm

   DIST=EL
   fi
 
 Might be faster for you to submit it since I'm not familiar with gerrit, but I 
 can log in with my Fedora FAS account.
 
 Cheers,
 Jimmy
 
 
 
 On 2/21/14, 8:17 AM, Sandro Bonazzola wrote:
 On 21/02/2014 16:25, Jimmy Dorff wrote:
 On 2/21/14, 2:31 AM, Sandro Bonazzola wrote:
 On 21/02/2014 07:34, Meital Bourvine wrote:
 Hi Jimmy,

 As far as I know, scientific linux isn't supported by ovirt.

 IIUC it's based on CentOS / RHEL so it may work.
 Let us know if you've issues :-)



 But you can always try submitting a patch ;)


 SL works fine with ovirt. If you want to support it, here is a patch.

 http://gerrit.ovirt.org/24869
 If you've an account on gerrit you can review / verify it.


 *** a/ovirt-release.spec 2014-02-21 10:10:00.0 -0500
 --- b/ovirt-release.spec 2014-02-21 10:10:55.0 -0500
 ***************
 *** 73,76 ****
 --- 73,78 ----
elif grep -qFi 'Red Hat' /etc/system-release; then
DIST=EL
 + elif grep -qFi 'Scientific Linux' /etc/system-release; then
 + DIST=EL
fi


 If you don't support Scientific Linux, then I would recommend not 
 defaulting the DIST to Fedora and instead searching for the specific 
 supported
 releases and error out otherwise.

 Future-wise, Scientific Linux *may* become a CentOS variant in Red Hat's 
 CentOS.

 Cheers,
 Jimmy



 


-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users