[Users] repos for ovirt.org are in bad shape this morning

2014-02-22 Thread Ricky Schneberger

http://resources.ovirt.org/releases/3.3/ stopped responding in a timely
fashion earlier this morning.

Is there someone who can push things back on track again?


Regards //Ricky ...stuck in the middle of an upgrade





Re: [Users] API read-only access / roles

2014-02-22 Thread Juan Hernandez
On 02/20/2014 04:51 PM, Itamar Heim wrote:
 On 02/20/2014 05:24 PM, Sven Kieske wrote:
 Hi,

 is nobody interested in this feature at all?
 it would be a huge security gain, while lowering
 the bar for having a read-only user, if this could get shipped with 3.4:
 
 we are very interested, but we want to do this based on the
 authentication re-factoring, which itself barely made the 3.4 timeline.
 Yair - are we pluggable yet, so that someone could add such a user by
 dropping a jar somewhere, or is this still ongoing work towards 3.5?
 

Pluggability of authentication already works in 3.4. By default it uses
the previous mechanism, but the administrator can change this. In order
to change it you need to create the /etc/ovirt-engine/auth.conf.d directory
and then create inside it one or more authentication profile
configuration files. An authentication profile is a combination of an
authenticator and a directory. The authenticator is used to check
the credentials (the user name and password) and the directory is used
to search for users and their details. For example, if you want to use local
authentication (the users, passwords, and groups of the OS) you can
create a local.conf file with the following content:

  #
  # The name of the profile. This is what will be displayed in the
  # combo box in the login page.
  #
  name=local

  #
  # Needed to enable the profile, by default all profiles are
  # disabled.
  #
  enabled=true

  #
  # The configuration of the authenticator used by the profile. The
  # type and the module are mandatory, the rest are optional and
  # the default values are as shown below.
  #
  authenticator.type=ssh
  authenticator.module=org.ovirt.engine.core.authentication.ssh
  # authenticator.host=localhost
  # authenticator.port=22
  # authenticator.timeout=10

  #
  # The configuration of the directory:
  #
  directory.type=nss
  directory.module=org.ovirt.engine.core.authentication.nss

For this to work you need to install some additional modules, which
aren't currently part of the engine. This is where pluggability comes
into play. These modules can be built externally. I created modules for SSH
authentication and an NSS (Name Service Switch) directory. The source is
available here:

https://github.com/jhernand/ovirt-engine-ssh-authenticator
https://github.com/jhernand/ovirt-engine-nss-directory

The NSS directory also needs JNA (Java Native Access):

https://github.com/jhernand/ovirt-engine-jna-module

Installing these extensions is very easy: just build from source and
uncompress the generated .zip files to /usr/share/ovirt-engine/modules.
In case you don't want to build from source you can use the RPMs that I
created. The source for the .spec files is here:

https://github.com/jhernand/ovirt-engine-rpms

If you don't want to build from source you can use a yum repository that
I created with binaries for Fedora 20 (should work in CentOS as well):

http://jhernand.fedorapeople.org/repo

So, to summarize:

# cat > /etc/yum.repos.d/my.repo <<.
[my]
name=my
baseurl=http://jhernand.fedorapeople.org/repo
enabled=1
gpgcheck=0
.

# yum -y install \
ovirt-engine-ssh-authenticator \
ovirt-engine-nss-directory

# mkdir -p /etc/ovirt-engine/auth.conf.d

# cat > /etc/ovirt-engine/auth.conf.d/local.conf <<.
name=local
enabled=true
authenticator.type=ssh
authenticator.module=org.ovirt.engine.core.authentication.ssh
directory.type=nss
directory.module=org.ovirt.engine.core.authentication.nss
.

# systemctl restart ovirt-engine

Then you can log in with admin@internal, add some local users and
permissions, and then use them to log in to the GUI or the API.
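
A quick way to smoke-test a newly added user is to hit the API with the
profile name as the domain part of the login; a minimal sketch, assuming an
OS user jdoe exists, the profile is named local as above, and the engine
host name is a placeholder:

  # Should return the API entry point XML if authentication works.
  curl -k -u 'jdoe@local:secret' https://engine.example.com/api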

Take into account that I created these modules as a way to test the new
authentication infrastructure, so they may have limitations or issues. I
appreciate any feedback.


 On 19.02.2014 15:32, Sven Kieske wrote: I just looked into my test vm
 with the 3.4 beta
 and I can't see such a user there.

 I created an RFE at: https://bugzilla.redhat.com/show_bug.cgi?id=1067036


 I really hope this can get included in 3.4 (I know it's late)
 as it should be a very, very minor change in engine-setup.

 Thanks

 On 19.02.2014 14:55, Sven Kieske wrote:
 Hi,

 reiterating this somewhat old mail:

 Is there a read-only user integrated in 3.4?

 Because it's a huge overhead to install e.g. a FreeIPA server
 somewhere just to get read-only access.

 On 21.11.2013 09:52, Sander Grendelman wrote:
 Hi Doron,

 The user I've defined in [1] works for me.
 A built-in login-/read-only role would be nice,
 but it's quite easy to define a custom role, so
 it's more of a nice-to-have than a must-have.

 Thanks for asking!

 Sander.

 On Wed, Nov 20, 2013 at 5:40 PM, Doron Fediuck dfedi...@redhat.com
 wrote:
 Hi Sander,
 We're closing the ovirt 3.4 scope, and wondering if you're handling
 Zabbix based on [1].
 If so please let me know and I'll update the 3.4 features list.

 Thanks,
 Doron

 [1] http://lists.ovirt.org/pipermail/users/2013-November/017946.html


 
 

Re: [Users] Yum Installation Problems (ovirt 3.3, CentOS 6.5)

2014-02-22 Thread Jon Forrest



On 2/21/2014 2:54 AM, Sandro Bonazzola wrote:

On 18/02/2014 22:32, Jon Forrest wrote:

I have a brand new CentOS 6.5 x86_64 installation with all the updates
as of today installed.
I want to create a test All-In-One node, so I follow the documentation and run

  yum install epel-release-6-8.noarch.rpm
  yum localinstall http://ovirt.org/releases/ovirt-release-el.noarch.rpm

These work fine. I then run

yum install ovirt-engine-setup-plugin-allinone -y


Can you please post the output of
yum repolist enabled?


Just in case you didn't notice, I recently posted a message
saying that this problem is due to me fooling around with
my yum settings. For reasons I still don't understand,
having my [base] repo point to a local mirror causes the
problem I described, but pointing to the standard mirror
list makes the problem go away. I'm trying to figure out
why this is happening.
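
For comparison, the stock CentOS 6 [base] stanza points at the mirrorlist
rather than a fixed baseurl; roughly like this (reproduced from memory, so
treat the exact URLs as approximate):

  [base]
  name=CentOS-$releasever - Base
  mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os
  #baseurl=http://mirror.centos.org/centos/$releasever/os/$basearch/
  gpgcheck=1
  gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6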

Jon



[Users] Cannot delete storage connection

2014-02-22 Thread sirin
Hi all,

I have the following connections:

<storage_connections>
  <storage_connection
      href="/api/storageconnections/d94d9341-6116-4b1a-94c5-5c3327aa1b09"
      id="d94d9341-6116-4b1a-94c5-5c3327aa1b09">
    <address>192.168.0.171</address>
    <type>nfs</type>
    <path>/srv/lstore/vm</path>
  </storage_connection>

  <storage_connection
      href="/api/storageconnections/67539ba5-9b6d-46df-8c96-4acd3f212f4a"
      id="67539ba5-9b6d-46df-8c96-4acd3f212f4a">
    <address>rhevm.cebra.lab</address>
    <type>nfs</type>
    <path>/var/lib/exports/iso</path>
  </storage_connection>

  <storage_connection
      href="/api/storageconnections/fdc92419-b278-4b11-9eba-f68fd4914132"
      id="fdc92419-b278-4b11-9eba-f68fd4914132">
    <address>192.168.0.171</address>
    <type>nfs</type>
    <path>/srv/store/vm</path>
  </storage_connection>

  <storage_connection
      href="/api/storageconnections/92fc6cf3-17b1-4b69-af80-5782270137ed"
      id="92fc6cf3-17b1-4b69-af80-5782270137ed">
    <address>192.168.0.171</address>
    <type>nfs</type>
    <path>/srv/bstore/vm</path>
  </storage_connection>
</storage_connections>


I want to remove this connection (id=d94d9341-6116-4b1a-94c5-5c3327aa1b09),
but… it fails:

[RHEVM shell (connected)]# show storageconnection 
d94d9341-6116-4b1a-94c5-5c3327aa1b09

id : d94d9341-6116-4b1a-94c5-5c3327aa1b09
address: 192.168.0.171
path   : /srv/lstore/vm
type   : nfs

[RHEVM shell (connected)]# remove storageconnection 
d94d9341-6116-4b1a-94c5-5c3327aa1b09

error:
status: 404
reason: Not Found
detail: Entity not found: null

[RHEVM shell (connected)]# show storageconnection 
d94d9341-6116-4b1a-94c5-5c3327aa1b09

id : d94d9341-6116-4b1a-94c5-5c3327aa1b09
address: 192.168.0.171
path   : /srv/lstore/vm
type   : nfs

okay… I use curl with DELETE:

[root@rhevhst ~]# curl -X GET -H "Accept: application/xml" -u admin@internal:pass \
    https://192.168.0.170/api/storageconnections/d94d9341-6116-4b1a-94c5-5c3327aa1b09 \
    --insecure
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<storage_connection
    href="/api/storageconnections/d94d9341-6116-4b1a-94c5-5c3327aa1b09"
    id="d94d9341-6116-4b1a-94c5-5c3327aa1b09">
  <address>192.168.0.171</address>
  <type>nfs</type>
  <path>/srv/lstore/vm</path>
</storage_connection>
[root@rhevhst ~]#

[root@rhevhst ~]# curl -X DELETE -H "Accept: application/xml" -u admin@internal:pass \
    https://192.168.0.170/api/storageconnections/d94d9341-6116-4b1a-94c5-5c3327aa1b09 \
    --insecure
<?xml version="1.0" encoding="UTF-8" standalone="yes"?><fault><reason>Operation
Failed</reason><detail>Entity not found: null</detail></fault>
[root@rhevhst ~]#

How can I remove the connection?! Is this a bug?

Artem
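
For reference, the oVirt 3.3 storage connections API is supposed to accept an
optional host in the DELETE body, naming the host that should unmount the
connection. It's not certain this gets past the 404 above, but it may be
worth a try (the host name myhost is an assumption):

  # DELETE with an XML body telling the engine which host performs the
  # disconnect, per the oVirt 3.3 storageconnections API.
  curl -X DELETE -H "Content-Type: application/xml" -u admin@internal:pass \
      -d '<host><name>myhost</name></host>' \
      https://192.168.0.170/api/storageconnections/d94d9341-6116-4b1a-94c5-5c3327aa1b09 \
      --insecure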




Re: [Users] Cannot delete storage connection

2014-02-22 Thread Meital Bourvine
Sounds like a bug to me.

Can you please attach engine.log and vdsm.log?

- Original Message -
 From: sirin ar...@e-inet.ru
 To: users@ovirt.org
 Sent: Saturday, February 22, 2014 8:28:54 PM
 Subject: [Users] Cannot delete storage connection
 
 Hi all,
 
 I have the following connections:
 
 <storage_connections>
   <storage_connection
       href="/api/storageconnections/d94d9341-6116-4b1a-94c5-5c3327aa1b09"
       id="d94d9341-6116-4b1a-94c5-5c3327aa1b09">
     <address>192.168.0.171</address>
     <type>nfs</type>
     <path>/srv/lstore/vm</path>
   </storage_connection>

   <storage_connection
       href="/api/storageconnections/67539ba5-9b6d-46df-8c96-4acd3f212f4a"
       id="67539ba5-9b6d-46df-8c96-4acd3f212f4a">
     <address>rhevm.cebra.lab</address>
     <type>nfs</type>
     <path>/var/lib/exports/iso</path>
   </storage_connection>

   <storage_connection
       href="/api/storageconnections/fdc92419-b278-4b11-9eba-f68fd4914132"
       id="fdc92419-b278-4b11-9eba-f68fd4914132">
     <address>192.168.0.171</address>
     <type>nfs</type>
     <path>/srv/store/vm</path>
   </storage_connection>

   <storage_connection
       href="/api/storageconnections/92fc6cf3-17b1-4b69-af80-5782270137ed"
       id="92fc6cf3-17b1-4b69-af80-5782270137ed">
     <address>192.168.0.171</address>
     <type>nfs</type>
     <path>/srv/bstore/vm</path>
   </storage_connection>
 </storage_connections>
 
 
 I want to remove this connection (id=d94d9341-6116-4b1a-94c5-5c3327aa1b09),
 but… it fails:
 
 [RHEVM shell (connected)]# show storageconnection
 d94d9341-6116-4b1a-94c5-5c3327aa1b09
 
 id : d94d9341-6116-4b1a-94c5-5c3327aa1b09
 address: 192.168.0.171
 path   : /srv/lstore/vm
 type   : nfs
 
 [RHEVM shell (connected)]# remove storageconnection
 d94d9341-6116-4b1a-94c5-5c3327aa1b09
 
 error:
 status: 404
 reason: Not Found
 detail: Entity not found: null
 
 [RHEVM shell (connected)]# show storageconnection
 d94d9341-6116-4b1a-94c5-5c3327aa1b09
 
 id : d94d9341-6116-4b1a-94c5-5c3327aa1b09
 address: 192.168.0.171
 path   : /srv/lstore/vm
 type   : nfs
 
 okay… I use curl with DELETE:
 
 [root@rhevhst ~]# curl -X GET -H "Accept: application/xml" -u admin@internal:pass \
     https://192.168.0.170/api/storageconnections/d94d9341-6116-4b1a-94c5-5c3327aa1b09 \
     --insecure
 <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
 <storage_connection
     href="/api/storageconnections/d94d9341-6116-4b1a-94c5-5c3327aa1b09"
     id="d94d9341-6116-4b1a-94c5-5c3327aa1b09">
   <address>192.168.0.171</address>
   <type>nfs</type>
   <path>/srv/lstore/vm</path>
 </storage_connection>
 [root@rhevhst ~]#

 [root@rhevhst ~]# curl -X DELETE -H "Accept: application/xml" -u admin@internal:pass \
     https://192.168.0.170/api/storageconnections/d94d9341-6116-4b1a-94c5-5c3327aa1b09 \
     --insecure
 <?xml version="1.0" encoding="UTF-8" standalone="yes"?><fault><reason>Operation
 Failed</reason><detail>Entity not found: null</detail></fault>
 [root@rhevhst ~]#
 
 How can I remove the connection?! Is this a bug?
 
 Artem
 
 


Re: [Users] Cannot delete storage connection

2014-02-22 Thread sirin
Hi,

In the logs, engine.log shows:

2014-02-22 23:02:17,446 ERROR 
[org.ovirt.engine.api.restapi.resource.AbstractBackendResource] 
(ajp-/127.0.0.1:8702-8) Operation Failed: Entity not found: null

vdsm log attached





Artem

On 22 Feb 2014, at 22:38, Meital Bourvine mbour...@redhat.com wrote:

 Sounds like a bug to me.
 
 Can you please attach engine.log and vdsm.log?
 
 - Original Message -
 From: sirin ar...@e-inet.ru
 To: users@ovirt.org
 Sent: Saturday, February 22, 2014 8:28:54 PM
 Subject: [Users] Cannot delete storage connection
 
 Hi all,
 
 I have the following connections:
 
 <storage_connections>
   <storage_connection
       href="/api/storageconnections/d94d9341-6116-4b1a-94c5-5c3327aa1b09"
       id="d94d9341-6116-4b1a-94c5-5c3327aa1b09">
     <address>192.168.0.171</address>
     <type>nfs</type>
     <path>/srv/lstore/vm</path>
   </storage_connection>

   <storage_connection
       href="/api/storageconnections/67539ba5-9b6d-46df-8c96-4acd3f212f4a"
       id="67539ba5-9b6d-46df-8c96-4acd3f212f4a">
     <address>rhevm.cebra.lab</address>
     <type>nfs</type>
     <path>/var/lib/exports/iso</path>
   </storage_connection>

   <storage_connection
       href="/api/storageconnections/fdc92419-b278-4b11-9eba-f68fd4914132"
       id="fdc92419-b278-4b11-9eba-f68fd4914132">
     <address>192.168.0.171</address>
     <type>nfs</type>
     <path>/srv/store/vm</path>
   </storage_connection>

   <storage_connection
       href="/api/storageconnections/92fc6cf3-17b1-4b69-af80-5782270137ed"
       id="92fc6cf3-17b1-4b69-af80-5782270137ed">
     <address>192.168.0.171</address>
     <type>nfs</type>
     <path>/srv/bstore/vm</path>
   </storage_connection>
 </storage_connections>
 
 
 I want to remove this connection (id=d94d9341-6116-4b1a-94c5-5c3327aa1b09),
 but… it fails:
 
 [RHEVM shell (connected)]# show storageconnection
 d94d9341-6116-4b1a-94c5-5c3327aa1b09
 
 id : d94d9341-6116-4b1a-94c5-5c3327aa1b09
 address: 192.168.0.171
 path   : /srv/lstore/vm
 type   : nfs
 
 [RHEVM shell (connected)]# remove storageconnection
 d94d9341-6116-4b1a-94c5-5c3327aa1b09
 
 error:
 status: 404
 reason: Not Found
 detail: Entity not found: null
 
 [RHEVM shell (connected)]# show storageconnection
 d94d9341-6116-4b1a-94c5-5c3327aa1b09
 
 id : d94d9341-6116-4b1a-94c5-5c3327aa1b09
 address: 192.168.0.171
 path   : /srv/lstore/vm
 type   : nfs
 
 okay… I use curl with DELETE:
 
 [root@rhevhst ~]# curl -X GET -H "Accept: application/xml" -u admin@internal:pass \
     https://192.168.0.170/api/storageconnections/d94d9341-6116-4b1a-94c5-5c3327aa1b09 \
     --insecure
 <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
 <storage_connection
     href="/api/storageconnections/d94d9341-6116-4b1a-94c5-5c3327aa1b09"
     id="d94d9341-6116-4b1a-94c5-5c3327aa1b09">
   <address>192.168.0.171</address>
   <type>nfs</type>
   <path>/srv/lstore/vm</path>
 </storage_connection>
 [root@rhevhst ~]#

 [root@rhevhst ~]# curl -X DELETE -H "Accept: application/xml" -u admin@internal:pass \
     https://192.168.0.170/api/storageconnections/d94d9341-6116-4b1a-94c5-5c3327aa1b09 \
     --insecure
 <?xml version="1.0" encoding="UTF-8" standalone="yes"?><fault><reason>Operation
 Failed</reason><detail>Entity not found: null</detail></fault>
 [root@rhevhst ~]#
 
 How can I remove the connection?! Is this a bug?
 
 Artem
 
 


Re: [Users] Nodes lose storage at random

2014-02-22 Thread Nir Soffer
- Original Message -
 From: Johan Kooijman m...@johankooijman.com
 To: Nir Soffer nsof...@redhat.com
 Cc: users users@ovirt.org
 Sent: Wednesday, February 19, 2014 2:34:36 PM
 Subject: Re: [Users] Nodes lose storage at random
 
 Messages: https://t-x.dignus.nl/messages.txt
 Sanlock: https://t-x.dignus.nl/sanlock.log.txt

We can see in /var/log/messages that sanlock failed to write to
the ids lockspace [1], which after 80 seconds [2] caused vdsm to lose
its host id lease. In this case, sanlock kills vdsm [3], which dies after 11
retries [4]. Then vdsm is respawned again [5]. This is expected.

We don't know why sanlock failed to write to the storage, but in [6] the
kernel tells us that the nfs server is not responding. Since the nfs server
is accessible from other machines, it means you have some issue with this host.

Later the machine reboots [7], and the nfs server is still not accessible. Then
you have a lot of WARN_ON call traces [8] that look related to the network code.

We can also see that you are not running the most recent kernel [7]. We
experienced various nfs issues during the 6.5 beta.

I would try to get help from the kernel folks about this.

[1] Feb 18 10:47:46 hv5 sanlock[14753]: 2014-02-18 10:47:46+ 1251833 
[21345]: s2 delta_renew read rv -202 offset 0 
/rhev/data-center/mnt/10.0.24.1:_santank_ovirt-data/e9f70496-f181-4c9b-9ecb-d7f780772b04/dom_md/ids

[2] Feb 18 10:48:35 hv5 sanlock[14753]: 2014-02-18 10:48:35+ 1251882 
[14753]: s2 check_our_lease failed 80

[3] Feb 18 10:48:35 hv5 sanlock[14753]: 2014-02-18 10:48:35+ 1251882 
[14753]: s2 kill 19317 sig 15 count 1

[4] Feb 18 10:48:45 hv5 sanlock[14753]: 2014-02-18 10:48:45+ 1251892 
[14753]: dead 19317 ci 3 count 11

[5] Feb 18 10:48:45 hv5 respawn: slave '/usr/share/vdsm/vdsm' died, respawning 
slave

[6] Feb 18 10:57:36 hv5 kernel: nfs: server 10.0.24.1 not responding, timed out

[7]
Feb 18 11:03:01 hv5 kernel: imklog 5.8.10, log source = /proc/kmsg started.
Feb 18 11:03:01 hv5 kernel: Linux version 2.6.32-358.18.1.el6.x86_64 
(mockbu...@c6b10.bsys.dev.centos.org) (gcc version 4.4.7 20120313 (Red Hat 
4.4.7-3) (GCC) ) #1 SMP Wed Aug 28 17:19:38 UTC 2013

[8]
Feb 18 18:29:53 hv5 kernel: [ cut here ]
Feb 18 18:29:53 hv5 kernel: WARNING: at net/core/dev.c:1759 
skb_gso_segment+0x1df/0x2b0() (Not tainted)
Feb 18 18:29:53 hv5 kernel: Hardware name: X9DRW
Feb 18 18:29:53 hv5 kernel: igb: caps=(0x12114bb3, 0x0) len=1596 data_len=0 
ip_summed=0
Feb 18 18:29:53 hv5 kernel: Modules linked in: ebt_arp nfs fscache auth_rpcgss 
nfs_acl bonding softdog ebtable_nat ebtables bnx2fc fcoe libfcoe libfc 
scsi_transport_fc scsi_tgt
 lockd sunrpc bridge ipt_REJECT nf_conntrack_ipv4 nf_defrag_ipv4 iptable_filter 
ip_tables xt_physdev ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ipv6 xt_state 
nf_conntrack xt_multi
port ip6table_filter ip6_tables ext4 jbd2 8021q garp stp llc sha256_generic cbc 
cryptoloop dm_crypt aesni_intel cryptd aes_x86_64 aes_generic vhost_net macvtap 
macvlan tun kvm_
intel kvm sg sb_edac edac_core iTCO_wdt iTCO_vendor_support ioatdma shpchp 
dm_snapshot squashfs ext2 mbcache dm_round_robin sd_mod crc_t10dif isci libsas 
scsi_transport_sas 3w_
sas ahci ixgbe igb dca ptp pps_core dm_multipath dm_mirror dm_region_hash 
dm_log dm_mod be2iscsi bnx2i cnic uio ipv6 cxgb4i cxgb4 cxgb3i libcxgbi cxgb3 
mdio libiscsi_tcp qla4xx
x iscsi_boot_sysfs libiscsi scsi_transport_iscsi [last unloaded: scsi_wait_scan]
Feb 18 18:29:53 hv5 kernel: Pid: 5462, comm: vhost-5458 Not tainted 
2.6.32-358.18.1.el6.x86_64 #1
Feb 18 18:29:53 hv5 kernel: Call Trace:
Feb 18 18:29:53 hv5 kernel: IRQ  [8106e3e7] ? 
warn_slowpath_common+0x87/0xc0
Feb 18 18:29:53 hv5 kernel: [8106e4d6] ? warn_slowpath_fmt+0x46/0x50
Feb 18 18:29:53 hv5 kernel: [a020bd62] ? igb_get_drvinfo+0x82/0xe0 
[igb]
Feb 18 18:29:53 hv5 kernel: [81448e7f] ? skb_gso_segment+0x1df/0x2b0
Feb 18 18:29:53 hv5 kernel: [81449260] ? 
dev_hard_start_xmit+0x1b0/0x530
Feb 18 18:29:53 hv5 kernel: [8146773a] ? sch_direct_xmit+0x15a/0x1c0
Feb 18 18:29:53 hv5 kernel: [8144d0c0] ? dev_queue_xmit+0x3b0/0x550
Feb 18 18:29:53 hv5 kernel: [a04af65c] ? 
br_dev_queue_push_xmit+0x6c/0xa0 [bridge]
Feb 18 18:29:53 hv5 kernel: [a04af6e8] ? br_forward_finish+0x58/0x60 
[bridge]
Feb 18 18:29:53 hv5 kernel: [a04af79a] ? __br_forward+0xaa/0xd0 
[bridge]
Feb 18 18:29:53 hv5 kernel: [81474f34] ? nf_hook_slow+0x74/0x110
Feb 18 18:29:53 hv5 kernel: [a04af81d] ? br_forward+0x5d/0x70 [bridge]
Feb 18 18:29:53 hv5 kernel: [a04b0609] ? 
br_handle_frame_finish+0x179/0x2a0 [bridge]
Feb 18 18:29:53 hv5 kernel: [a04b08da] ? br_handle_frame+0x1aa/0x250 
[bridge]
Feb 18 18:29:53 hv5 kernel: [a0331690] ? pit_timer_fn+0x0/0x80 [kvm]
Feb 18 18:29:53 hv5 kernel: [81448929] ? 
__netif_receive_skb+0x529/0x750
Feb 18 18:29:53 hv5 kernel: [81448bea] ? process_backlog+0x9a/0x100
Feb 18 

Re: [Users] Nodes lose storage at random

2014-02-22 Thread Johan Kooijman
Thanks for looking into it. I've been running the oVirt ISO until now;
I'll switch to stock CentOS 6.5 to see if it makes a difference.


On Sat, Feb 22, 2014 at 8:57 PM, Nir Soffer nsof...@redhat.com wrote:

 - Original Message -
  From: Johan Kooijman m...@johankooijman.com
  To: Nir Soffer nsof...@redhat.com
  Cc: users users@ovirt.org
  Sent: Wednesday, February 19, 2014 2:34:36 PM
  Subject: Re: [Users] Nodes lose storage at random
 
  Messages: https://t-x.dignus.nl/messages.txt
  Sanlock: https://t-x.dignus.nl/sanlock.log.txt

 We can see in /var/log/messages that sanlock failed to write to
 the ids lockspace [1], which after 80 seconds [2] caused vdsm to lose
 its host id lease. In this case, sanlock kills vdsm [3], which dies after 11
 retries [4]. Then vdsm is respawned again [5]. This is expected.

 We don't know why sanlock failed to write to the storage, but in [6] the
 kernel tells us that the nfs server is not responding. Since the nfs server
 is accessible from other machines, it means you have some issue with this
 host.

 Later the machine reboots [7], and the nfs server is still not accessible. Then
 you have a lot of WARN_ON call traces [8] that look related to the network
 code.

 We can also see that you are not running the most recent kernel [7]. We
 experienced various nfs issues during the 6.5 beta.

 I would try to get help from the kernel folks about this.

 [1] Feb 18 10:47:46 hv5 sanlock[14753]: 2014-02-18 10:47:46+ 1251833
 [21345]: s2 delta_renew read rv -202 offset 0
 /rhev/data-center/mnt/10.0.24.1:
 _santank_ovirt-data/e9f70496-f181-4c9b-9ecb-d7f780772b04/dom_md/ids

 [2] Feb 18 10:48:35 hv5 sanlock[14753]: 2014-02-18 10:48:35+ 1251882
 [14753]: s2 check_our_lease failed 80

 [3] Feb 18 10:48:35 hv5 sanlock[14753]: 2014-02-18 10:48:35+ 1251882
 [14753]: s2 kill 19317 sig 15 count 1

 [4] Feb 18 10:48:45 hv5 sanlock[14753]: 2014-02-18 10:48:45+ 1251892
 [14753]: dead 19317 ci 3 count 11

 [5] Feb 18 10:48:45 hv5 respawn: slave '/usr/share/vdsm/vdsm' died,
 respawning slave

 [6] Feb 18 10:57:36 hv5 kernel: nfs: server 10.0.24.1 not responding,
 timed out

 [7]
 Feb 18 11:03:01 hv5 kernel: imklog 5.8.10, log source = /proc/kmsg started.
 Feb 18 11:03:01 hv5 kernel: Linux version 2.6.32-358.18.1.el6.x86_64 (
 mockbu...@c6b10.bsys.dev.centos.org) (gcc version 4.4.7 20120313 (Red Hat
 4.4.7-3) (GCC) ) #1 SMP Wed Aug 28 17:19:38 UTC 2013

 [8]
 Feb 18 18:29:53 hv5 kernel: [ cut here ]
 Feb 18 18:29:53 hv5 kernel: WARNING: at net/core/dev.c:1759
 skb_gso_segment+0x1df/0x2b0() (Not tainted)
 Feb 18 18:29:53 hv5 kernel: Hardware name: X9DRW
 Feb 18 18:29:53 hv5 kernel: igb: caps=(0x12114bb3, 0x0) len=1596
 data_len=0 ip_summed=0
 Feb 18 18:29:53 hv5 kernel: Modules linked in: ebt_arp nfs fscache
 auth_rpcgss nfs_acl bonding softdog ebtable_nat ebtables bnx2fc fcoe
 libfcoe libfc scsi_transport_fc scsi_tgt
  lockd sunrpc bridge ipt_REJECT nf_conntrack_ipv4 nf_defrag_ipv4
 iptable_filter ip_tables xt_physdev ip6t_REJECT nf_conntrack_ipv6
 nf_defrag_ipv6 xt_state nf_conntrack xt_multi
 port ip6table_filter ip6_tables ext4 jbd2 8021q garp stp llc
 sha256_generic cbc cryptoloop dm_crypt aesni_intel cryptd aes_x86_64
 aes_generic vhost_net macvtap macvlan tun kvm_
 intel kvm sg sb_edac edac_core iTCO_wdt iTCO_vendor_support ioatdma shpchp
 dm_snapshot squashfs ext2 mbcache dm_round_robin sd_mod crc_t10dif isci
 libsas scsi_transport_sas 3w_
 sas ahci ixgbe igb dca ptp pps_core dm_multipath dm_mirror dm_region_hash
 dm_log dm_mod be2iscsi bnx2i cnic uio ipv6 cxgb4i cxgb4 cxgb3i libcxgbi
 cxgb3 mdio libiscsi_tcp qla4xx
 x iscsi_boot_sysfs libiscsi scsi_transport_iscsi [last unloaded:
 scsi_wait_scan]
 Feb 18 18:29:53 hv5 kernel: Pid: 5462, comm: vhost-5458 Not tainted
 2.6.32-358.18.1.el6.x86_64 #1
 Feb 18 18:29:53 hv5 kernel: Call Trace:
 Feb 18 18:29:53 hv5 kernel: IRQ  [8106e3e7] ?
 warn_slowpath_common+0x87/0xc0
 Feb 18 18:29:53 hv5 kernel: [8106e4d6] ?
 warn_slowpath_fmt+0x46/0x50
 Feb 18 18:29:53 hv5 kernel: [a020bd62] ?
 igb_get_drvinfo+0x82/0xe0 [igb]
 Feb 18 18:29:53 hv5 kernel: [81448e7f] ?
 skb_gso_segment+0x1df/0x2b0
 Feb 18 18:29:53 hv5 kernel: [81449260] ?
 dev_hard_start_xmit+0x1b0/0x530
 Feb 18 18:29:53 hv5 kernel: [8146773a] ?
 sch_direct_xmit+0x15a/0x1c0
 Feb 18 18:29:53 hv5 kernel: [8144d0c0] ?
 dev_queue_xmit+0x3b0/0x550
 Feb 18 18:29:53 hv5 kernel: [a04af65c] ?
 br_dev_queue_push_xmit+0x6c/0xa0 [bridge]
 Feb 18 18:29:53 hv5 kernel: [a04af6e8] ?
 br_forward_finish+0x58/0x60 [bridge]
 Feb 18 18:29:53 hv5 kernel: [a04af79a] ? __br_forward+0xaa/0xd0
 [bridge]
 Feb 18 18:29:53 hv5 kernel: [81474f34] ? nf_hook_slow+0x74/0x110
 Feb 18 18:29:53 hv5 kernel: [a04af81d] ? br_forward+0x5d/0x70
 [bridge]
 Feb 18 18:29:53 hv5 kernel: [a04b0609] ?
 br_handle_frame_finish+0x179/0x2a0 [bridge]
 Feb 18 18:29:53 hv5 kernel: [a04b08da] ?
 

Re: [Users] SD Disk's Logical Volume not visible/activated on some nodes

2014-02-22 Thread Nir Soffer
- Original Message -
 From: Boyan Tabakov bl...@alslayer.net
 To: Nir Soffer nsof...@redhat.com
 Cc: users@ovirt.org
 Sent: Wednesday, February 19, 2014 7:18:36 PM
 Subject: Re: [Users] SD Disk's Logical Volume not visible/activated on some 
 nodes
 
 Hello,
 
 On 19.2.2014, 17:09, Nir Soffer wrote:
  - Original Message -
  From: Boyan Tabakov bl...@alslayer.net
  To: users@ovirt.org
  Sent: Tuesday, February 18, 2014 3:34:49 PM
  Subject: [Users] SD Disk's Logical Volume not visible/activated on some
  nodes
 
  Hello,
 
  I have ovirt 3.3 installed on FC 19 hosts with vdsm 4.13.3-2.fc19.
  
  Which version of ovirt 3.3 is this? (3.3.2? 3.3.3?)
 
 ovirt-engine is 3.3.2-1.fc19
 
  One of the hosts (host1) is engine + node + SPM and the other host2 is
  just a node. I have an iSCSI storage domain configured and accessible
  from both nodes.
 
  When creating a new disk in the SD, the underlying logical volume gets
  properly created (seen in vgdisplay output on host1), but doesn't seem
  to be automatically picked by host2.
  
  How do you know it is not seen on host2?
 
 It's not present in the output of vgdisplay -v nor vgs.
 
  
  Consequently, when creating/booting
  a VM with the said disk attached, the VM fails to start on host2,
  because host2 can't see the LV. Similarly, if the VM is started on
  host1, it fails to migrate to host2. Extract from host2 log is in the
  end. The LV in question is 6b35673e-7062-4716-a6c8-d5bf72fe3280.
 
  As far as I could quickly track in the vdsm code, there are only calls to lvs,
  and not to lvscan or lvchange, so LVM on host2 doesn't fully refresh.
  The only workaround so far has been to restart VDSM on host2, which
  makes it refresh all LVM data properly.
 
  When is host2 supposed to pick up any newly created LVs in the SD VG?
  Any suggestions where the problem might be?
  
  When you create a new lv on the shared storage, the new lv should be
  visible on the other host. Let's start by verifying that you do see
  the new lv after a disk was created.
  
  Try this:
  
  1. Create a new disk, and check the disk uuid in the engine ui
  2. On another machine, run this command:
  
  lvs -o vg_name,lv_name,tags
  
  You can identify the new lv using tags, which should contain the new disk
  uuid.
  
  If you don't see the new lv from the other host, please provide
  /var/log/messages
  and /var/log/sanlock.log.
 
 Just tried that. The disk is not visible on the non-SPM node.

This means that storage is not accessible from this host.
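
A few quick checks on that host may help narrow down where the block path
breaks; these are standard iSCSI/LVM commands, so adjust names to your setup:

  # Is the iSCSI session to the storage still up on this host?
  iscsiadm -m session

  # Does multipath still see the LUN backing the storage domain?
  multipath -ll

  # Are the VG and its LVs visible to plain LVM commands?
  vgs
  lvs -o vg_name,lv_name,tags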

 
 On the SPM node (where the LV is visible in vgs output):
 
 Feb 19 19:10:43 host1 vdsm root WARNING File:
 /rhev/data-center/61f15cc0-8bba-482d-8a81-cd636a581b58/3307f6fa-dd58-43db-ab23-b1fb299006c7/images/4d15543c-4c45-4c23-bbe3-f10b9084472a/3e0ce8cb-3740-49d7-908e-d025875ac9a2
 already removed
 Feb 19 19:10:45 host1 multipathd: dm-65: remove map (uevent)
 Feb 19 19:10:45 host1 multipathd: dm-65: devmap not registered, can't remove
 Feb 19 19:10:45 host1 multipathd: dm-65: remove map (uevent)
 Feb 19 19:10:54 host1 kernel: [1652684.864746] dd: sending ioctl
 80306d02 to a partition!
 Feb 19 19:10:54 host1 kernel: [1652684.963931] dd: sending ioctl
 80306d02 to a partition!
 
 No recent entries in sanlock.log on the SPM node.
 
 On the non-SPM node (the one that doesn't show the LV in vgs output),
 there are no relevant entries in /var/log/messages.

Strange - sanlock errors are logged to /var/log/messages. It would be helpful if
you attach this log - we may find something in it.

 Here's the full
 sanlock.log for that host:
 
 2014-01-30 16:28:09+0200 1324 [2335]: sanlock daemon started 2.8 host
 18bd0a27-c280-4007-98f2-d2e7e73cd8b5.xenon.futu
 2014-01-30 16:59:51+0200 5 [609]: sanlock daemon started 2.8 host
 4a7627e2-296a-4e48-a7e2-f6bcecac07ab.xenon.futu
 2014-01-31 09:51:43+0200 60717 [614]: s1 lockspace
 3307f6fa-dd58-43db-ab23-b1fb299006c7:2:/dev/3307f6fa-dd58-43db-ab23-b1fb299006c7/ids:0
 2014-01-31 16:03:51+0200 83045 [613]: s1:r1 resource
 3307f6fa-dd58-43db-ab23-b1fb299006c7:SDM:/dev/3307f6fa-dd58-43db-ab23-b1fb299006c7/leases:1048576
 for 8,16,30268
 2014-01-31 16:18:01+0200 83896 [614]: s1:r2 resource
 3307f6fa-dd58-43db-ab23-b1fb299006c7:SDM:/dev/3307f6fa-dd58-43db-ab23-b1fb299006c7/leases:1048576
 for 8,16,30268
 2014-02-06 05:24:10+0200 563065 [31453]: 3307f6fa aio timeout 0
 0x7fc37c0008c0:0x7fc37c0008d0:0x7fc391f5f000 ioto 10 to_count 1
 2014-02-06 05:24:10+0200 563065 [31453]: s1 delta_renew read rv -202
 offset 0 /dev/3307f6fa-dd58-43db-ab23-b1fb299006c7/ids

Sanlock cannot write to the ids lockspace

 2014-02-06 05:24:10+0200 563065 [31453]: s1 renewal error -202
 delta_length 10 last_success 563034
 2014-02-06 05:24:21+0200 563076 [31453]: 3307f6fa aio timeout 0
 0x7fc37c000910:0x7fc37c000920:0x7fc391d5c000 ioto 10 to_count 2
 2014-02-06 05:24:21+0200 563076 [31453]: s1 delta_renew read rv -202
 offset 0 /dev/3307f6fa-dd58-43db-ab23-b1fb299006c7/ids
 2014-02-06 05:24:21+0200 563076 [31453]: s1 

Re: [Users] Nodes lose storage at random

2014-02-22 Thread Johan Kooijman
Reinstalled with stock CentOS 6.5 last night; all was successful until
roughly midnight GMT, when 2 out of 4 hosts started showing the same errors.

Any more suggestions?


On Sat, Feb 22, 2014 at 8:57 PM, Nir Soffer nsof...@redhat.com wrote:

 - Original Message -
  From: Johan Kooijman m...@johankooijman.com
  To: Nir Soffer nsof...@redhat.com
  Cc: users users@ovirt.org
  Sent: Wednesday, February 19, 2014 2:34:36 PM
  Subject: Re: [Users] Nodes lose storage at random
 
  Messages: https://t-x.dignus.nl/messages.txt
  Sanlock: https://t-x.dignus.nl/sanlock.log.txt

 We can see in /var/log/messages that sanlock failed to write to
 the ids lockspace [1], which after 80 seconds [2] caused vdsm to lose
 its host id lease. In this case, sanlock kills vdsm [3], which dies after 11
 retries [4]. Then vdsm is respawned again [5]. This is expected.

 We don't know why sanlock failed to write to the storage, but in [6] the
 kernel tells us that the nfs server is not responding. Since the nfs server
 is accessible from other machines, it means you have some issue with this
 host.

 Later the machine reboots [7], and the nfs server is still not accessible. Then
 you have a lot of WARN_ON call traces [8] that look related to the network
 code.

 We can also see that you are not running the most recent kernel [7]. We
 experienced various nfs issues during the 6.5 beta.

 I would try to get help from the kernel folks about this.

 [1] Feb 18 10:47:46 hv5 sanlock[14753]: 2014-02-18 10:47:46+ 1251833
 [21345]: s2 delta_renew read rv -202 offset 0
 /rhev/data-center/mnt/10.0.24.1:
 _santank_ovirt-data/e9f70496-f181-4c9b-9ecb-d7f780772b04/dom_md/ids

 [2] Feb 18 10:48:35 hv5 sanlock[14753]: 2014-02-18 10:48:35+ 1251882
 [14753]: s2 check_our_lease failed 80

 [3] Feb 18 10:48:35 hv5 sanlock[14753]: 2014-02-18 10:48:35+ 1251882
 [14753]: s2 kill 19317 sig 15 count 1

 [4] Feb 18 10:48:45 hv5 sanlock[14753]: 2014-02-18 10:48:45+ 1251892
 [14753]: dead 19317 ci 3 count 11

 [5] Feb 18 10:48:45 hv5 respawn: slave '/usr/share/vdsm/vdsm' died,
 respawning slave

 [6] Feb 18 10:57:36 hv5 kernel: nfs: server 10.0.24.1 not responding,
 timed out

 [7]
 Feb 18 11:03:01 hv5 kernel: imklog 5.8.10, log source = /proc/kmsg started.
 Feb 18 11:03:01 hv5 kernel: Linux version 2.6.32-358.18.1.el6.x86_64 (
 mockbu...@c6b10.bsys.dev.centos.org) (gcc version 4.4.7 20120313 (Red Hat
 4.4.7-3) (GCC) ) #1 SMP Wed Aug 28 17:19:38 UTC 2013

 [8]
 Feb 18 18:29:53 hv5 kernel: [ cut here ]
 Feb 18 18:29:53 hv5 kernel: WARNING: at net/core/dev.c:1759
 skb_gso_segment+0x1df/0x2b0() (Not tainted)
 Feb 18 18:29:53 hv5 kernel: Hardware name: X9DRW
 Feb 18 18:29:53 hv5 kernel: igb: caps=(0x12114bb3, 0x0) len=1596
 data_len=0 ip_summed=0
 Feb 18 18:29:53 hv5 kernel: Modules linked in: ebt_arp nfs fscache
 auth_rpcgss nfs_acl bonding softdog ebtable_nat ebtables bnx2fc fcoe
 libfcoe libfc scsi_transport_fc scsi_tgt
  lockd sunrpc bridge ipt_REJECT nf_conntrack_ipv4 nf_defrag_ipv4
 iptable_filter ip_tables xt_physdev ip6t_REJECT nf_conntrack_ipv6
 nf_defrag_ipv6 xt_state nf_conntrack xt_multi
 port ip6table_filter ip6_tables ext4 jbd2 8021q garp stp llc
 sha256_generic cbc cryptoloop dm_crypt aesni_intel cryptd aes_x86_64
 aes_generic vhost_net macvtap macvlan tun kvm_
 intel kvm sg sb_edac edac_core iTCO_wdt iTCO_vendor_support ioatdma shpchp
 dm_snapshot squashfs ext2 mbcache dm_round_robin sd_mod crc_t10dif isci
 libsas scsi_transport_sas 3w_
 sas ahci ixgbe igb dca ptp pps_core dm_multipath dm_mirror dm_region_hash
 dm_log dm_mod be2iscsi bnx2i cnic uio ipv6 cxgb4i cxgb4 cxgb3i libcxgbi
 cxgb3 mdio libiscsi_tcp qla4xx
 x iscsi_boot_sysfs libiscsi scsi_transport_iscsi [last unloaded:
 scsi_wait_scan]
 Feb 18 18:29:53 hv5 kernel: Pid: 5462, comm: vhost-5458 Not tainted
 2.6.32-358.18.1.el6.x86_64 #1
 Feb 18 18:29:53 hv5 kernel: Call Trace:
 Feb 18 18:29:53 hv5 kernel: IRQ  [8106e3e7] ?
 warn_slowpath_common+0x87/0xc0
 Feb 18 18:29:53 hv5 kernel: [8106e4d6] ?
 warn_slowpath_fmt+0x46/0x50
 Feb 18 18:29:53 hv5 kernel: [a020bd62] ?
 igb_get_drvinfo+0x82/0xe0 [igb]
 Feb 18 18:29:53 hv5 kernel: [81448e7f] ?
 skb_gso_segment+0x1df/0x2b0
 Feb 18 18:29:53 hv5 kernel: [81449260] ?
 dev_hard_start_xmit+0x1b0/0x530
 Feb 18 18:29:53 hv5 kernel: [8146773a] ?
 sch_direct_xmit+0x15a/0x1c0
 Feb 18 18:29:53 hv5 kernel: [8144d0c0] ?
 dev_queue_xmit+0x3b0/0x550
 Feb 18 18:29:53 hv5 kernel: [a04af65c] ?
 br_dev_queue_push_xmit+0x6c/0xa0 [bridge]
 Feb 18 18:29:53 hv5 kernel: [a04af6e8] ?
 br_forward_finish+0x58/0x60 [bridge]
 Feb 18 18:29:53 hv5 kernel: [a04af79a] ? __br_forward+0xaa/0xd0
 [bridge]
 Feb 18 18:29:53 hv5 kernel: [81474f34] ? nf_hook_slow+0x74/0x110
 Feb 18 18:29:53 hv5 kernel: [a04af81d] ? br_forward+0x5d/0x70
 [bridge]
 Feb 18 18:29:53 hv5 kernel: [a04b0609] ?
 br_handle_frame_finish+0x179/0x2a0 [bridge]
 Feb 18 18:29:53 hv5 kernel: 
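
Since the reinstall didn't help, it may be worth ruling out the network path
to the NFS server from the failing hosts and, given the skb_gso_segment
warnings coming from igb, testing with NIC offloads disabled. These are
standard diagnostics, not a known fix:

  # Can the failing host still reach the NFS server's RPC services?
  rpcinfo -p 10.0.24.1
  showmount -e 10.0.24.1

  # What does sanlock itself report while the errors occur?
  sanlock client status

  # Experiment: disable GRO/GSO/TSO on the interface carrying storage
  # traffic (replace eth0 with the actual interface).
  ethtool -K eth0 gro off gso off tso off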

Re: [Users] spice password

2014-02-22 Thread Yedidyah Bar David
What wasn't asked? A password? You supply it on the command line, example
below.

- Original Message -

 From: Koen Vanoppen vanoppen.k...@gmail.com
 To: Yedidyah Bar David d...@redhat.com, users@ovirt.org
 Sent: Thursday, February 20, 2014 5:57:37 PM
 Subject: Re: spice password

 No, it wasn't asked...
 On Feb 20, 2014 4:17 PM, Yedidyah Bar David d...@redhat.com wrote:

   From: Koen Vanoppen vanoppen.k...@gmail.com
   To: Yedidyah Bar David d...@redhat.com, users@ovirt.org
   Sent: Thursday, February 20, 2014 5:06:37 PM
   Subject: Re: [Users] (no subject)

   Thanks for the answer. But he successfully created a ticket

  With setVmTicket? So he supplied some password there, right?

   and received a number, but when he then starts the client again, as asked
   (Connect to the client again (again, r-v will ask for the password in a
   pop-up window):) He has to give a password.

  Yes - the password set with setVmTicket...

  E.g.:

  vdsClient localhost setVmTicket $vmid topsecret 120

  then start the spice client and input as password: topsecret

  (and it will expire in 120 seconds).

   Maybe important. The username field is empty and can't be modified.

  Indeed.

   Kind regards,

   Koen

   2014-02-20 16:03 GMT+01:00 Yedidyah Bar David d...@redhat.com:

    From: Koen Vanoppen vanoppen.k...@gmail.com
    To: users@ovirt.org
    Sent: Thursday, February 20, 2014 4:56:10 PM
    Subject: [Users] (no subject)

    Hey Guys,

    I'm back ;-). This time I have a question from one of our programmers.
    He's trying this:
    http://www.ovirt.org/How_to_Connect_to_SPICE_Console_Without_Portal#Connecting_Using_REST_API

    But he bumps into this:

    Connect to the client again (again, r-v will ask for the password in a
    pop-up window):

    bash$ remote-viewer --spice-ca-file ${CA_FILE} --spice-host-subject
    ${SUBJECT} spice://${HOST}/?port=${PORT}\&tls-port=${SPORT}

    Now, the question is: What's the password? Or where can I find it?

   I think you need to set it with setVmTicket - see that page for an
   example.
   --
   Didi

  --
  Didi

--
Didi
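
For the REST API route from the wiki page above, the ticket can also be set
without vdsClient. A sketch (the /ticket action below is from the oVirt 3.x
REST API as I recall it, and engine.example.com and the password are
placeholders):

  # Set the console password to "topsecret" for 120 seconds for one VM.
  curl -k -u 'admin@internal:password' \
      -H 'Content-Type: application/xml' \
      -d '<action><ticket><value>topsecret</value><expiry>120</expiry></ticket></action>' \
      https://engine.example.com/api/vms/<VM_ID>/ticket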


Re: [Users] Last Updated shows current time instead of last update

2014-02-22 Thread Yedidyah Bar David
- Original Message -
 From: Nir Soffer nsof...@redhat.com
 To: Yedidyah Bar David d...@redhat.com
 Cc: infra in...@ovirt.org, users users@ovirt.org
 Sent: Thursday, February 20, 2014 8:02:42 PM
 Subject: Re: [Users] Last Updated shows current time instead of last update
 
 - Original Message -
  From: Yedidyah Bar David d...@redhat.com
  To: Nir Soffer nsof...@redhat.com
  Cc: infra in...@ovirt.org, users users@ovirt.org
  Sent: Thursday, February 20, 2014 4:27:21 PM
  Subject: Re: [Users] Last Updated shows current time instead of last
  update
  
  - Original Message -
   From: Nir Soffer nsof...@redhat.com
   To: Yedidyah Bar David d...@redhat.com
   Cc: infra in...@ovirt.org, users users@ovirt.org
   Sent: Thursday, February 20, 2014 12:15:42 PM
   Subject: Re: [Users] Last Updated shows current time instead of last
   update
   
   - Original Message -
From: Yedidyah Bar David d...@redhat.com
To: infra in...@ovirt.org, users users@ovirt.org
Sent: Monday, February 17, 2014 2:43:42 PM
Subject: [Users] Last Updated shows current time instead of last
update

Hi all,

Sorry for cross-posting. Not sure what's the correct address to discuss
this.

Almost all of the pages on the ovirt.org wiki that have Last
Updated:,
actually have:

Last updated: {{CURRENTMONTHNAME}} {{CURRENTDAY}}, {{CURRENTYEAR}}
<!--This is markup for current date, do not change-->

thus showing the current date and not the last change date. When
searching
google you see the last time its robot happened to fetch the page.

See e.g. [1] for a (possibly non-complete) list.

Not sure if that's intended or not, but I personally find it useless
and
misleading.
   
   This is indeed useless and wrong and should be removed.
   
   Does this work?
   
   Last updated on {{REVISIONDAY}}/{{REVISIONMONTH}}/{{REVISIONYEAR}} by
   {{REVISIONUSER}}
  
  If I Edit, then Show preview, it's already updated, even before Save page.
  I personally find it a bit weird. Anyway, I did this on a test page and it
  seems to work, but I have to wait a day to make sure the date does not
  change...
  
  In any case, it does affect performance. Not sure it's very significant,
  though.
  
  I now changed the following pages (picked up randomly):
  
  http://www.ovirt.org/Features/Minimum_guaranteed_memory
  http://www.ovirt.org/Features/MultiHostNetworkConfiguration
  http://www.ovirt.org/Features/Node/PluginLiveInstall
  http://www.ovirt.org/Features/Node/PackageRefactoring
  http://www.ovirt.org/Features/Automatic_scaling
  
  I changed it to:
  
  Last updated on {{REVISIONYEAR}}-{{REVISIONMONTH}}-{{REVISIONDAY}} by
  {{REVISIONUSER}}
  
  That is, ISO date.
 
 Better!
 
 But see Dave's version, which uses the zero-padded day {{REVISIONDAY2}}

Now changed the above pages to this.

 
  
  If we do not see a significant impact in a few days, we should probably
  edit
  all pages, probably with some bot.
 
 I don't see any reason why this should affect performance. The expensive
 warning is about showing revision metadata for another page.

Well, I did measure it and it's somewhat slower. Not sure why.

So: would anyone with sufficient rights/abilities care to do the replacement
for all the other pages?
-- 
Didi
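
A bot run could handle the rest; a sketch using pywikibot's replace.py (flags
from memory, and the regex assumes all pages use exactly the markup quoted
above, so test on a few pages first):

  # Replace the CURRENT* date markup with revision-based markup on all pages.
  python replace.py -regex -start:! -always \
      "\{\{CURRENTMONTHNAME\}\} \{\{CURRENTDAY\}\}, \{\{CURRENTYEAR\}\}" \
      "{{REVISIONYEAR}}-{{REVISIONMONTH}}-{{REVISIONDAY2}}"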


Re: [Users] API read-only access / roles

2014-02-22 Thread Yair Zaslavsky


- Original Message -
 From: Juan Hernandez jhern...@redhat.com
 To: Sven Kieske s.kie...@mittwald.de, Users@ovirt.org List 
 Users@ovirt.org
 Cc: Itamar Heim ih...@redhat.com, Yair Zaslavsky yzasl...@redhat.com
 Sent: Saturday, February 22, 2014 2:22:14 PM
 Subject: Re: [Users] API read-only access / roles
 
 On 02/20/2014 04:51 PM, Itamar Heim wrote:
  On 02/20/2014 05:24 PM, Sven Kieske wrote:
  Hi,
 
  is nobody interested in this feature at all?
  it would be a huge security gain, while lowering
  the bar for having a read-only user, if this could get shipped with 3.4:
  
  we are very interested, but we want to do this based on the
  authentication re-factoring, which itself barely made the 3.4 timeline.
  Yair - are we pluggable yet, so that someone could add such a user by
  dropping a jar somewhere, or is this still ongoing work towards 3.5?

As Juan mentioned in his email, it should be possible to plug in at 3.4 as well.
However, we're changing the configuration format in 3.5, as we're moving to the
extensions mechanism - both Directory and Authenticator will be extensions, so
the configuration for the directory (authorization extension) and the
authenticator (authentication extension) will look a bit different.




  
 
 Pluggability of authentication already works in 3.4. By default it uses
 the previous mechanism, but the administrator can change this. In order
 to change you need to create the /etc/ovirt-engine/auth.conf.d directory
 and then create inside one or more authentication profiles
 configuration files. An authentication profile is a combination of an
 authenticator and a directory. The authenticator is used to check
 the credentials (the user name and password) and the directory is used
 to search users and their details. For example, if you want to use local
 authentication (the users, passwords, and groups of the OS) you can
 create a local.conf file with the following content:
 
   #
   # The name of the profile. This is what will be displayed in the
   # combo box in the login page.
   #
   name=local
 
   #
   # Needed to enable the profile, by default all profiles are
   # disabled.
   #
   enabled=true
 
   #
   # The configuration of the authenticator used by the profile. The
   # type and the module are mandatory, the rest are optional and
   # the default values are as shown below.
   #
   authenticator.type=ssh
   authenticator.module=org.ovirt.engine.core.authentication.ssh
   # authenticator.host=localhost
   # authenticator.port=22
   # authenticator.timeout=10
 
   #
   # The configuration of the directory:
   #
   directory.type=nss
   directory.module=org.ovirt.engine.core.authentication.nss
 
 For this to work you need to install some additional modules, which
 aren't currently part of the engine. This is where pluggability comes into
 play. These modules can be built externally. I created modules for SSH
 authentication and NSS (Name Service Switch) directory. The source is
 available here:
 
 https://github.com/jhernand/ovirt-engine-ssh-authenticator
 https://github.com/jhernand/ovirt-engine-nss-directory
 
 The NSS directory also needs JNA (Java Native Access):
 
 https://github.com/jhernand/ovirt-engine-jna-module
 
 Installing these extensions is very easy, just build from source and
 uncompress the generated .zip files to /usr/share/ovirt-engine/modules.
 In case you don't want to build from source you can use the RPMs that I
 created. The source for the .spec files is here:
 
 https://github.com/jhernand/ovirt-engine-rpms
 
 If you don't want to build from source you can use a yum repository that
 I created with binaries for Fedora 20 (should work in CentOS as well):
 
 http://jhernand.fedorapeople.org/repo
 
 So, to summarize:
 
 # cat > /etc/yum.repos.d/my.repo <<.
 [my]
 name=my
 baseurl=http://jhernand.fedorapeople.org/repo
 enabled=1
 gpgcheck=0
 .
 
 # yum -y install \
 ovirt-engine-ssh-authenticator \
 ovirt-engine-nss-directory
 
 # mkdir -p /etc/ovirt-engine/auth.conf.d
 
 # cat > /etc/ovirt-engine/auth.conf.d/local.conf <<.
 name=local
 enabled=true
 authenticator.type=ssh
 authenticator.module=org.ovirt.engine.core.authentication.ssh
 directory.type=nss
 directory.module=org.ovirt.engine.core.authentication.nss
 .
 
 # systemctl restart ovirt-engine
 
 Then you can login with admin@internal, add some local users and
 permissions, and then use them to login to the GUI or the API.
 
 Take into account that I created these modules as a way to test the new
 authentication infrastructure, so they may have limitations or issues. I
 appreciate any feedback.
 
 
  On 19.02.2014 15:32, Sven Kieske wrote: I just looked into my test vm
  with the 3.4 beta
  and I can't see such a user there.
 
  I created an RFE at: https://bugzilla.redhat.com/show_bug.cgi?id=1067036
 
 
  I really hope this can get included in 3.4 (I know it's late)
  as it should be a very very minor change at engine-setup.
 
  Thanks
 
  On 19.02.2014 14:55, Sven Kieske wrote:
  

Re: [Users] API read-only access / roles

2014-02-22 Thread Yair Zaslavsky


- Original Message -
 From: Yair Zaslavsky yzasl...@redhat.com
 To: Juan Hernandez jhern...@redhat.com
 Cc: Users@ovirt.org List Users@ovirt.org
 Sent: Sunday, February 23, 2014 8:55:07 AM
 Subject: Re: [Users] API read-only access / roles
 
 
 
 - Original Message -
  From: Juan Hernandez jhern...@redhat.com
  To: Sven Kieske s.kie...@mittwald.de, Users@ovirt.org List
  Users@ovirt.org
  Cc: Itamar Heim ih...@redhat.com, Yair Zaslavsky
  yzasl...@redhat.com
  Sent: Saturday, February 22, 2014 2:22:14 PM
  Subject: Re: [Users] API read-only access / roles
  
  On 02/20/2014 04:51 PM, Itamar Heim wrote:
   On 02/20/2014 05:24 PM, Sven Kieske wrote:
   Hi,
  
   is nobody interested in this feature at all?
   it would be a huge security gain, while lowering
   the bar for having a read-only user, if this could get shipped with 3.4:
   
   we are very interested, but we want to do this based on the
   authentication re-factoring, which itself barely made the 3.4
   timeline.
   Yair - are we pluggable yet, so that someone could add such a user by
   dropping a jar somewhere, or is this still ongoing work towards 3.5?
 
 As Juan mentioned in his email, it should be possible to plug in at 3.4 as
 well. However, we're changing the configuration format in 3.5, as we're
 moving to the extensions mechanism - both Directory and Authenticator will
 be extensions, so the configuration for the directory (authorization
 extension) and the authenticator (authentication extension) will look a bit
 different.

CC'ed Sven as well.
In addition, bear in mind that as Directory and Authenticator become
extensions, there will be some interface changes.

Yair

 
 
 
 
   
  
  Pluggability of authentication already works in 3.4. By default it uses
  the previous mechanism, but the administrator can change this. In order
  to change you need to create the /etc/ovirt-engine/auth.conf.d directory
  and then create inside one or more authentication profiles
  configuration files. An authentication profile is a combination of an
  authenticator and a directory. The authenticator is used to check
  the credentials (the user name and password) and the directory is used
  to search users and their details. For example, if you want to use local
  authentication (the users, passwords, and groups of the OS) you can
  create a local.conf file with the following content:
  
#
# The name of the profile. This is what will be displayed in the
# combo box in the login page.
#
name=local
  
#
# Needed to enable the profile, by default all profiles are
# disabled.
#
enabled=true
  
#
# The configuration of the authenticator used by the profile. The
# type and the module are mandatory, the rest are optional and
# the default values are as shown below.
#
authenticator.type=ssh
authenticator.module=org.ovirt.engine.core.authentication.ssh
# authenticator.host=localhost
# authenticator.port=22
# authenticator.timeout=10
  
#
# The configuration of the directory:
#
directory.type=nss
directory.module=org.ovirt.engine.core.authentication.nss
  
  For this to work you need to install some additional modules, which
  aren't currently part of the engine. This is where pluggability comes into
  play. These modules can be built externally. I created modules for SSH
  authentication and NSS (Name Service Switch) directory. The source is
  available here:
  
  https://github.com/jhernand/ovirt-engine-ssh-authenticator
  https://github.com/jhernand/ovirt-engine-nss-directory
  
  The NSS directory also needs JNA (Java Native Access):
  
  https://github.com/jhernand/ovirt-engine-jna-module
  
  Installing these extensions is very easy, just build from source and
  uncompress the generated .zip files to /usr/share/ovirt-engine/modules.
  In case you don't want to build from source you can use the RPMs that I
  created. The source for the .spec files is here:
  
  https://github.com/jhernand/ovirt-engine-rpms
  
  If you don't want to build from source you can use a yum repository that
  I created with binaries for Fedora 20 (should work in CentOS as well):
  
  http://jhernand.fedorapeople.org/repo
  
  So, to summarize:
  
  # cat > /etc/yum.repos.d/my.repo <<.
  [my]
  name=my
  baseurl=http://jhernand.fedorapeople.org/repo
  enabled=1
  gpgcheck=0
  .
  
  # yum -y install \
  ovirt-engine-ssh-authenticator \
  ovirt-engine-nss-directory
  
  # mkdir -p /etc/ovirt-engine/auth.conf.d
  
  # cat > /etc/ovirt-engine/auth.conf.d/local.conf <<.
  name=local
  enabled=true
  authenticator.type=ssh
  authenticator.module=org.ovirt.engine.core.authentication.ssh
  directory.type=nss
  directory.module=org.ovirt.engine.core.authentication.nss
  .
  
  # systemctl restart ovirt-engine
  
  Then you can login with admin@internal, add some local users and
  permissions, and then use them to login to the GUI or the API.
  
  Take into account that I created these