Re: [ovirt-users] Installation oVirt node and engine on the same machine

2017-07-17 Thread wc hung
So if I need to deploy several VMs on a machine which is going to run the
oVirt engine, and several VMs on another machine, should I follow this
procedure?
1.Install self-hosted engine
2.Install some vms
3.Install ovirt node on other machine
4.Install some vms on ovirt node
5.Connect ovirt node to the self-hosted engine

2017-07-17 14:48 GMT+08:00 Vinícius Ferrão :

> Since you’re deploying a new oVirt Pool I would recommend deleting
> everything and restarting.
>
> Moving the Management Engine to Hosted Engine is too much work for a pool
> that isn’t in production yet.
>
> V.
>
>
> On 17 Jul 2017, at 00:10, wc hung  wrote:
>
> Thank you guys. That information is useful to me.
> It seems we will configure VMs in the self-hosted engine.
> As I am a newcomer to the oVirt system, I would like to ask another question.
>
>
> Since I have already installed one oVirt engine on one physical machine
> and one oVirt node on another physical machine: if I need to install the
> self-hosted engine, do I need to delete the original oVirt engine, install
> the self-hosted engine, build up some new hosts, and connect to the
> original oVirt node?
>
>
> Thank you all so much.
>
> 2017-07-16 13:59 GMT+08:00 Yedidyah Bar David :
>
>> On Fri, Jul 14, 2017 at 7:40 PM, Vinícius Ferrão 
>> wrote:
>> > WC, here is one guide. The procedure is what Alexander said: self-hosted
>> > engine.
>> >
>> > http://www.ovirt.org/documentation/self-hosted/Self-Hosted_
>> Engine_Guide/
>>
>> Indeed.
>>
>> There is one important difference: hosted-engine requires some shared
>> storage.
>> In principle you can use shared storage exported from the same machine.
>> You can do that with NFS or iSCSI. NFS is considered somewhat riskier,
>> but somewhat easier to set up (at least if you have no experience with
>> iSCSI).
>> You can search the list archives with things like 'hosted-engine
>> all-in-one
>> same machine nfs iscsi'.
>>
>> Another option, depending on your needs, is lago + ovirt-system-tests.
>>
>> Best,
>>
>> >
>> > V.
>> >
>> > Sent from my iPhone
>> >
>> > On 14 Jul 2017, at 10:21, wc hung  wrote:
>> >
>> > Hi all,
>> >
>> > Could we install oVirt node and engine on the same machine? If we can do
>> > that, is there any guide to show the step?
>> >
>> > ___
>> > Users mailing list
>> > Users@ovirt.org
>> > http://lists.ovirt.org/mailman/listinfo/users
>> >
>> >
>>
>>
>>
>> --
>> Didi
>>
>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
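Didi's note above about exporting shared storage from the same machine can be sketched as a config fragment. This is illustrative only: the export path and subnet are assumptions, not from the thread. oVirt expects NFS storage to be readable and writable by uid/gid 36:36 (vdsm:kvm), so the directory should be chown'd to 36:36 before deploying.

```
# /etc/exports - illustrative; path and subnet are assumptions
/srv/hosted-engine-storage  192.168.1.0/24(rw,anonuid=36,anongid=36,all_squash)

# then apply and verify the export table:
#   exportfs -ra
#   showmount -e localhost
```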


Re: [ovirt-users] Installation oVirt node and engine on the same machine

2017-07-17 Thread wc hung
Thank you guys. That information is useful to me.

It seems we will configure VMs in the self-hosted engine.

As I am a newcomer to the oVirt system, I would like to ask another question.



Since I have already installed one oVirt engine on one physical machine and
one oVirt node on another physical machine: if I need to install the
self-hosted engine, do I need to delete the original oVirt engine, install
the self-hosted engine, build up some new hosts, and connect to the
original oVirt node?



Thank you all so much.

2017-07-16 13:59 GMT+08:00 Yedidyah Bar David :

> On Fri, Jul 14, 2017 at 7:40 PM, Vinícius Ferrão 
> wrote:
> > WC, here is one guide. The procedure is what Alexander said: self-hosted
> > engine.
> >
> > http://www.ovirt.org/documentation/self-hosted/Self-Hosted_Engine_Guide/
>
> Indeed.
>
> There is one important difference: hosted-engine requires some shared
> storage.
> In principle you can use shared storage exported from the same machine.
> You can do that with NFS or iSCSI. NFS is considered somewhat riskier,
> but somewhat easier to set up (at least if you have no experience with
> iSCSI).
> You can search the list archives with things like 'hosted-engine all-in-one
> same machine nfs iscsi'.
>
> Another option, depending on your needs, is lago + ovirt-system-tests.
>
> Best,
>
> >
> > V.
> >
> > Sent from my iPhone
> >
> > On 14 Jul 2017, at 10:21, wc hung  wrote:
> >
> > Hi all,
> >
> > Could we install oVirt node and engine on the same machine? If we can do
> > that, is there any guide to show the step?
> >
> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> >
> >
>
>
>
> --
> Didi
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Fwd: ISCSI storage with multiple nics on same subnet disabled on host activation

2017-07-17 Thread Yaniv Kaul
On Mon, Jul 17, 2017 at 10:56 AM, Nelson Lameiras <
nelson.lamei...@lyra-network.com> wrote:

> Hello, can anyone please help us with the problem described below?
>
> Nir, I'm including you since a quick search on the internet led me to
> think that you have worked on this part of the project. Please forgive me
> if I'm off topic.
>
> (I incorrectly used the expression "patch" below when I meant "configure".
> It's corrected now.)
>

VDSM may indeed change the IP filter. From the function that sets it[1]:

def setRpFilterIfNeeded(netIfaceName, hostname, loose_mode):
    """
    Set rp_filter to loose or strict mode if there's no session using the
    netIfaceName device and it's not the device used by the OS to reach the
    'hostname'.
    loose mode is needed to allow multiple iSCSI connections in a multiple
    NIC per subnet configuration. strict mode is needed to avoid the
    security breach where an untrusted VM can DoS the host by sending it
    packets with spoofed random sources.

    Arguments:
        netIfaceName: the device used by the iSCSI session
        target: iSCSI target object containing the portal hostname
        loose_mode: boolean
    """




I think it sets it to strict mode when disconnecting or removing an iSCSI
session.
Perhaps something in the check we are doing is incorrect? Do you have other
sessions open?
Y.

[1]
https://github.com/oVirt/vdsm/blob/321233bea649fb6d1e72baa1b1164c8c1bc852af/lib/vdsm/storage/iscsi.py#L556
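A rough reduction of the logic described above, for readers who don't want to dig through vdsm: rp_filter for an interface is a per-NIC sysctl key, and vdsm flips it between loose (2) and strict (1). The names below are illustrative, not vdsm's actual API.

```python
# Hypothetical simplification of vdsm's rp_filter handling; the function
# name and constants are illustrative, not part of vdsm.

RP_FILTER_LOOSE = "2"   # allow multiple iSCSI NICs on the same subnet
RP_FILTER_STRICT = "1"  # drop packets with spoofed source addresses

def rp_filter_sysctl(nic, loose_mode):
    """Return the sysctl (key, value) pair to apply for `nic`."""
    key = "net.ipv4.conf.%s.rp_filter" % nic
    value = RP_FILTER_LOOSE if loose_mode else RP_FILTER_STRICT
    return key, value

# While an iSCSI session uses the NIC, loose mode is wanted:
print(rp_filter_sysctl("p2p2", True))   # → ('net.ipv4.conf.p2p2.rp_filter', '2')
# When vdsm thinks no session needs it, strict mode is restored:
print(rp_filter_sysctl("p2p2", False))  # → ('net.ipv4.conf.p2p2.rp_filter', '1')
```

Nelson's symptom matches the second call firing at the wrong time: strict mode being restored on an interface that still needs loose mode.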


> cordialement, regards,
>
> 
> Nelson LAMEIRAS
> Ingénieur Systèmes et Réseaux / Systems and Networks engineer
> Tel: +33 5 32 09 09 70 <+33%205%2032%2009%2009%2070>
> nelson.lamei...@lyra-network.com
> www.lyra-network.com | www.payzen.eu 
> --
> Lyra Network, 109 rue de l'innovation, 31670 Labège, FRANCE
>
>
> --
> *From: *"Nelson Lameiras" 
> *To: *"ovirt users" 
> *Sent: *Wednesday, 7 June 2017 14:59:48
> *Subject: *[ovirt-users] ISCSI storage with multiple nics on same subnet
> disabled on host activation
>
> Hello,
>
> In our oVirt hosts, we are using a Dell EqualLogic SAN with each server
> connecting to the SAN via 2 physical interfaces. Since both interfaces
> share the same network (an EqualLogic limitation), we must configure
> sysctl to allow iSCSI multipath with multiple NICs in the same subnet:
>
> 
> 
>
> net.ipv4.conf.p2p1.arp_ignore=1
> net.ipv4.conf.p2p1.arp_announce=2
> net.ipv4.conf.p2p1.rp_filter=2
>
> net.ipv4.conf.p2p2.arp_ignore=1
> net.ipv4.conf.p2p2.arp_announce=2
> net.ipv4.conf.p2p2.rp_filter=2
>
> 
> 
>
> This works great in most setups, but for some strange reason, on some of
> our setups, the sysctl configuration is updated by VDSM when activating a
> host and the second interface stops working immediately:
> 
> 
> vdsm.log
>
> 2017-06-07 11:51:51,063+0200 INFO  (jsonrpc/5) [storage.ISCSI] Setting strict 
> mode rp_filter for device 'p2p2'. (iscsi:602)
> 2017-06-07 11:51:51,064+0200 ERROR (jsonrpc/5) [storage.HSM] Could not 
> connect to storageServer (hsm:2392)
> Traceback (most recent call last):
>   File "/usr/share/vdsm/storage/hsm.py", line 2389, in connectStorageServer
> conObj.connect()
>   File "/usr/share/vdsm/storage/storageServer.py", line 433, in connect
> iscsi.addIscsiNode(self._iface, self._target, self._cred)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/iscsi.py", line 232, in 
> addIscsiNode
> iscsiadm.node_login(iface.name, target.address, target.iqn)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/iscsiadm.py", line 337, 
> in node_login
> raise IscsiNodeError(rc, out, err)
>
>
>
> 
> 
>
> "strict mode" is enforced for second interface, and it no longuer works...
> Which means - at least - that there is no redundancy in case of hardware
> faillure and this is not acceptable for our production needs.
>
> What is really strange is that we have another "twin" site in another
> geographic region with similar hardware configuration and the same oVirt
> installation, and this problem does not happen.
> Can this really be random?
> What can be the root cause of this behaviour? How can I correct it?
>
> our setup:
> oVirt hostedEngine: CentOS 7.3, oVirt 4.1.2
> 3 physical oVirt nodes: CentOS 7.3, oVirt 4.1.2
> SAN: Dell EqualLogic
>
> cordialement, regards,
>
> 
> Nelson LAMEIRAS
> Ingénieur Systèmes et Réseaux / Systems and Networks engineer

Re: [ovirt-users] Hosted Engine/NFS Troubles

2017-07-17 Thread Luca 'remix_tj' Lorenzetto
On Mon, Jul 17, 2017 at 9:05 PM, Phillip Bailey  wrote:
> Hi,
>
> I'm having trouble with my hosted engine setup (v4.0) and could use some
> help. The problem I'm having is that whenever I try to add additional hosts
> to the setup via webadmin, the operation fails due to storage-related
> issues.
>
> webadmin shows the following error messages:
>
> "Host  cannot access the Storage Domain(s) hosted_storage
> attached to the Data Center Default. Setting Host state to Non-Operational.
> Failed to connect Host ovirt-node-1 to Storage Pool Default"
>

Hi Phillip,

your hosted-engine storage is on NFS, right? Did you check whether you can
mount it manually on each host?

Luca
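Luca's manual-mount check can be sketched as a couple of illustrative commands; the server name and export path below are assumptions, so substitute the ones from your hosted-engine storage domain:

```
# Illustrative only - replace server and path with your storage domain's
mkdir -p /mnt/he-test
mount -t nfs storage.example.com:/hosted_storage /mnt/he-test
ls -l /mnt/he-test        # should show the storage domain UUID directory
umount /mnt/he-test
```

If the mount fails only on the new host, compare its DNS resolution, firewall rules, and NFS client versions against a working host.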



-- 
"It is absurd to employ men of excellent intelligence to do calculations
that could be entrusted to anyone if machines were used"
Gottfried Wilhelm von Leibnitz, Philosopher and Mathematician (1646-1716)

"The Internet is the world's largest library. The problem is that all the
books are scattered on the floor"
John Allen Paulos, Mathematician (1945-present)

Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt-hosted-engine state transition messages

2017-07-17 Thread Jim Kusznir
OK, I've been ignoring this for a long time, as the logs were so verbose
and didn't show anything I could identify as usable debug info. Recently
one of my oVirt hosts (currently NOT running the main engine, but a
candidate) was cycling as much as 40 times a day between EngineUpBadHealth
and EngineUp. Here's the log snippet; I included some time before and after
in case that's helpful. In this case, I got an email about bad health at
8:15 and a restore (engine up) at 8:16. I see where the messages are sent,
but I don't see any explanation of why, or what the problem is.

BTW: 192.168.8.11 is this computer's physical IP; 192.168.8.12 is the
computer currently running the engine. Both are also hosting the Gluster
store (i.e., I have 3 hosts, all participating in the Gluster replica
2 + arbiter volume).

I'd appreciate it if someone could shed some light on why this keeps
happening!

--Jim


MainThread::INFO::2017-07-17
08:12:06,230::config::485::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_vm_conf)
Reloading vm.conf from the shared storage domain
MainThread::INFO::2017-07-17
08:12:06,230::config::412::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(_get_vm_conf_content_from_ovf_store)
Trying to get a fresher copy of vm configuration from the OVF_STORE
MainThread::INFO::2017-07-17
08:12:08,877::ovf_store::103::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(scan)
Found OVF_STORE: imgUUID:e10c90a5-4d9c-4e18-b6f7-ae8f0cdf4f57,
volUUID:a9754d40-eda1-44d7-ac92-76a228f9f1ac
MainThread::INFO::2017-07-17
08:12:09,432::ovf_store::103::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(scan)
Found OVF_STORE: imgUUID:f22829ab-9fd5-415a-9a8f-809d3f7887d4,
volUUID:9f4760ee-119c-412a-a1e8-49e73e6ba929
MainThread::INFO::2017-07-17
08:12:09,925::ovf_store::112::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF)
Extracting Engine VM OVF from the OVF_STORE
MainThread::INFO::2017-07-17
08:12:10,324::ovf_store::119::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF)
OVF_STORE volume path: /rhev/data-center/mnt/glusterSD/192.168.8.11:
_engine/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/images/f22829ab-9fd5-415a-9a8f-809d3f7887d4/9f4760ee-119c-412a-a1e8-49e73e6ba929
MainThread::INFO::2017-07-17
08:12:10,696::config::431::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(_get_vm_conf_content_from_ovf_store)
Found an OVF for HE VM, trying to convert
MainThread::INFO::2017-07-17
08:12:10,704::config::436::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(_get_vm_conf_content_from_ovf_store)
Got vm.conf from OVF_STORE
MainThread::INFO::2017-07-17
08:12:10,705::states::426::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
Engine vm running on localhost
MainThread::INFO::2017-07-17
08:12:10,714::hosted_engine::604::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm)
Initializing VDSM
MainThread::INFO::2017-07-17
08:12:14,426::hosted_engine::630::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images)
Connecting the storage
MainThread::INFO::2017-07-17
08:12:14,470::storage_server::219::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server)
Connecting storage server
MainThread::INFO::2017-07-17
08:12:19,648::storage_server::226::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server)
Connecting storage server
MainThread::INFO::2017-07-17
08:12:19,900::storage_server::233::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server)
Refreshing the storage domain
MainThread::INFO::2017-07-17
08:12:20,298::hosted_engine::657::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images)
Preparing images
MainThread::INFO::2017-07-17
08:12:20,298::image::126::ovirt_hosted_engine_ha.lib.image.Image::(prepare_images)
Preparing images
MainThread::INFO::2017-07-17
08:12:24,051::hosted_engine::660::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images)
Refreshing vm.conf
MainThread::INFO::2017-07-17
08:12:24,051::config::485::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_vm_conf)
Reloading vm.conf from the shared storage domain
MainThread::INFO::2017-07-17
08:12:24,052::config::412::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(_get_vm_conf_content_from_ovf_store)
Trying to get a fresher copy of vm configuration from the OVF_STORE
MainThread::INFO::2017-07-17
08:12:26,895::ovf_store::103::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(scan)
Found OVF_STORE: imgUUID:e10c90a5-4d9c-4e18-b6f7-ae8f0cdf4f57,
volUUID:a9754d40-eda1-44d7-ac92-76a228f9f1ac
MainThread::INFO::2017-07-17
08:12:27,429::ovf_store::103::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(scan)
Found OVF_STORE: imgUUID:f22829ab-9fd5-415a-9a8f-809d3f7887d4,
volUUID:9f4760ee-119c-412a-a1e8-49e73e6ba929
MainThread::

Re: [ovirt-users] Active Directory authentication setup

2017-07-17 Thread Todd Punderson
Sorry to reply to myself, but I figured it out. Putting this here for
documentation in case anyone ever runs into this, as it was absolutely
horrible to troubleshoot.


I had this set:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\CertSvc\Configuration\IssuingCA\CSP\AlternateSignatureAlgorithm = 1
(I think that's the default.) That caused the CA to issue certs with the
RSASSA-PSS (1.2.840.113549.1.1.10) algorithm on them instead of sha256RSA.
So I changed that registry value to 0, updated my CAPolicy.inf file, and
reissued my Root and Sub CA certs. Then I refreshed the DC certs, loaded
the new Root/Sub CAs into CentOS, and it started working.


I actually figured it out from a bug report for Firefox here: 
https://support.mozilla.org/en-US/questions/986085


Either way it's working now. That drove me nuts for 2+ days.


Thank you anyway for your assistance!
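Todd's root cause can be spotted quickly from `openssl x509 -text -noout` output. A small sketch (the helper names are illustrative) that flags an RSASSA-PSS-signed certificate, which the moznss stack in this thread rejects:

```python
def signature_algorithm(openssl_text):
    """Return the first 'Signature Algorithm' value from
    `openssl x509 -text -noout` output, or None if absent."""
    for line in openssl_text.splitlines():
        line = line.strip()
        if line.startswith("Signature Algorithm:"):
            return line.split(":", 1)[1].strip()
    return None

def uses_rsassa_pss(openssl_text):
    """True if the cert is signed with RSASSA-PSS (OID 1.2.840.113549.1.1.10)."""
    alg = signature_algorithm(openssl_text)
    return alg is not None and alg.lower() == "rsassapss"

# The CA cert shown below in this thread reports exactly this:
sample = "    Signature Algorithm: rsassaPss\n     Hash Algorithm: sha256"
print(uses_rsassa_pss(sample))  # → True, i.e. reissue with sha256RSA
```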


From: users-boun...@ovirt.org  on behalf of Todd 
Punderson 
Sent: Monday, July 17, 2017 9:05:12 AM
To: Ondra Machacek
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Active Directory authentication setup


Hi,

Agreed on the certificate issue, I fought with it all weekend! Here's the
output of those commands:


ldap_url_parse_ext(ldaps://DC3.home.doonga.org)
ldap_create
ldap_url_parse_ext(ldaps://DC3.home.doonga.org:636/??base)
ldap_sasl_bind
ldap_send_initial_request
ldap_new_connection 1 1 0
ldap_int_open_connection
ldap_connect_to_host: TCP DC3.home.doonga.org:636
ldap_new_socket: 3
ldap_prepare_socket: 3
ldap_connect_to_host: Trying 172.16.10.4:636
ldap_pvt_connect: fd: 3 tm: -1 async: 0
attempting to connect:
connect success
TLS: certdb config: configDir='/etc/openldap/certs' tokenDescription='ldap(0)' 
certPrefix='' keyPrefix='' flags=readOnly
TLS: using moznss security dir /etc/openldap/certs prefix .
TLS: certificate [(null)] is not valid - error -8182:Peer's certificate has an 
invalid signature..
TLS: error: connect - force handshake failure: errno 21 - moznss error -8174
TLS: can't connect: TLS error -8174:security library: bad database..
ldap_err2string
ldap_sasl_bind(SIMPLE): Can't contact LDAP server (-1)

I tried digging into this one. I'm very sure the peer doesn't have an
invalid signature; I tested the certificate chain with openssl
successfully. I'm guessing that error is related to the "bad database",
but I couldn't quite figure out that part of the error.


I have an offline root and an online issuing CA; here are those certs. I
loaded both of them into the system CA trust.


[root@ovirt-engine ~]#  openssl x509 -in /root/root.pem -text -noout
Certificate:
Data:
Version: 3 (0x2)
Serial Number:
1a:01:7c:fc:bf:77:9c:95:4e:13:7d:bf:36:a8:be:5b
Signature Algorithm: rsassaPss
 Hash Algorithm: sha256
 Mask Algorithm: mgf1 with sha256
 Salt Length: 20
 Trailer Field: 0xbc (default)
Issuer: CN=Doonga.Org Root CA
Validity
Not Before: Jul 13 01:15:39 2017 GMT
Not After : Jul 13 01:25:39 2037 GMT
Subject: CN=Doonga.Org Root CA
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (2048 bit)
Modulus:
00:ac:ad:1e:3a:9c:08:76:7f:eb:83:ea:d9:f6:4b:
d3:4b:88:45:bb:50:b1:3b:a6:b9:a0:22:d4:94:a5:
b4:6a:32:39:cd:3b:5e:83:c1:1e:de:cb:0e:da:73:
e2:3a:df:f0:97:a2:72:b1:35:cf:bd:a3:a7:e5:dc:
67:ac:38:82:e8:a2:31:21:ab:cf:19:6d:a5:7d:44:
5e:f3:dd:76:d1:02:8b:cf:3b:25:ce:c0:7a:4b:0d:
ae:bb:d5:02:06:8b:0b:33:75:5a:81:1b:c1:53:52:
45:44:65:49:35:08:d7:0c:35:15:bf:6b:1e:82:49:
d2:de:ce:4b:0b:1b:6c:02:97:af:86:0c:ce:78:6f:
4f:dd:fe:9e:13:e7:43:94:53:df:76:91:8a:df:88:
4c:0b:0e:a6:6b:ef:7a:2f:ff:cc:ad:a5:36:fd:8f:
ad:44:e5:93:b3:4b:cb:43:c9:28:9d:21:86:7c:c5:
72:91:0b:a8:d5:36:f2:14:bf:df:58:27:a9:4b:04:
de:f1:89:aa:c0:27:ba:81:c9:0c:08:f7:08:f9:f3:
05:d1:d7:26:45:80:9c:d6:da:98:0c:d9:b8:44:e2:
aa:4f:32:2d:7b:5f:1a:14:ac:34:52:76:20:2d:cb:
6d:8e:d5:87:80:b2:d4:2f:0f:77:13:51:92:bb:f3:
07:75
Exponent: 65537 (0x10001)
X509v3 extensions:
X509v3 Key Usage:
Digital Signature, Certificate Sign, CRL Sign
X509v3 Basic Constraints: critical
CA:TRUE
X509v3 Subject Key Identifier:
72:21:77:3F:D7:2A:F9:87:BA:19:F5:32:50:B2:9E:F4:21:B9:8B:07
1.3.6.1.4.1.311.21.1:
...
X509v3 Certificate Policies:
Policy: 1.3.6.1.4.1.37476.9000.53
  User Notice:
Explicit Text:
  CPS: http://www.doo

[ovirt-users] Fwd: ISCSI storage with multiple nics on same subnet disabled on host activation

2017-07-17 Thread Nelson Lameiras
Hello, can anyone please help us with the problem described below?

Nir, I'm including you since a quick search on the internet led me to think
that you have worked on this part of the project. Please forgive me if I'm
off topic.

(I incorrectly used the expression "patch" below when I meant "configure".
It's corrected now.)

cordialement, regards, 


Nelson LAMEIRAS 
Ingénieur Systèmes et Réseaux / Systems and Networks engineer 
Tel: +33 5 32 09 09 70 
nelson.lamei...@lyra-network.com 

www.lyra-network.com | www.payzen.eu 





Lyra Network, 109 rue de l'innovation, 31670 Labège, FRANCE 



From: "Nelson Lameiras" 
To: "ovirt users" 
Sent: Wednesday, 7 June 2017 14:59:48
Subject: [ovirt-users] ISCSI storage with multiple nics on same subnet disabled
on host activation

Hello, 

In our oVirt hosts, we are using a Dell EqualLogic SAN with each server
connecting to the SAN via 2 physical interfaces. Since both interfaces
share the same network (an EqualLogic limitation), we must configure sysctl
to allow iSCSI multipath with multiple NICs in the same subnet:


 
net.ipv4.conf.p2p1.arp_ignore=1
net.ipv4.conf.p2p1.arp_announce=2
net.ipv4.conf.p2p1.rp_filter=2

net.ipv4.conf.p2p2.arp_ignore=1
net.ipv4.conf.p2p2.arp_announce=2
net.ipv4.conf.p2p2.rp_filter=2 




 



This works great in most setups, but for some strange reason, on some of
our setups, the sysctl configuration is updated by VDSM when activating a
host and the second interface stops working immediately:

 
vdsm.log 
2017-06-07 11:51:51,063+0200 INFO  (jsonrpc/5) [storage.ISCSI] Setting strict 
mode rp_filter for device 'p2p2'. (iscsi:602)
2017-06-07 11:51:51,064+0200 ERROR (jsonrpc/5) [storage.HSM] Could not connect 
to storageServer (hsm:2392)
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/hsm.py", line 2389, in connectStorageServer
conObj.connect()
  File "/usr/share/vdsm/storage/storageServer.py", line 433, in connect
iscsi.addIscsiNode(self._iface, self._target, self._cred)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/iscsi.py", line 232, in 
addIscsiNode
iscsiadm.node_login(iface.name, target.address, target.iqn)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/iscsiadm.py", line 337, 
in node_login
raise IscsiNodeError(rc, out, err) 





 

"strict mode" is enforced for the second interface, and it no longer
works... which means - at least - that there is no redundancy in case of
hardware failure, and this is not acceptable for our production needs.

What is really strange is that we have another "twin" site in another
geographic region with similar hardware configuration and the same oVirt
installation, and this problem does not happen.
Can this really be random?
What can be the root cause of this behaviour? How can I correct it?

our setup:
oVirt hostedEngine: CentOS 7.3, oVirt 4.1.2
3 physical oVirt nodes: CentOS 7.3, oVirt 4.1.2
SAN: Dell EqualLogic

cordialement, regards, 


Nelson LAMEIRAS 
Ingénieur Systèmes et Réseaux / Systems and Networks engineer 
Tel: +33 5 32 09 09 70 
nelson.lamei...@lyra-network.com 

www.lyra-network.com | www.payzen.eu 





Lyra Network, 109 rue de l'innovation, 31670 Labège, FRANCE 


___ 
Users mailing list 
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users 


Re: [ovirt-users] Active Directory authentication setup

2017-07-17 Thread Todd Punderson
Hi,

Agreed on the certificate issue, I fought with it all weekend! Here's the
output of those commands:


ldap_url_parse_ext(ldaps://DC3.home.doonga.org)
ldap_create
ldap_url_parse_ext(ldaps://DC3.home.doonga.org:636/??base)
ldap_sasl_bind
ldap_send_initial_request
ldap_new_connection 1 1 0
ldap_int_open_connection
ldap_connect_to_host: TCP DC3.home.doonga.org:636
ldap_new_socket: 3
ldap_prepare_socket: 3
ldap_connect_to_host: Trying 172.16.10.4:636
ldap_pvt_connect: fd: 3 tm: -1 async: 0
attempting to connect:
connect success
TLS: certdb config: configDir='/etc/openldap/certs' tokenDescription='ldap(0)' 
certPrefix='' keyPrefix='' flags=readOnly
TLS: using moznss security dir /etc/openldap/certs prefix .
TLS: certificate [(null)] is not valid - error -8182:Peer's certificate has an 
invalid signature..
TLS: error: connect - force handshake failure: errno 21 - moznss error -8174
TLS: can't connect: TLS error -8174:security library: bad database..
ldap_err2string
ldap_sasl_bind(SIMPLE): Can't contact LDAP server (-1)

I tried digging into this one. I'm very sure the peer doesn't have an
invalid signature; I tested the certificate chain with openssl
successfully. I'm guessing that error is related to the "bad database",
but I couldn't quite figure out that part of the error.


I have an offline root and an online issuing CA; here are those certs. I
loaded both of them into the system CA trust.


[root@ovirt-engine ~]#  openssl x509 -in /root/root.pem -text -noout
Certificate:
Data:
Version: 3 (0x2)
Serial Number:
1a:01:7c:fc:bf:77:9c:95:4e:13:7d:bf:36:a8:be:5b
Signature Algorithm: rsassaPss
 Hash Algorithm: sha256
 Mask Algorithm: mgf1 with sha256
 Salt Length: 20
 Trailer Field: 0xbc (default)
Issuer: CN=Doonga.Org Root CA
Validity
Not Before: Jul 13 01:15:39 2017 GMT
Not After : Jul 13 01:25:39 2037 GMT
Subject: CN=Doonga.Org Root CA
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (2048 bit)
Modulus:
00:ac:ad:1e:3a:9c:08:76:7f:eb:83:ea:d9:f6:4b:
d3:4b:88:45:bb:50:b1:3b:a6:b9:a0:22:d4:94:a5:
b4:6a:32:39:cd:3b:5e:83:c1:1e:de:cb:0e:da:73:
e2:3a:df:f0:97:a2:72:b1:35:cf:bd:a3:a7:e5:dc:
67:ac:38:82:e8:a2:31:21:ab:cf:19:6d:a5:7d:44:
5e:f3:dd:76:d1:02:8b:cf:3b:25:ce:c0:7a:4b:0d:
ae:bb:d5:02:06:8b:0b:33:75:5a:81:1b:c1:53:52:
45:44:65:49:35:08:d7:0c:35:15:bf:6b:1e:82:49:
d2:de:ce:4b:0b:1b:6c:02:97:af:86:0c:ce:78:6f:
4f:dd:fe:9e:13:e7:43:94:53:df:76:91:8a:df:88:
4c:0b:0e:a6:6b:ef:7a:2f:ff:cc:ad:a5:36:fd:8f:
ad:44:e5:93:b3:4b:cb:43:c9:28:9d:21:86:7c:c5:
72:91:0b:a8:d5:36:f2:14:bf:df:58:27:a9:4b:04:
de:f1:89:aa:c0:27:ba:81:c9:0c:08:f7:08:f9:f3:
05:d1:d7:26:45:80:9c:d6:da:98:0c:d9:b8:44:e2:
aa:4f:32:2d:7b:5f:1a:14:ac:34:52:76:20:2d:cb:
6d:8e:d5:87:80:b2:d4:2f:0f:77:13:51:92:bb:f3:
07:75
Exponent: 65537 (0x10001)
X509v3 extensions:
X509v3 Key Usage:
Digital Signature, Certificate Sign, CRL Sign
X509v3 Basic Constraints: critical
CA:TRUE
X509v3 Subject Key Identifier:
72:21:77:3F:D7:2A:F9:87:BA:19:F5:32:50:B2:9E:F4:21:B9:8B:07
1.3.6.1.4.1.311.21.1:
...
X509v3 Certificate Policies:
Policy: 1.3.6.1.4.1.37476.9000.53
  User Notice:
Explicit Text:
  CPS: http://www.doonga.org/pki/cps.txt

Signature Algorithm: rsassaPss
 Hash Algorithm: sha256
 Mask Algorithm: mgf1 with sha256
 Salt Length: 20
 Trailer Field: 0xbc (default)

 56:06:7e:bb:f4:c1:29:a1:05:27:8b:66:e0:23:17:56:ac:de:
 4c:65:0d:1e:97:d4:c6:71:75:a8:79:80:dd:b7:b7:08:b2:12:
 af:d7:cb:c9:99:80:7b:47:02:9e:6c:fc:83:5e:ae:4d:46:ce:
 3b:3c:f4:fe:e6:4c:66:d7:6d:2e:de:6a:31:0f:fb:ef:2b:d4:
 5a:3c:3c:a9:1e:c1:39:a4:0f:3d:9b:23:5c:94:16:9a:6f:9b:
 e0:01:33:49:f8:d3:f1:b5:9c:33:f4:23:ca:88:94:5d:bd:65:
 94:55:ad:90:72:57:78:8e:88:bc:40:81:ff:68:d3:5f:63:48:
 ae:d9:96:b4:44:b0:ed:51:e2:01:36:ad:97:2c:64:a0:17:5e:
 c5:47:e1:2f:60:f5:5a:fd:09:21:08:be:1d:6b:5a:71:d4:25:
 ea:e1:2b:1a:95:2e:aa:03:a8:91:7f:cf:11:6d:3b:d7:ff:4b:
 87:68:14:93:81:bc:64:20:14:3e:f7:99:c5:5d:fc:b9:3a:b4:
 e9:78:2a:1c:35:22:86:5c:13:c6:1a:75:c2:41:54:45:7d:31:
 4f:f5:a2:0f:c6:de:8f:bf:a6:ea:b9:a0:f6:b2:1c:bf:2f:84:
 ee:69:76:cd:b7:34:2c:dd:f9:2d:02:62:4a:0f:8b:1e:42:11:
 f8:98:ae:07

[roo

Re: [ovirt-users] oVIRT 4.1 / iSCSI Multipathing

2017-07-17 Thread Devin Acosta
V.,

I am still troubleshooting the issue; I haven't found any resolution at
this point. I need to figure it out by this Friday, otherwise I need to
look at Xen or another solution. iSCSI and oVirt seem problematic.


--

Devin Acosta
Red Hat Certified Architect, LinuxStack

On July 16, 2017 at 11:53:59 PM, Vinícius Ferrão (fer...@if.ufrj.br) wrote:

Have you found any solution for this problem?

I'm using a FreeNAS machine to serve iSCSI, but I have exactly the same
problem. I've reinstalled oVirt at least 3 times over the weekend trying to
solve the issue.

At this moment my iSCSI Multipath tab is just inconsistent. I can't see
both VLANs under "Logical networks", and only one target shows up under
Storage Targets.

When I was able to find two targets, everything went down and I needed to
reboot the host and the Hosted Engine to bring oVirt back.

V.

On 11 Jul 2017, at 19:29, Devin Acosta  wrote:


I am using the latest release of oVirt 4.1.3, and I am connecting a Dell
Compellent SAN that has 2 fault domains, each on a separate VLAN, that I
have attached to oVirt. From what I understand, I am supposed to go into
the "iSCSI Multipathing" option and add a bond of the iSCSI interfaces. I
have done this, selecting the 2 logical networks together for iSCSI. I
notice that there is an option below to select Storage Targets, but if I
select the storage targets below along with the logical networks, the
cluster goes crazy and appears to be mad. Storage, nodes, and everything go
offline, even though I also have NFS attached to the cluster.

How should this best be configured? What we notice is that when the server
reboots, it seems to log into the SAN correctly, but according to the Dell
SAN it is only logged into one controller, so it only pulls both fault
domains from a single controller.

Please Advise.

Devin

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Engine HA-Issues

2017-07-17 Thread Sven Achtelik
Hi Kasturi,

thank you for pointing me in the right direction. It turned out that I had
removed an old DNS server. My nodes were set to use that old DNS server and
thus lost the ability to resolve the name.

Thanks,
Sven

From: Kasturi Narra [mailto:kna...@redhat.com]
Sent: Monday, 17 July 2017 08:10
To: Sven Achtelik 
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Engine HA-Issues

Hi,

  Can you please check the following. These could be some of the reasons
why the HE VM restarts every minute.


Check the error and the engine health state. If it's related to the
liveliness check, then this is most likely an issue connecting to the
engine.

- Check if engine FQDN is reachable from all hosts

-  curl -v http:///ovirt-engine/services/health - does this return OK?

- Access the HE console and check if ovirt-engine is running.

- Check /var/log/ovirt-engine/server.log or /var/log/ovirt-engine/engine.log if 
there are errors starting ovirt-engine



Thanks

kasturi


On Fri, Jul 14, 2017 at 10:28 PM, Sven Achtelik 
mailto:sven.achte...@eps.aero>> wrote:
Hi All,

after running solid for several months, my ovirt-engine started rebooting
on several hosts. I've looked at the hosted-engine --vm-status output and
it shows that the engine is up on one host but not reachable. At the same
time I can access the GUI and everything is working fine. After some time
the engine shuts down and all hosts try to start the engine until one is
the winner, at least that's what it looks like. Any clues where to look to
find the issue with the liveliness check?



--== Host 1 status ==--

conf_on_shared_storage : True
Status up-to-date  : True
Hostname   : ovirt-node01
Host ID: 1
Engine status  : {"reason": "vm not running on this host", 
"health": "bad", "vm": "down", "detail": "unknown"}
Score  : 3400
stopped: False
Local maintenance  : False
crc32  : 3eb33843
local_conf_timestamp   : 17128
Host timestamp : 17113
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=17113 (Fri Jul 14 11:50:23 2017)
host-id=1
score=3400
vm_conf_refresh_time=17128 (Fri Jul 14 11:50:38 2017)
conf_on_shared_storage=True
maintenance=False
state=EngineDown
stopped=False


--== Host 2 status ==--

conf_on_shared_storage : True
Status up-to-date  : True
Hostname   : ovirt-node02.mgmt.lan
Host ID: 2
Engine status  : {"reason": "failed liveliness check", 
"health": "bad", "vm": "up", "detail": "up"}
Score  : 3400
stopped: False
Local maintenance  : False
crc32  : 2a8c86cc
local_conf_timestamp   : 523182
Host timestamp : 523167
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=523167 (Fri Jul 14 11:50:25 2017)
host-id=2
score=3400
vm_conf_refresh_time=523182 (Fri Jul 14 11:50:40 2017)
conf_on_shared_storage=True
maintenance=False
state=EngineStarting
stopped=False


--== Host 3 status ==--

conf_on_shared_storage : True
Status up-to-date  : True
Hostname   : ovirt-node03.mgmt.lan
Host ID: 3
Engine status  : {"reason": "vm not running on this host", 
"health": "bad", "vm": "down", "detail": "unknown"}
Score  : 3400
stopped: False
Local maintenance  : False
crc32  : f8490d79
local_conf_timestamp   : 527698
Host timestamp : 527683
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=527683 (Fri Jul 14 11:50:33 2017)
host-id=3
score=3400
vm_conf_refresh_time=527698 (Fri Jul 14 11:50:47 2017)
conf_on_shared_storage=True
maintenance=False
state=EngineDown
stopped=False
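
The "Engine status" values above are plain JSON, and the Host 2 combination
(vm "up", health "bad", "failed liveliness check") is the signature of the HA
agent being unable to fetch the engine health page even though the VM itself
is running, which matches the checks at the top of this reply. A
self-contained sketch (python3 assumed available, as it is on oVirt nodes):

```shell
# Offline sketch: parse the Host 2 "Engine status" JSON, copied verbatim
# from the report above, to pull out the fields that matter here.
status='{"reason": "failed liveliness check", "health": "bad", "vm": "up", "detail": "up"}'
echo "$status" | python3 -c '
import json, sys
d = json.load(sys.stdin)
# vm "up" combined with health "bad" is the liveliness-check failure signature
print("vm=%s health=%s reason=%s" % (d["vm"], d["health"], d["reason"]))'
```

This prints vm=up health=bad reason=failed liveliness check, i.e. qemu is
running the engine VM, but the health servlet is not answering from that host.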

--
Thank you,
Sven

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Active Directory authentication setup

2017-07-17 Thread Ondra Machacek
This is most probably a certificate issue.

Can you please share the output of the following command:

 $ ldapsearch -d 1 -H ldaps://DC3.home.doonga.org -x -s base -b ''

And also the output of the following command:

 $ openssl x509 -in /path/to/your/active_directory_ca.pem -text -noout

Are you sure you added a proper CA cert to your system?
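
For context, NSS error -8157 is SEC_ERROR_EXTENSION_NOT_FOUND, i.e. a
certificate in the chain is missing an extension NSS looks for. A quick way
to see exactly what the DC presents on the LDAPS port (DC hostname taken from
this thread, substitute your own; this is plain openssl usage, not an
oVirt-specific command):

```shell
# Dump the certificate the DC presents on port 636 and list its subject,
# issuer, and X509v3 extensions (DC3.home.doonga.org is from the thread;
# replace it with your own domain controller).
openssl s_client -connect DC3.home.doonga.org:636 -showcerts </dev/null 2>/dev/null \
    | openssl x509 -noout -subject -issuer -text \
    | grep -E 'Subject:|Issuer:|X509v3'
```

Comparing the Issuer line here with the subject of the CA PEM you configured
shows quickly whether the right CA was added.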


On Sun, Jul 16, 2017 at 1:04 AM, Todd Punderson  wrote:
> Hi,
>
>I’ve been pulling my hair out over this one. Here’s the
> output of ovirt-engine-extension-aaa-ldap-setup. Everything works fine if I
> use “plain” but I don’t really want to do that. I searched the error that’s
> shown below and tried several different “fixes” but none of them helped.
> These are Server 2016 DCs. Not too sure where to go next.
>
>
>
> [ INFO  ] Stage: Initializing
>
> [ INFO  ] Stage: Environment setup
>
>   Configuration files:
> ['/etc/ovirt-engine-extension-aaa-ldap-setup.conf.d/10-packaging.conf']
>
>   Log file:
> /tmp/ovirt-engine-extension-aaa-ldap-setup-20170715170953-wfo1pk.log
>
>   Version: otopi-1.6.2 (otopi-1.6.2-1.el7.centos)
>
> [ INFO  ] Stage: Environment packages setup
>
> [ INFO  ] Stage: Programs detection
>
> [ INFO  ] Stage: Environment customization
>
>   Welcome to LDAP extension configuration program
>
>   Available LDAP implementations:
>
>1 - 389ds
>
>2 - 389ds RFC-2307 Schema
>
>3 - Active Directory
>
>4 - IBM Security Directory Server
>
>5 - IBM Security Directory Server RFC-2307 Schema
>
>6 - IPA
>
>7 - Novell eDirectory RFC-2307 Schema
>
>8 - OpenLDAP RFC-2307 Schema
>
>9 - OpenLDAP Standard Schema
>
>   10 - Oracle Unified Directory RFC-2307 Schema
>
>   11 - RFC-2307 Schema (Generic)
>
>   12 - RHDS
>
>   13 - RHDS RFC-2307 Schema
>
>   14 - iPlanet
>
>   Please select: 3
>
>   Please enter Active Directory Forest name: home.doonga.org
>
> [ INFO  ] Resolving Global Catalog SRV record for home.doonga.org
>
> [ INFO  ] Resolving LDAP SRV record for home.doonga.org
>
>   NOTE:
>
>   It is highly recommended to use secure protocol to access the LDAP
> server.
>
>   Protocol startTLS is the standard recommended method to do so.
>
>   Only in cases in which the startTLS is not supported, fallback to
> non standard ldaps protocol.
>
>   Use plain for test environments only.
>
>   Please select protocol to use (startTLS, ldaps, plain) [startTLS]:
> ldaps
>
>   Please select method to obtain PEM encoded CA certificate (File,
> URL, Inline, System, Insecure): System
>
> [ INFO  ] Resolving SRV record 'home.doonga.org'
>
> [ INFO  ] Connecting to LDAP using 'ldaps://DC1.home.doonga.org:636'
>
> [WARNING] Cannot connect using 'ldaps://DC1.home.doonga.org:636': {'info':
> 'TLS error -8157:Certificate extension not found.', 'desc': "Can't contact
> LDAP server"}
>
> [ INFO  ] Connecting to LDAP using 'ldaps://DC2.home.doonga.org:636'
>
> [WARNING] Cannot connect using 'ldaps://DC2.home.doonga.org:636': {'info':
> 'TLS error -8157:Certificate extension not found.', 'desc': "Can't contact
> LDAP server"}
>
> [ INFO  ] Connecting to LDAP using 'ldaps://DC3.home.doonga.org:636'
>
> [WARNING] Cannot connect using 'ldaps://DC3.home.doonga.org:636': {'info':
> 'TLS error -8157:Certificate extension not found.', 'desc': "Can't contact
> LDAP server"}
>
> [ ERROR ] Cannot connect using any of available options
>
>
>
> Also:
>
> 2017-07-15 18:18:06 INFO
> otopi.plugins.ovirt_engine_extension_aaa_ldap.ldap.common
> common._connectLDAP:391 Connecting to LDAP using
> 'ldap://DC2.home.doonga.org:389'
>
> 2017-07-15 18:18:06 INFO
> otopi.plugins.ovirt_engine_extension_aaa_ldap.ldap.common
> common._connectLDAP:442 Executing startTLS
>
> 2017-07-15 18:18:06 DEBUG
> otopi.plugins.ovirt_engine_extension_aaa_ldap.ldap.common
> common._connectLDAP:459 Exception
>
> Traceback (most recent call last):
>
>   File
> "/usr/share/ovirt-engine-extension-aaa-ldap/setup/bin/../plugins/ovirt-engine-extension-aaa-ldap/ldap/common.py",
> line 443, in _connectLDAP
>
> c.start_tls_s()
>
>   File "/usr/lib64/python2.7/site-packages/ldap/ldapobject.py", line 564, in
> start_tls_s
>
> return self._ldap_call(self._l.start_tls_s)
>
>   File "/usr/lib64/python2.7/site-packages/ldap/ldapobject.py", line 99, in
> _ldap_call
>
> result = func(*args,**kwargs)
>
> CONNECT_ERROR: {'info': 'TLS error -8157:Certificate extension not found.',
> 'desc': 'Connect error'}
>
> 2017-07-15 18:18:06 WARNING
> otopi.plugins.ovirt_engine_extension_aaa_ldap.ldap.common
> common._connectLDAP:463 Cannot connect using
> 'ldap://DC2.home.doonga.org:389': {'info': 'TLS error -8157:Certificate
> extension not found.', 'desc': 'Connect error'}
>
> 2017-07-15 18:18:06 INFO
> otopi.plugins.ovirt_engine_extension_aaa_ldap.ldap.common
> common._connectLDAP:391 Conn

Re: [ovirt-users] oVIRT 4.1 / iSCSI Multipathing

2017-07-17 Thread Vinícius Ferrão
Have you found any solution for this problem?

I’m using a FreeNAS machine to serve iSCSI and I have exactly the same problem.
I’ve reinstalled oVirt at least 3 times over the weekend trying to solve the
issue.

At this moment my iSCSI Multipathing tab is just inconsistent. I can’t see both
VLANs under “Logical Networks”, and only one target shows up under “Storage Targets”.

When I was able to find two targets, everything went down and I needed to
reboot the host and the Hosted Engine to recover oVirt.

V.

On 11 Jul 2017, at 19:29, Devin Acosta <de...@pabstatencio.com> wrote:


I am using the latest release of oVirt, 4.1.3, and I am connecting a Dell
Compellent SAN that has 2 fault domains, each on a separate VLAN, which I have
attached to oVirt. From what I understand I am supposed to go into the “iSCSI
Multipathing” option and add a bond of the iSCSI interfaces. I have done this,
selecting the 2 logical networks together for iSCSI. I notice that there is an
option below to select Storage Targets, but if I select the storage targets
together with the logical networks, the cluster goes crazy and appears to be
mad. Storage, nodes, and everything go offline, even though I also have NFS
attached to the cluster.

How should this best be configured? What we notice is that when the server
reboots it seems to log into the SAN correctly, but according to the Dell SAN
it is only logged into one controller, so it pulls both fault domains from a
single controller.
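
One way to confirm the single-controller suspicion from the host side is to
list the unique portal IPs the initiator is actually logged into. The sketch
below stubs iscsiadm with made-up sample output so the pipeline runs anywhere;
on a real host remove the stub and run the actual iscsiadm command (standard
open-iscsi tooling, not oVirt-specific).

```shell
# Illustrative only: the function stubs iscsiadm with fabricated sample
# output; on a real host, delete it and use `iscsiadm -m session -P 1`.
iscsiadm() { cat <<'EOF'
Target: iqn.2002-03.com.compellent:5000d310005ec200 (non-flash)
        Current Portal: 10.10.1.10:3260,0
        Current Portal: 10.10.2.10:3260,0
EOF
}
# Extract the unique portal IPs from the active sessions.
iscsiadm -m session -P 1 | grep 'Current Portal' | awk '{print $3}' | cut -d: -f1 | sort -u
```

Two IPs on distinct subnets means both fault domains are in use; a single
subnet matches the one-controller login the Dell SAN reports.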

Please Advise.

Devin

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
