[ovirt-users] Uploading disk images (oVirt 4.3.3.1)

2019-05-31 Thread adrianquintero
Hi,
I have an issue while trying to upload disk images through the Web UI:
"Connection to ovirt-imageio-proxy service has failed. Make sure the service is 
installed, configured, and ovirt-engine certificate is registered as a valid CA 
in the browser."

My oVirt engine's FQDN is ovirt-engine.mydomain.com; however, due to network 
restrictions I had to set up rules in order to reach our ovirt-engine:
ovirt-engine.mydomain.com = 192.168.0.45

For example: ovirt-engine.mydomain.otherstuff.com - 192.168.10.109:80, 
192.168.10.109:443, 192.168.0.45:80
So, as you can see, I need to hit the ovirt-engine using 
ovirt-engine.mydomain.otherstuff.com, which I am able to do by modifying the 
11-setup-sso.conf file and adding 
SSO_ENGINE_URL="https://ovirt-engine.mydomain.otherstuff.com:443/ovirt-engine/"

I am able to upload disk images when using http://ovirt-engine.mydomain.com, 
but not when using http://ovirt-engine.mydomain.otherstuff.com.
I know it might be related to the certificates, but I need to be able to upload 
disk images using both URLs.

any ideas?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QSQKTZ6YCD7XBEUI6FZGVEQY2XKGZ6Q5/


[ovirt-users] Re: Install fresh 4.3 fails with mounting shared storage

2019-05-31 Thread Vrgotic, Marko
Roy,

It came down to manually mounting the oVirt storage domains and executing the chown 
command.

Still, I took your advice and did NFS3-only and NFS4-only tests.

Here are the results:


Test1: Protocol NFS3 / ExportPolicy NFS3  with Default (allow all)



  *   ERROR:

[ INFO  ] TASK [ovirt.hosted_engine_setup : Add NFS storage domain]

[ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is "[General 
Exception]". HTTP response code is 400.

[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault 
reason is \"Operation Failed\". Fault detail is \"[General Exception]\". HTTP 
response code is 400."}



  *   Volume is mounted as:

172.17.28.5:/ovirt_hosted_engine on /rhev/data-center/mnt/172.17.28.5:_ovirt__hosted__engine type nfs (rw,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,soft,nolock,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=172.17.28.5,mountvers=3,mountport=635,mountproto=udp,local_lock=all,addr=172.17.28.5)



  *   ls -la /rhev/data-center/mnt:

[root@ovirt-hv-01 mnt]# ls -la

total 4

drwxr-xr-x. 3 vdsm kvm    48 May 31 07:55 .

drwxr-xr-x. 3 vdsm kvm    17 May 31 07:50 ..

drwxrwxr-x. 2 root root 4096 May 29 10:12 172.17.28.5:_ovirt__hosted__engine



  *   change ownership of mounted volume to vdsm:kvm
  *   umount



Reran deployment script and deployment completed successfully.
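
(For completeness, the manual workaround used in Test1 boils down to something like the 
following; a sketch using the export from the mount output above, with /mnt/temp as an 
arbitrary temporary mount point:)

mkdir -p /mnt/temp
mount -t nfs 172.17.28.5:/ovirt_hosted_engine /mnt/temp
chown -R vdsm:kvm /mnt/temp    # vdsm:kvm correspond to uid/gid 36
umount /mnt/temp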





Test2: Protocol NFS4 / ExportPolicy NFS  with Default (allow all)



Deployment went through without a single issue.



It seems that even though the vdsm user and kvm group with ID 36 are created and added 
to the NetApp volume, they are not applied.

It was still required to mount manually, execute "chown -R vdsm:kvm" on the mount 
point, unmount, and rerun the deployment script (or rather re-enter the storage 
information) for the deployment to proceed.

Adding the next storage domain, for example for all the other test VMs, will again fail 
from the UI unless the manual mount and chown are executed first.



Then I tried to just mount the second storage domain, and it failed, reporting a 
permission issue (which it was). After executing the manual mount actions, adding the 
domain from the UI worked flawlessly.



Output of mount:

172.17.28.5:/ovirt_production on /rhev/data-center/mnt/172.17.28.5:_ovirt__production type nfs4 (rw,relatime,vers=4.0,rsize=65536,wsize=65536,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=172.17.28.12,local_lock=none,addr=172.17.28.5)

172.17.28.5:/ovirt_hosted_engine on /rhev/data-center/mnt/172.17.28.5:_ovirt__hosted__engine type nfs4 (rw,relatime,vers=4.0,rsize=65536,wsize=65536,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=172.17.28.12,local_lock=none,addr=172.17.28.5)





Test3: Protocol NFS4 / ExportPolicy NFS with limited IP group access

In progress, but I have high hopes now.


Will keep you posted.

BTW, I was not able to find the location in the NetApp volume where squashing is 
defined, so I cannot answer that one yet.

Thank you.

Kind regards,

Marko Vrgotic
ActiveVideo

From: "Morris, Roy" 
Date: Thursday, 30 May 2019 at 21:29
To: "Vrgotic, Marko" , "users@ovirt.org" 

Cc: "Stojchev, Darko" 
Subject: RE: Install fresh 4.3 fails with mounting shared storage

Marko,

No problem, here are some other things to check as well.

NetApp is weird about allowing changes done to the root directory of a share. I 
would recommend creating a folder on the NetApp share like “rhevstor” or 
something so that you can chown that folder and mount the folder for the 
storage domain. I never had much luck mounting and using the root level of the 
NetApp NFS share. I also have in my notes that I set “sec=sys” as a property of 
my NetApp data domain which wouldn’t allow me to mount it until I input it into 
the RHEV manager. However, you aren’t at a point of having the RHEV manager up 
and running so I’m not sure how much use this would be at the moment.

#mount -o sec=sys 172.17.28.5:/rhevstor /mnt/temp

NFS share will fail if it isn’t accessible from all hosts, so make sure to go 
into each host to run

#showmount -e 172.17.28.5

The ownership of the NFS share needs to be owned by vdsm:kvm. To do this, you 
have to manually mount the NFS share to one of the hosts temporarily then run 
the following command to get ownership settings setup.

#mkdir /mnt/temp
#mount -o sec=sys 172.17.28.5:/rhevstor /mnt/temp
#chown 36:36 /mnt/temp
#umount /mnt/temp
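
(As a quick sanity check before the umount step above, listing the mount point with 
numeric IDs should now show 36:36:)

#ls -ldn /mnt/temp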

Then try and run the install again. If it fails, disable NFSv3 and run again to 
see if it is related to NFSv4 security settings.

Best regards,
Roy Morris

From: Vrgotic, Marko 
Sent: Thursday, May 30, 2019 12:07 PM
To: Morris, Roy ; users@ovirt.org
Cc: Stojchev, Darko 
Subject: [External] Re: Install fresh 4.3 fails with mounting shared storage

Hi Roy,

I will run all those tests tomorrow morning  (Amsterdam TimeZone) and reply 
back with results.

Regarding NetApp documentation you mentioned below, I assume it should be 
enough to just “google” for it.

Thank you very much for jumping in, we really appreciate it.

[ovirt-users] Re: Metrics store install failed

2019-05-31 Thread roy . morris
Case# 02392496
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LNA6P7PCIYRJLFTQK5QUMYJL34DQ3VVH/


[ovirt-users] Re: Ovirt-engine integration with OpenLDAP can't seem to find any users on Web-UI

2019-05-31 Thread Staniforth, Paul
Possibly firewall?

Regards,
Paul S.

From: rubennune...@gmail.com 
Sent: 30 May 2019 17:54
To: users@ovirt.org
Subject: [ovirt-users] Re: Ovirt-engine integration with OpenLDAP can't seem to 
find any users on Web-UI

OK, the problem is solved; the users can now be seen on the Web-UI, thank you!

But another problem has come up, because that was only the lab; now, when I try to do 
the setup between oVirt and OpenLDAP in production, the error it gives is this:

[root@ovirt aaa]# ovirt-engine-extension-aaa-ldap-setup
[ INFO  ] Stage: Initializing
[ INFO  ] Stage: Environment setup
  Configuration files: 
['/etc/ovirt-engine-extension-aaa-ldap-setup.conf.d/10-packaging.conf']
  Log file: 
/tmp/ovirt-engine-extension-aaa-ldap-setup-20190530174630-07oiqw.log
  Version: otopi-1.7.8 (otopi-1.7.8-1.el7)
[ INFO  ] Stage: Environment packages setup
[ INFO  ] Stage: Programs detection
[ INFO  ] Stage: Environment customization
  Welcome to LDAP extension configuration program
  Available LDAP implementations:
   1 - 389ds
   2 - 389ds RFC-2307 Schema
   3 - Active Directory
   4 - IBM Security Directory Server
   5 - IBM Security Directory Server RFC-2307 Schema
   6 - IPA
   7 - Novell eDirectory RFC-2307 Schema
   8 - OpenLDAP RFC-2307 Schema
   9 - OpenLDAP Standard Schema
  10 - Oracle Unified Directory RFC-2307 Schema
  11 - RFC-2307 Schema (Generic)
  12 - RHDS
  13 - RHDS RFC-2307 Schema
  14 - iPlanet
  Please select: 8

  NOTE:
  It is highly recommended to use DNS resolution for LDAP server.
  If for some reason you intend to use hosts or plain address disable 
DNS usage.

  Use DNS (Yes, No) [Yes]: no
  Available policy method:
   1 - Single server
   2 - DNS domain LDAP SRV record
   3 - Round-robin between multiple hosts
   4 - Failover between multiple hosts
  Please select: 1
  Please enter host address: 

  NOTE:
  It is highly recommended to use secure protocol to access the LDAP 
server.
  Protocol startTLS is the standard recommended method to do so.
  Only in cases in which the startTLS is not supported, fallback to non 
standard ldaps protocol.
  Use plain for test environments only.

  Please select protocol to use (startTLS, ldaps, plain) [startTLS]: 
plain
[ INFO  ] Connecting to LDAP using 'ldap://:389'
[ ERROR ] Failed to execute stage 'Environment customization': Cannot connect 
using any of available options
[ INFO  ] Stage: Clean up
  Log file is available at 
/tmp/ovirt-engine-extension-aaa-ldap-setup-20190530174630-07oiqw.log:
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
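
As a basic connectivity check from the engine host toward the production LDAP server, 
an anonymous base query like the following can help rule out network or firewall 
problems (ldapsearch comes from openldap-clients; the host name is a placeholder):

ldapsearch -x -H ldap://ldap.example.com:389 -s base -b "" namingContexts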
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MA6UQONQXFDSFBKJFTE25TJ5K3LG7P4D/
To view the terms under which this email is distributed, please go to:-
http://leedsbeckett.ac.uk/disclaimer/email/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IHIFEMT2TG56VTSSGUSML3VJZLVAOBF3/


[ovirt-users] Re: possible to clear vm started under different name warning in gui?

2019-05-31 Thread Jayme
Thanks, I feel silly for not figuring that out but it looks like a full
shutdown/power-off resolved it.  I was only doing reboots previously.

On Fri, May 31, 2019 at 10:47 AM Strahil Nikolov 
wrote:

> Have you tried to power off and then power on the VM ?
>
> Best Regards,
> Strahil Nikolov
>
> On Friday, 31 May 2019 at 8:59:54 GMT-4, Jayme 
> wrote:
>
>
> When a VM is renamed a warning in engine gui appears with an exclamation
> point stating "vm was started with a different name".  Is there a way to
> clear this warning?  The VM has been restarted a few times since but it
> doesn't go away.
>
> Thanks!
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/FLA643IOFHYXZZEWRJ6R46GQ3IVAQ2IB/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FC5LTPZKKTEZCIJXC46PXF5VROE6UGI4/


[ovirt-users] Re: Metrics store install failed

2019-05-31 Thread Shirly Radco
Hi,

I passed the email to additional engineers for consultation.

Which IP did you update and in what stage?
Did you set the variable that sets the /etc/hosts records? If yes, did you
update the records on the engine and VMs?

Please attach your Ansible log and config files so I can review them.
You can also open a bug and attach them to it, so we can track the issue
better.

Best,
Shirly

On Fri, May 31, 2019, 02:53  wrote:

> I ran this command on the master0 VM and this is a fresh install I
> performed yesterday after commenting out the last variable issue I
> experienced. To me this is an issue somewhere in the playbook since I
> haven't customized anything except setting IP settings. This looks to be a
> bug.
>
> #systemctl status selinux* -l
>
> selinux-policy-migrate-local-changes@targeted.service - Migrate local
> SELinux policy changes from the old store structure to the new structure
>Loaded: loaded
> (/usr/lib/systemd/system/basic.target.wants/../selinux-policy-migrate-local-changes@.service;
> static; vendor preset: disabled)
>Active: failed (Result: exit-code) since Wed 2019-05-29 18:59:22 EDT;
> 24h ago
>  Main PID: 3376 (code=exited, status=208/STDIN)
>
> May 29 18:59:22 localhost systemd[1]: Starting Migrate local SELinux
> policy changes from the old store structure to the new structure...
> May 29 18:59:22 localhost systemd[3376]: Failed at step STDIN spawning
> /usr/libexec/selinux/selinux-policy-migrate-local-changes.sh: Inappropriate
> ioctl for device
> May 29 18:59:22 localhost systemd[1]:
> selinux-policy-migrate-local-changes@targeted.service: main process
> exited, code=exited, status=208/STDIN
> May 29 18:59:22 localhost systemd[1]: Failed to start Migrate local
> SELinux policy changes from the old store structure to the new structure.
> May 29 18:59:22 localhost systemd[1]: Unit
> selinux-policy-migrate-local-changes@targeted.service entered failed
> state.
> May 29 18:59:22 localhost systemd[1]:
> selinux-policy-migrate-local-changes@targeted.service failed.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/2Y56VCEVFUQCMBNY3ZBJVIWYTDN4XC6E/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JL6YA7RLXPR3MEW46JW25QYREBJGHJHK/


[ovirt-users] Re: possible to clear vm started under different name warning in gui?

2019-05-31 Thread Strahil Nikolov
Have you tried to power off and then power on the VM?

Best Regards,
Strahil Nikolov

On Friday, 31 May 2019 at 8:59:54 GMT-4, Jayme wrote:
 
 When a VM is renamed a warning in engine gui appears with an exclamation point 
stating "vm was started with a different name".  Is there a way to clear this 
warning?  The VM has been restarted a few times since but it doesn't go away. 
Thanks!
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FLA643IOFHYXZZEWRJ6R46GQ3IVAQ2IB/
  ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GOJSZGMRE5IMYKX72FS7F2LRIAX26THO/


[ovirt-users] possible to clear vm started under different name warning in gui?

2019-05-31 Thread Jayme
When a VM is renamed a warning in engine gui appears with an exclamation
point stating "vm was started with a different name".  Is there a way to
clear this warning?  The VM has been restarted a few times since but it
doesn't go away.

Thanks!
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FLA643IOFHYXZZEWRJ6R46GQ3IVAQ2IB/


[ovirt-users] Re: Storage HA for manager on DR environment

2019-05-31 Thread Simone Tiraboschi
On Fri, May 31, 2019 at 1:54 PM  wrote:

> OK, but so, what is the meaning of "Configure all virtual machines that
> need to
> failover as highly available, and ensure that the virtual machine has a
> lease on the
> target storage domain." Is it assuming that the VMs are in another storage
> domain (no sync)?
>

You have to configure all the VMs as highly available, enabling a VM lease on them.
The engine will take care of restarting them.

VM leases are also written to the relevant storage domain, so they are going to be in 
sync if you are correctly syncing the storage between the two sites.
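
If you want to script this instead of using the Administration Portal, the REST API 
update looks roughly like this (a sketch: the engine FQDN, credentials and the VM and 
storage-domain UUIDs are placeholders):

curl -k -u admin@internal:PASSWORD -X PUT \
  -H 'Content-Type: application/xml' \
  -d '<vm>
        <high_availability><enabled>true</enabled><priority>1</priority></high_availability>
        <lease><storage_domain id="TARGET_STORAGE_DOMAIN_UUID"/></lease>
      </vm>' \
  https://engine.example.com/ovirt-engine/api/vms/VM_UUID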


> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/W7C2IHYFK6WVAC3K6UVTPHRM5NXWHTA7/
>


-- 

Simone Tiraboschi

He / Him / His

Principal Software Engineer

Red Hat 

stira...@redhat.com



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ISZI7UZWO46YBKSNE5MS45DZVC4T57QD/


[ovirt-users] Re: Storage HA for manager on DR environment

2019-05-31 Thread Simone Tiraboschi
On Fri, May 31, 2019 at 11:35 AM  wrote:

> > On Fri, May 31, 2019 at 10:46 AM  wrote:
> >
> >
> >
> > You cannot create a VM lease for the hosted-engine VM because the
> > hosted-engine VM is always already protected by a volume lease.
> Sorry, I don't understand this. If the storage when is placed my manager
> down, it will startup in other storage?
>

The HA mechanism for the engine VM is provided by the ovirt-ha-agent service running 
on all the hosted-engine configured hosts (at least a couple on each site).
The hosted-engine configured hosts communicate via a whiteboard written on the 
hosted-engine storage domain, so if the storage devices on the two sites are in sync 
(it requires latency < 7 ms), all the hosted-engine hosts can also see the status of 
the other site and eventually take over.
The volume lease is there to enforce, at the storage level, that only one host at a 
time is able to run the engine VM (regardless of the site where it runs, since the 
lock is also in sync).
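
You can inspect that whiteboard state on any hosted-engine host with the standard 
status command:

hosted-engine --vm-status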



> >
> >
> > Did you read
> >
> https://ovirt.org/documentation/disaster-recovery-guide/active_active_ove...
>  ?
> Yes, and it's only say "Configure all virtual machines that need to
> failover as highly available, and ensure that the virtual machine has a
> lease on the target storage domain."
> But the manager, It's configured for HA storage by default?. I have made a
> lab with 1 host and 2 storage  domain replicated and, when I pull down the
> engine storage, It don't start up again.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/DMZYJB33D5KI5NHJFKCJ4SDNXFMHYST3/
>


-- 

Simone Tiraboschi

He / Him / His

Principal Software Engineer

Red Hat 

stira...@redhat.com



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZHTOAYOG4LQNPF7SC6H3W7KQPAXIGHMJ/


[ovirt-users] Re: Install fresh 4.3 fails with mounting shared storage

2019-05-31 Thread Vrgotic, Marko
Hi Roy,

I will run all those tests tomorrow morning  (Amsterdam TimeZone) and reply 
back with results.

Regarding NetApp documentation you mentioned below, I assume it should be 
enough to just “google” for it.

Thank you very much for jumping in, we really appreciate it.

Kind regards,

Marko Vrgotic

From: "Morris, Roy" 
Date: Thursday, 30 May 2019 at 18:46
To: "Vrgotic, Marko" , "users@ovirt.org" 

Cc: "users-requ...@ovirt.org" , "Stojchev, Darko" 

Subject: RE: Install fresh 4.3 fails with mounting shared storage

Marko,

Can you try disabling NFSv4 on the NetApp side for testing and rerun the 
installer? I don’t advise leaving it at NFSv3 but just for testing we can try 
it out.

Also, there is some documentation on NetApp support regarding manually mounting 
the NFS share to change permissions then unmount. It has to be done once but 
after that the mounting should be fine.

Do you have root squash set on NetApp?

Best regards,
Roy Morris
GSA Virtualization Systems Analyst
County of Ventura
(805) 654-3625
(805) 603-9403

From: Vrgotic, Marko 
Sent: Thursday, May 30, 2019 1:34 AM
To: Morris, Roy ; users@ovirt.org
Cc: users-requ...@ovirt.org; Stojchev, Darko 
Subject: [External] Re: Install fresh 4.3 fails with mounting shared storage

Hi Roy,

Sure, here is the output:

Last login: Wed May 29 17:25:30 2019 from ovirt-engine.avinity.tv
[root@ovirt-hv-03 ~]# showmount -e 172.17.28.5
Export list for 172.17.28.5:
/ (everyone)
[root@ovirt-hv-03 ~]# ls -la /rhev/data-center/mnt/
total 0
drwxr-xr-x. 2 vdsm kvm  6 May 29 17:14 .
drwxr-xr-x. 3 vdsm kvm 17 May 29 17:11 ..
[root@ovirt-hv-03 ~]#

In addition, if it helps, here is the list of shares/mount points from Netapp 
side, behind the 172.17.28.5 IP:
[screenshot of the NetApp shares/mount points behind 172.17.28.5; image not preserved in the archive]

Kind regards
Marko Vrgotic

From: "Morris, Roy" mailto:roy.mor...@ventura.org>>
Date: Thursday, 30 May 2019 at 00:57
To: "Vrgotic, Marko" 
mailto:m.vrgo...@activevideo.com>>, 
"users@ovirt.org" 
mailto:users@ovirt.org>>
Cc: "users-requ...@ovirt.org" 
mailto:users-requ...@ovirt.org>>
Subject: RE: Install fresh 4.3 fails with mounting shared storage

Marko,

Can you run the following commands and let us know the results.

showmount -e 172.17.28.5
ls -la /rhev/data-center/mnt/

Best regards,
Roy Morris

From: Vrgotic, Marko <m.vrgo...@activevideo.com>
Sent: Wednesday, May 29, 2019 4:07 AM
To: users@ovirt.org
Cc: users-requ...@ovirt.org
Subject: [External] [ovirt-users] Install fresh 4.3 fails with mounting shared 
storage



Dear oVIrt,

We are trying to deploy a new setup with Hosted-Engine, oVirt version 4.3.

Volume is on the Netapp, protocol NFS v4.
Upon populating shared storage information and path:

  Please specify the storage you would like to use (glusterfs, iscsi, 
fc, nfs)[nfs]: nfs
  Please specify the nfs version you would like to use (auto, v3, v4, 
v4_1)[auto]: auto
  Please specify the full shared storage connection path to use 
(example: host:/path): 172.17.28.5:/ovirt_hosted_engine

Following is displayed on the screen:

[ INFO  ] Creating Storage Domain
[ INFO  ] TASK [ovirt.hosted_engine_setup : Execute just a specific set of 
steps]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Force facts gathering]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Check local VM dir stat]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Enforce local VM dir existence]
[ INFO  ] skipping: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : include_tasks]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Obtain SSO token using 
username/password credentials]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Fetch host facts]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Fetch cluster ID]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Fetch cluster facts]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Fetch Datacenter facts]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Fetch Datacenter ID]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Fetch Datacenter name]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Add NFS storage domain]
[ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is "[General 
Exception]". HTTP response code is 400.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault 
reason is \"Operation Failed\". Fault detail is \"[General Exception]\". HTTP 
response code is 400."}

Even with this error – 

[ovirt-users] Re: Storage HA for manager on DR environment

2019-05-31 Thread raul . caballero . girol
> On Fri, May 31, 2019 at 10:46 AM  
> 
> 
> You cannot create a VM lease for the hosted-engine VM because the
> hosted-engine VM is always already protected by a volume lease.
Sorry, I don't understand this. If the storage where my manager is placed goes down, 
will it start up on the other storage?
> 
> 
> Did you read
> https://ovirt.org/documentation/disaster-recovery-guide/active_active_ove...
>  ?
Yes, and it only says "Configure all virtual machines that need to failover as highly 
available, and ensure that the virtual machine has a lease on the target storage 
domain."
But is the manager configured for HA storage by default? I have made a lab with 1 host 
and 2 replicated storage domains and, when I pull down the engine storage, it doesn't 
start up again.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DMZYJB33D5KI5NHJFKCJ4SDNXFMHYST3/


[ovirt-users] Re: oVirt Node Blocks VirtViewer/SPICE connections (Did Not Auto-Configure Firewall?)

2019-05-31 Thread Robert O'Kane

Quick test?

service firewalld stop

then ALL ports are open.


NOT Recommended!   be sure to turn it back on when your tests are complete.
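
A more targeted alternative, assuming firewalld is in use and the default console port 
range oVirt opens for SPICE/VNC (5900-6923/tcp; double-check the range for your 
version):

firewall-cmd --permanent --add-port=5900-6923/tcp
firewall-cmd --reload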

Cheers,

Robert O'Kane



On 05/31/2019 10:37 AM, Simone Tiraboschi wrote:



On Thu, May 30, 2019 at 2:56 PM Zachary Winter <zachary.win...@witsconsult.com> wrote:

I am unable to connect via SPICE (Windows VirtViewer) to VM's running on my 
compute node.  It appears the node did not auto-configure the firewall because
the .vv files appear to point to the correct IP address and common ports.  
Is there a way to re-run/re-execute the firewall auto-configuration now that the
node has already been installed?


 From the Web UI, you can set the host to maintenance mode and then select 
reinstall: it will also configure the firewall.
But are you really sure that the issue is on host side?

If not, does anyone happen to have firewall-cmd commands handy that I can 
run to resolve this quickly?  Which ports need to be opened?

The specs on the node are as follows:

OS Version:
RHEL - 7 - 6.1810.2.el7.centos
OS Description:
oVirt Node 4.3.3.1
Kernel Version:
3.10.0 - 957.10.1.el7.x86_64
KVM Version:
2.12.0 - 18.el7_6.3.1
LIBVIRT Version:
libvirt-4.5.0-10.el7_6.6
VDSM Version:
vdsm-4.30.13-1.el7
SPICE Version:
0.14.0 - 6.el7_6.1
GlusterFS Version:
glusterfs-5.5-1.el7
CEPH Version:
librbd1-10.2.5-4.el7
Open vSwitch Version:
openvswitch-2.10.1-3.el7
Kernel Features:
PTI: 1, IBRS: 0, RETP: 1, SSBD: 3
VNC Encryption:
Enabled



--

Simone Tiraboschi

He / Him / His

Principal Software Engineer

Red Hat

stira...@redhat.com 





--
Robert O'Kane
Systems Administrator
Kunsthochschule für Medien Köln
Peter-Welter-Platz 2
50676 Köln

fon: +49(221)20189-223
fax: +49(221)20189-49223
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JSGZ4YEQ24AHVEOFZLWNZGHBLZ3KUUT5/


[ovirt-users] Re: Storage HA for manager on DR environment

2019-05-31 Thread Simone Tiraboschi
On Fri, May 31, 2019 at 10:46 AM  wrote:

> Hi,
> I'm reading this guide to provide Active-Active DR for my environment (2
> sites):
>
> https://ovirt.org/documentation/disaster-recovery-guide/active_active_overview.html
>
> I have a sef-hosted environment with a storage domains per site with
> synchronous replication. I can put all my VM with a storage lease on the
> other storage site but i can't put the lease on the manager (the option is
> disable).


You cannot create a VM lease for the hosted-engine VM because the
hosted-engine VM is always already protected by a volume lease.


> How I configure the storage HA of my manager?.
>

Did you read
https://ovirt.org/documentation/disaster-recovery-guide/active_active_overview.html#configure-a-self-hosted-engine-stretch-cluster-environment
 ?


>
> Regards,
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/BYCFJQDTHDMT26GVXQNCA46MHTVPTN6Y/
>


-- 

Simone Tiraboschi

He / Him / His

Principal Software Engineer

Red Hat 

stira...@redhat.com



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/46PUSBDZHL4PLQXWHB5EGC5BOLAETVC6/


[ovirt-users] Re: Ovirt-engine integration with OpenLDAP can't seem to find any users on Web-UI

2019-05-31 Thread rubennunes12
I finally did it: I replicated the files from the lab to production and it's now 
working.

I'm going to leave the configuration of the files here for anyone who runs into 
difficulties in the future:

[root@ovirt extensions.d]# cat example.com-authn.properties 
ovirt.engine.extension.name = example.com-authn
ovirt.engine.extension.bindings.method = jbossmodule
ovirt.engine.extension.binding.jbossmodule.module = 
org.ovirt.engine-extensions.aaa.ldap
ovirt.engine.extension.binding.jbossmodule.class = 
org.ovirt.engineextensions.aaa.ldap.AuthnExtension
ovirt.engine.extension.provides = org.ovirt.engine.api.extensions.aaa.Authn
ovirt.engine.aaa.authn.profile.name = example.com
ovirt.engine.aaa.authn.authz.plugin = example.com-authz
config.profile.file.1 = ../aaa/example.com.properties
config.globals.baseDN.simple_baseDN = ou=people,dc=example,dc=com

[root@ovirt extensions.d]# cat example.com-authz.properties 
ovirt.engine.extension.name = example.com-authz
ovirt.engine.extension.bindings.method = jbossmodule
ovirt.engine.extension.binding.jbossmodule.module = 
org.ovirt.engine-extensions.aaa.ldap
ovirt.engine.extension.binding.jbossmodule.class = 
org.ovirt.engineextensions.aaa.ldap.AuthzExtension
ovirt.engine.extension.provides = org.ovirt.engine.api.extensions.aaa.Authz
config.profile.file.1 = ../aaa/example.com.properties
config.globals.baseDN.simple_baseDN = ou=people,dc=example,dc=com

[root@ovirt aaa]# cat sybase.pt.properties 
include = 

vars.server = 
vars.user = cn=Rúben Nunes,ou=people,dc=example,dc=com
vars.password = 

pool.default.auth.simple.bindDN = ${global:vars.user}
pool.default.auth.simple.password = ${global:vars.password}
pool.default.serverset.type = single
pool.default.serverset.single.server = ${global:vars.server}
pool.default.socketfactory.type = java

Note: example.com.properties, which is located in /etc/ovirt-engine/aaa/, needs to 
have ovirt:ovirt as its owner:group; the other two files in extensions.d are owned by 
root:root.
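
A sketch of the ownership fix plus the restart that makes the engine pick the files up 
(paths as given above; adjust the file names to your profile):

chown ovirt:ovirt /etc/ovirt-engine/aaa/example.com.properties
chown root:root /etc/ovirt-engine/extensions.d/example.com-authn.properties
chown root:root /etc/ovirt-engine/extensions.d/example.com-authz.properties
systemctl restart ovirt-engine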
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CKUQHXHWU5CFFLALPLRVVUBLLCO7N4HS/


[ovirt-users] Storage HA for manager on DR environment

2019-05-31 Thread raul . caballero . girol
Hi, 
I'm reading this guide to provide Active-Active DR for my environment (2 sites):
https://ovirt.org/documentation/disaster-recovery-guide/active_active_overview.html

I have a self-hosted environment with a storage domain per site and synchronous 
replication. I can put a storage lease on the other storage site for all my VMs, but I 
can't put the lease on the manager (the option is disabled).
How do I configure the storage HA of my manager?

Regards,
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BYCFJQDTHDMT26GVXQNCA46MHTVPTN6Y/


[ovirt-users] Re: oVirt Node Blocks VirtViewer/SPICE connections (Did Not Auto-Configure Firewall?)

2019-05-31 Thread Simone Tiraboschi
On Thu, May 30, 2019 at 2:56 PM Zachary Winter <
zachary.win...@witsconsult.com> wrote:

> I am unable to connect via SPICE (Windows VirtViewer) to VM's running on
> my compute node.  It appears the node did not auto-configure the firewall
> because the .vv files appear to point to the correct IP address and common
> ports.  Is there a way to re-run/re-execute the firewall auto-configuration
> now that the node has already been installed?
>

From the Web UI, you can set the host to maintenance mode and then select
reinstall: it will also configure the firewall.
But are you really sure that the issue is on host side?
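
To quickly check what the host firewall currently allows (assuming firewalld is the 
active firewall on the node):

firewall-cmd --list-services
firewall-cmd --list-ports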


> If not, does anyone happen to have firewall-cmd commands handy that I can
> run to resolve this quickly?  Which ports need to be opened?
>
> The specs on the node are as follows:
> OS Version:
> RHEL - 7 - 6.1810.2.el7.centos
> OS Description:
> oVirt Node 4.3.3.1
> Kernel Version:
> 3.10.0 - 957.10.1.el7.x86_64
> KVM Version:
> 2.12.0 - 18.el7_6.3.1
> LIBVIRT Version:
> libvirt-4.5.0-10.el7_6.6
> VDSM Version:
> vdsm-4.30.13-1.el7
> SPICE Version:
> 0.14.0 - 6.el7_6.1
> GlusterFS Version:
> glusterfs-5.5-1.el7
> CEPH Version:
> librbd1-10.2.5-4.el7
> Open vSwitch Version:
> openvswitch-2.10.1-3.el7
> Kernel Features:
> PTI: 1, IBRS: 0, RETP: 1, SSBD: 3
> VNC Encryption:
> Enabled
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/XIP6HNQVJXNW55YBXUL273CEH2YSHOA5/
>


-- 

Simone Tiraboschi

He / Him / His

Principal Software Engineer

Red Hat 

stira...@redhat.com



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OGEHUUPPYJEDS6UNVIPP4EY3HH7DE3GS/


[ovirt-users] oVirtSimpleBackup

2019-05-31 Thread Tommaso - Shellrent

Hi to all.

I was looking to install oVirtSimpleBackup, but now I see:

" no longer use oVirt, so I wont be furthering this project. oVirt is 
>>Awesome<< however, I decided to move all of my VMs into a large 
managed datacentre that uses vmware.


I want to thank the oVirt community and all of the people over on IRC 
for thier awesome support.


Feel free to use this code for your own ovirt backups or future oVirt 
backup software."


Does anyone know if the project will be maintained? Is there an alternative project to 
look at?



--

*Tommaso De Marchi*
COO Chief Operating Officer - Shellrent S.r.l.
Tel. 0444321155  | Fax 04441492177
Via dell'Edilizia, 19 - 36100 Vicenza

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: ${hyperkitty_url}


[ovirt-users] Re: 4.3 hosted-engine setup & yum-utils RPM installation

2019-05-31 Thread Simone Tiraboschi
On Thu, May 30, 2019 at 5:22 PM Simon Coter  wrote:

> Hi,
>
>
Ciao Simon,


> is there any particular reason to get “yum-utils” (and its dependency) RPM
> installed during the hosted-engine deployment ?
> I mean, why don’t we get yum-utils RPM part of the hosted-engine image ?
> This “yum” process, executed during the deployment, could fail (or wait
> forever) if the host/engine is behind a proxy — while trying to install the
> RPMs.
>

Honestly I'm not aware of that, can you please provide more details?
Where does it happen? On the host or inside the engine virtual machine?
Is it going to happen before starting the engine virtual machine, or during the
host-deploy process when the engine is going to configure the host?


> I see two options:
>
>
>- get all the required RPMs part of the hosted-engine image
>
>
Do you mean inside the ovirt-engine-appliance image?
If on the host side instead, the ovirt-host rpm should already require all the rpms
needed for the deployment (except the ovirt-engine-appliance, which is about 1 GB).


>
>- add the option to supply a proxy for yum during the hosted-engine
>setup
>
Configuring a proxy with the proxy directive in /etc/yum.conf, or http_proxy at the
system level, is absolutely supported.
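
For example (a sketch; the proxy host and port are placeholders, and the yum directive
belongs under the [main] section):

echo 'proxy=http://proxy.example.com:3128' >> /etc/yum.conf
# or, at the system level for the current session:
export http_proxy=http://proxy.example.com:3128
export https_proxy=http://proxy.example.com:3128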


>
> Could this be a request for enhancement ?
>

hosted-engine-setup is already designed to also work in disconnected mode, assuming
that all the required rpms have been installed upfront.
If it fails on that use case, and all the rpms are there, it's definitely a bug.


> Thanks
>
> Simon
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/QEFE7S35AMTABZQIXACD6ZXOWSTKJKP3/
>


-- 

Simone Tiraboschi

He / Him / His

Principal Software Engineer

Red Hat 

stira...@redhat.com



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: