[ovirt-users] FreeIPA with ovirt 4.1

2017-02-03 Thread Slava Bendersky
Hello Everyone, 
I'm having trouble implementing FreeIPA authentication with GSSAPI SSO on oVirt 4.1. 
I ran the setup and it finished OK, writing the files below. Next I logged in to the 
web admin with an internal user and gave a FreeIPA user the SuperUser role. I also 
authorized the FreeIPA group under System to log in. On any attempt to log in 
with FreeIPA credentials I get this message: 


2017-02-04 00:03:08,464Z ERROR 
[org.ovirt.engine.core.sso.servlets.InteractiveAuthServlet] (default task-6) [] 
Internal Server Error: Unsupported command 
2017-02-04 00:03:08,464Z ERROR [org.ovirt.engine.core.sso.utils.SsoUtils] 
(default task-6) [] Unsupported command 
2017-02-04 00:03:08,659Z ERROR 
[org.ovirt.engine.core.aaa.servlet.SsoPostLoginServlet] (default task-3) [] 
server_error: Unsupported command 


The extensions.d directory contains the following files. If I remove 
mydomain.lan-authn.properties, the FreeIPA domain no longer shows up in the 
web UI drop-down list. The http-related files don't seem to have any influence on this. 

[root@vhe00 extensions.d]# pwd 
/etc/ovirt-engine/extensions.d 

[root@vhe00 extensions.d]# ls 
mydomain.lan-authn.properties   mydomain.lan-http-authn.properties   mydomain.lan.properties 
internal-authz.properties       mydomain.lan-authz.properties        mydomain.lan-http-mapping.properties 
internal-authn.properties 
[root@vhe00 extensions.d]# 


If possible, please clarify how this should look and what the possible issue is. 
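
For reference, I guess one way to sanity-check how the profiles are wired together 
would be something like the following (just a sketch; the property keys are the 
standard ovirt-engine-extension-aaa ones, so treat them as my assumption, not as 
values copied from this setup): 

[root@vhe00 extensions.d]# grep -HE 'ovirt.engine.extension.name|ovirt.engine.aaa.authn.profile.name|ovirt.engine.aaa.authn.authz.plugin' *.properties 

Each authn file should name the profile that appears in the login drop-down and 
point at the matching authz extension. 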



Slava. 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] high availability

2017-02-03 Thread Nir Soffer
On Fri, Feb 3, 2017 at 9:49 PM, Yaniv Kaul  wrote:
>
>
> On Fri, Feb 3, 2017 at 8:42 PM, cmc  wrote:
>>
>> Hi,
>>
>> I have some questions about oVirt's high availability features for
>> VMs. My understanding is that it relies on the engine host to monitor
>> and manage the hypervisor hosts, so that in the case of an
>> unrecoverable failure of one of those hosts, it will fence the host and
>> migrate any VM that is designated as highly available to another host
>> in the cluster. However, if the engine is itself hosted as a VM on a
>> host that fails, this process cannot take place, as the engine will be
>> down and cannot initiate monitoring, fencing and migration - is that
>> correct?
>
>
> The hosted-engine has its own HA mechanism.

You may find this useful:
http://www.ovirt.org/documentation/self-hosted/Self-Hosted_Engine_Guide/

> In addition, in 4.1 we are introducing a feature which allows HA without
> fencing, in a similar manner to hosted-engine - by a lock on the storage
> side.

For more info on this see:
https://www.ovirt.org/develop/release-management/features/storage/vm-leases/

Nir

> Y.
>
>>
>> There is the option of hosting the engine externally on dedicated
>> hardware, or on another cluster, but then it is still a single point
>> of failure. I recall reading about plans for an HA engine in the
>> future though.
>>
>> Can someone tell me what the roadmap is? Is there a plan to put
>> something like an HA agent on all the hypervisors in the cluster so
>> there is no single point of failure?
>>
>> Thanks for any information,
>>
>> Cam
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] high availability

2017-02-03 Thread Yaniv Kaul
On Fri, Feb 3, 2017 at 8:42 PM, cmc  wrote:

> Hi,
>
> I have some questions about oVirt's high availability features for
> VMs. My understanding is that it relies on the engine host to monitor
> and manage the hypervisor hosts, so that in the case of an
> unrecoverable failure of one of those hosts, it will fence the host and
> migrate any VM that is designated as highly available to another host
> in the cluster. However, if the engine is itself hosted as a VM on a
> host that fails, this process cannot take place, as the engine will be
> down and cannot initiate monitoring, fencing and migration - is that
> correct?
>

The hosted-engine has its own HA mechanism.
In addition, in 4.1 we are introducing a feature which allows HA without
fencing, in a similar manner to hosted-engine - by a lock on the storage
side.
Y.


> There is the option of hosting the engine externally on dedicated
> hardware, or on another cluster, but then it is still a single point
> of failure. I recall reading about plans for an HA engine in the
> future though.
>
> Can someone tell me what the roadmap is? Is there a plan to put
> something like an HA agent on all the hypervisors in the cluster so
> there is no single point of failure?
>
> Thanks for any information,
>
> Cam
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VDAgent

2017-02-03 Thread Sandro Bonazzola
On 30/Jan/2017 09:56 PM, "Fernando Fuentes"  wrote:

Sandro,

I did the update from the hosts tab on ovirt:
The ovirt version is: oVirt Engine Version: 4.0.2.6-1.el7.centos



Sorry Fernando, I missed your email.
I would suggest updating your hosts to CentOS 7.3 and oVirt 4.1.
If you don't want to upgrade to 4.1, please upgrade at least to the latest 4.0,
which is 4.0.6.





All of my hosts are Cent7 x86_64
[root@ogias ~]# uname -a
Linux ogias.aasteel.net 3.10.0-327.22.2.el7.x86_64 #1 SMP Thu Jun 23
17:05:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
[root@ogias ~]# cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)
[root@ogias ~]#

I am trying to email you the sos report but it exceeds our mail server size
limit :(
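
(One workaround I could try, just as a sketch, the file name below is a placeholder 
and not the real report name, is to split the archive into mail-sized chunks:

[root@ogias ~]# split -b 20M sosreport-ogias.tar.xz sosreport-ogias.tar.xz.part-
# receiving side: cat sosreport-ogias.tar.xz.part-* > sosreport-ogias.tar.xz

or I could simply upload it somewhere and share a link.)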

Regards,


--
Fernando Fuentes
ffuen...@txweather.org
http://www.txweather.org



On Sat, Jan 28, 2017, at 12:29 PM, Sandro Bonazzola wrote:



On 27/Jan/2017 16:51, "Fernando Fuentes"  wrote:

Team,

After a host update on my cluster, all of my Windows VMs running the
vdagent from the oVirt tools are running at 100% CPU utilization.

Any ideas why this would happen?


Hi,
Can you please share details about the update?
Which distribution? What has been updated? Can you share a sos report from
the host?




Regards,


--
Fernando Fuentes
ffuen...@txweather.org
http://www.txweather.org
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VDAgent

2017-02-03 Thread Fernando Fuentes
Anybody? :(



--

Fernando Fuentes

ffuen...@txweather.org

http://www.txweather.org







On Mon, Jan 30, 2017, at 02:56 PM, Fernando Fuentes wrote:

> Sandro,
>
> I did the update from the hosts tab on ovirt:
> The ovirt version is: oVirt Engine Version: 4.0.2.6-1.el7.centos
>
> All of my hosts are Cent7 x86_64
> [root@ogias ~]# uname -a
> Linux ogias.aasteel.net 3.10.0-327.22.2.el7.x86_64 #1 SMP Thu Jun 23
> 17:05:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
> [root@ogias ~]# cat /etc/redhat-release
> CentOS Linux release 7.2.1511 (Core)
> [root@ogias ~]#
>
> I am trying to email you the sos report but it exceeds our mail server
> size limit :(
>
> Regards,
>
> --
> Fernando Fuentes
> ffuen...@txweather.org
> http://www.txweather.org
>
> On Sat, Jan 28, 2017, at 12:29 PM, Sandro Bonazzola wrote:
>
>> On 27/Jan/2017 16:51, "Fernando Fuentes"  wrote:
>>> Team,
>>>
>>> After a host update on my cluster, all of my Windows VMs running the
>>> vdagent from the oVirt tools are running at 100% CPU utilization.
>>>
>>> Any ideas why this would happen?
>>
>> Hi,
>> Can you please share details about the update?
>> Which distribution? What has been updated? Can you share a sos report
>> from the host?
>>
>>> Regards,
>>>
>>> --
>>> Fernando Fuentes ffuen...@txweather.org http://www.txweather.org
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] high availability

2017-02-03 Thread cmc
Hi,

I have some questions about oVirt's high availability features for
VMs. My understanding is that it relies on the engine host to monitor
and manage the hypervisor hosts, so that in the case of an
unrecoverable failure of one of those hosts, it will fence the host and
migrate any VM that is designated as highly available to another host
in the cluster. However, if the engine is itself hosted as a VM on a
host that fails, this process cannot take place, as the engine will be
down and cannot initiate monitoring, fencing and migration - is that
correct?

There is the option of hosting the engine externally on dedicated
hardware, or on another cluster, but then it is still a single point
of failure. I recall reading about plans for an HA engine in the
future though.

Can someone tell me what the roadmap is? Is there a plan to put
something like an HA agent on all the hypervisors in the cluster so
there is no single point of failure?

Thanks for any information,

Cam
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-03 Thread Simone Tiraboschi
On Fri, Feb 3, 2017 at 7:20 PM, Simone Tiraboschi 
wrote:

>
>
> On Fri, Feb 3, 2017 at 5:22 PM, Ralf Schenk  wrote:
>
>> Hello,
>>
>> of course:
>>
>> [root@microcloud27 mnt]# sanlock client status
>> daemon 8a93c9ea-e242-408c-a63d-a9356bb22df5.microcloud
>> p -1 helper
>> p -1 listener
>> p -1 status
>>
>> sanlock.log attached. (Beginning 2017-01-27 where everything was fine)
>>
> Thanks, the issue is here:
>
> 2017-02-02 19:01:22+0100 4848 [1048]: s36 lockspace 
> 7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96:3:/rhev/data-center/mnt/glusterSD/glusterfs.rxmgmt.databay.de:_engine/7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96/dom_md/ids:0
> 2017-02-02 19:03:42+0100 4988 [12983]: s36 delta_acquire host_id 3 busy1 3 15 
> 13129 7ad427b1-fbb6-4cee-b9ee-01f596fddfbb.microcloud
> 2017-02-02 19:03:43+0100 4989 [1048]: s36 add_lockspace fail result -262
>
> Could you please check if you have other hosts contending for the same ID
> (id=3 in this case).
>

Another option is to manually force a sanlock renewal on that host and
check what happens, something like:
sanlock client renewal -s 7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96:3:/rhev/data-center/mnt/glusterSD/glusterfs.rxmgmt.databay.de:_engine/7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96/dom_md/ids:0


>
>
>> Bye
>>
>> On 03.02.2017 at 16:12, Simone Tiraboschi wrote:
>>
>> The hosted-engine storage domain is mounted for sure,
>> but the issue is here:
>> Exception: Failed to start monitoring domain
>> (sd_uuid=7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96, host_id=3): timeout
>> during domain acquisition
>>
>> The point is that in VDSM logs I see just something like:
>> 2017-02-02 21:05:22,283 INFO  (jsonrpc/1) [dispatcher] Run and protect:
>> repoStats(options=None) (logUtils:49)
>> 2017-02-02 21:05:22,285 INFO  (jsonrpc/1) [dispatcher] Run and protect:
>> repoStats, Return response: {u'a7fbaaad-7043-4391-9523-3bedcdc4fb0d':
>> {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay':
>> '0.000748727', 'lastCheck': '0.1', 'valid': True},
>> u'2b2a44fc-f2bd-47cd-b7af-00be59e30a35': {'code': 0, 'actual': True,
>> 'version': 0, 'acquired': True, 'delay': '0.00082529', 'lastCheck': '0.1',
>> 'valid': True}, u'5d99af76-33b5-47d8-99da-1f32413c7bb0': {'code': 0,
>> 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000349356',
>> 'lastCheck': '5.3', 'valid': True}, u'7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96':
>> {'code': 0, 'actual': True, 'version': 4, 'acquired': False, 'delay':
>> '0.000377052', 'lastCheck': '0.6', 'valid': True}} (logUtils:52)
>>
>> Where the other storage domains have 'acquired': True while it's
>> always 'acquired': False for the hosted-engine storage domain.
>>
>> Could you please share your /var/log/sanlock.log from the same host and
>> the output of
>>  sanlock client status
>> ?
>>
>>
>>
>>
>> On Fri, Feb 3, 2017 at 3:52 PM, Ralf Schenk  wrote:
>>
>>> Hello,
>>>
>>> I also put host in Maintenance and restarted vdsm while ovirt-ha-agent
>>> is running. I can mount the gluster Volume "engine" manually in the host.
>>>
>>> I get this repeatedly in /var/log/vdsm.log:
>>>
>>> 2017-02-03 15:29:28,891 INFO  (MainThread) [vds] Exiting (vdsm:167)
>>> 2017-02-03 15:29:30,974 INFO  (MainThread) [vds] (PID: 11456) I am the
>>> actual vdsm 4.19.4-1.el7.centos microcloud27 (3.10.0-514.6.1.el7.x86_64)
>>> (vdsm:145)
>>> 2017-02-03 15:29:30,974 INFO  (MainThread) [vds] VDSM will run with cpu
>>> affinity: frozenset([1]) (vdsm:251)
>>> 2017-02-03 15:29:31,013 INFO  (MainThread) [storage.check] Starting
>>> check service (check:91)
>>> 2017-02-03 15:29:31,017 INFO  (MainThread) [storage.Dispatcher] Starting
>>> StorageDispatcher... (dispatcher:47)
>>> 2017-02-03 15:29:31,017 INFO  (check/loop) [storage.asyncevent] Starting
>>>  (asyncevent:122)
>>> 2017-02-03 15:29:31,156 INFO  (MainThread) [dispatcher] Run and protect:
>>> registerDomainStateChangeCallback(callbackFunc=>> object at 0x2881fc8>) (logUtils:49)
>>> 2017-02-03 15:29:31,156 INFO  (MainThread) [dispatcher] Run and protect:
>>> registerDomainStateChangeCallback, Return response: None (logUtils:52)
>>> 2017-02-03 15:29:31,160 INFO  (MainThread) [MOM] Preparing MOM interface
>>> (momIF:49)
>>> 2017-02-03 15:29:31,161 INFO  (MainThread) [MOM] Using named unix socket
>>> /var/run/vdsm/mom-vdsm.sock (momIF:58)
>>> 2017-02-03 15:29:31,162 INFO  (MainThread) [root] Unregistering all
>>> secrets (secret:91)
>>> 2017-02-03 15:29:31,164 INFO  (MainThread) [vds] Setting channels'
>>> timeout to 30 seconds. (vmchannels:223)
>>> 2017-02-03 15:29:31,165 INFO  (MainThread) [vds.MultiProtocolAcceptor]
>>> Listening at :::54321 (protocoldetector:185)
>>> 2017-02-03 15:29:31,354 INFO  (vmrecovery) [vds] recovery: completed in
>>> 0s (clientIF:495)
>>> 2017-02-03 15:29:31,371 INFO  (BindingXMLRPC) [vds] XMLRPC server
>>> running (bindingxmlrpc:63)
>>> 2017-02-03 15:29:31,471 INFO  (periodic/1) [dispatcher] Run and protect:
>>> repoStats(options=None) (logUtils:49)
>>> 2017-02-03 15:29:31,472 INFO  (periodic/1) [dispatcher] Run and protect

Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-03 Thread Simone Tiraboschi
On Fri, Feb 3, 2017 at 5:22 PM, Ralf Schenk  wrote:

> Hello,
>
> of course:
>
> [root@microcloud27 mnt]# sanlock client status
> daemon 8a93c9ea-e242-408c-a63d-a9356bb22df5.microcloud
> p -1 helper
> p -1 listener
> p -1 status
>
> sanlock.log attached. (Beginning 2017-01-27 where everything was fine)
>
Thanks, the issue is here:

2017-02-02 19:01:22+0100 4848 [1048]: s36 lockspace
7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96:3:/rhev/data-center/mnt/glusterSD/glusterfs.rxmgmt.databay.de:_engine/7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96/dom_md/ids:0
2017-02-02 19:03:42+0100 4988 [12983]: s36 delta_acquire host_id 3
busy1 3 15 13129 7ad427b1-fbb6-4cee-b9ee-01f596fddfbb.microcloud
2017-02-02 19:03:43+0100 4989 [1048]: s36 add_lockspace fail result -262

Could you please check whether you have other hosts contending for the same ID
(id=3 in this case)?
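
For reference, a quick way to see which host_id each host is configured with is
something like the following (these are the standard hosted-engine paths; adjust
to your environment):

  # run on each host
  grep ^host_id /etc/ovirt-hosted-engine/hosted-engine.conf

'hosted-engine --vm-status' also reports the id each host is using.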


> Bye
>
> On 03.02.2017 at 16:12, Simone Tiraboschi wrote:
>
> The hosted-engine storage domain is mounted for sure,
> but the issue is here:
> Exception: Failed to start monitoring domain 
> (sd_uuid=7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96,
> host_id=3): timeout during domain acquisition
>
> The point is that in VDSM logs I see just something like:
> 2017-02-02 21:05:22,283 INFO  (jsonrpc/1) [dispatcher] Run and protect:
> repoStats(options=None) (logUtils:49)
> 2017-02-02 21:05:22,285 INFO  (jsonrpc/1) [dispatcher] Run and protect:
> repoStats, Return response: {u'a7fbaaad-7043-4391-9523-3bedcdc4fb0d':
> {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay':
> '0.000748727', 'lastCheck': '0.1', 'valid': True},
> u'2b2a44fc-f2bd-47cd-b7af-00be59e30a35': {'code': 0, 'actual': True,
> 'version': 0, 'acquired': True, 'delay': '0.00082529', 'lastCheck': '0.1',
> 'valid': True}, u'5d99af76-33b5-47d8-99da-1f32413c7bb0': {'code': 0,
> 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000349356',
> 'lastCheck': '5.3', 'valid': True}, u'7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96':
> {'code': 0, 'actual': True, 'version': 4, 'acquired': False, 'delay':
> '0.000377052', 'lastCheck': '0.6', 'valid': True}} (logUtils:52)
>
> Where the other storage domains have 'acquired': True while it's
> always 'acquired': False for the hosted-engine storage domain.
>
> Could you please share your /var/log/sanlock.log from the same host and
> the output of
>  sanlock client status
> ?
>
>
>
>
> On Fri, Feb 3, 2017 at 3:52 PM, Ralf Schenk  wrote:
>
>> Hello,
>>
>> I also put host in Maintenance and restarted vdsm while ovirt-ha-agent is
>> running. I can mount the gluster Volume "engine" manually in the host.
>>
>> I get this repeatedly in /var/log/vdsm.log:
>>
>> 2017-02-03 15:29:28,891 INFO  (MainThread) [vds] Exiting (vdsm:167)
>> 2017-02-03 15:29:30,974 INFO  (MainThread) [vds] (PID: 11456) I am the
>> actual vdsm 4.19.4-1.el7.centos microcloud27 (3.10.0-514.6.1.el7.x86_64)
>> (vdsm:145)
>> 2017-02-03 15:29:30,974 INFO  (MainThread) [vds] VDSM will run with cpu
>> affinity: frozenset([1]) (vdsm:251)
>> 2017-02-03 15:29:31,013 INFO  (MainThread) [storage.check] Starting check
>> service (check:91)
>> 2017-02-03 15:29:31,017 INFO  (MainThread) [storage.Dispatcher] Starting
>> StorageDispatcher... (dispatcher:47)
>> 2017-02-03 15:29:31,017 INFO  (check/loop) [storage.asyncevent] Starting
>>  (asyncevent:122)
>> 2017-02-03 15:29:31,156 INFO  (MainThread) [dispatcher] Run and protect:
>> registerDomainStateChangeCallback(callbackFunc=> at 0x2881fc8>) (logUtils:49)
>> 2017-02-03 15:29:31,156 INFO  (MainThread) [dispatcher] Run and protect:
>> registerDomainStateChangeCallback, Return response: None (logUtils:52)
>> 2017-02-03 15:29:31,160 INFO  (MainThread) [MOM] Preparing MOM interface
>> (momIF:49)
>> 2017-02-03 15:29:31,161 INFO  (MainThread) [MOM] Using named unix socket
>> /var/run/vdsm/mom-vdsm.sock (momIF:58)
>> 2017-02-03 15:29:31,162 INFO  (MainThread) [root] Unregistering all
>> secrets (secret:91)
>> 2017-02-03 15:29:31,164 INFO  (MainThread) [vds] Setting channels'
>> timeout to 30 seconds. (vmchannels:223)
>> 2017-02-03 15:29:31,165 INFO  (MainThread) [vds.MultiProtocolAcceptor]
>> Listening at :::54321 (protocoldetector:185)
>> 2017-02-03 15:29:31,354 INFO  (vmrecovery) [vds] recovery: completed in
>> 0s (clientIF:495)
>> 2017-02-03 15:29:31,371 INFO  (BindingXMLRPC) [vds] XMLRPC server running
>> (bindingxmlrpc:63)
>> 2017-02-03 15:29:31,471 INFO  (periodic/1) [dispatcher] Run and protect:
>> repoStats(options=None) (logUtils:49)
>> 2017-02-03 15:29:31,472 INFO  (periodic/1) [dispatcher] Run and protect:
>> repoStats, Return response: {} (logUtils:52)
>> 2017-02-03 15:29:31,472 WARN  (periodic/1) [MOM] MOM not available.
>> (momIF:116)
>> 2017-02-03 15:29:31,473 WARN  (periodic/1) [MOM] MOM not available, KSM
>> stats will be missing. (momIF:79)
>> 2017-02-03 15:29:31,474 ERROR (periodic/1) [root] failed to retrieve
>> Hosted Engine HA info (api:252)
>> Traceback (most recent call last):
>>   File "/usr/lib/python2.7/site-packages/vdsm/host/api.py", line 231, in
>> _getHaInfo

Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-03 Thread Jiri Slezka

Hi,

I updated our oVirt cluster the day after 4.1.0 went public.

The upgrade was simple, but while migrating and upgrading hosts some VMs got 
stuck at 100% CPU usage and became totally unresponsive. I had to power them 
off and start them again. It could be a problem with the CentOS 7.2->7.3 
transition or the kvm-ev upgrade, though; unfortunately I have had no time 
to examine the logs yet :-( 


I also experienced one or two "UI Exception" errors, but nothing major. 

The UI is more and more polished. I really like how it is shifting to the 
PatternFly look and feel. 


By the way, we have a standalone Gluster cluster, not for VMs, just for general 
storage purposes. Is it wise to use the oVirt manager as a web UI for its management? 

Is it safe to import this Gluster cluster into oVirt? I saw the option there, but I 
don't want to break things that work :-) 


Finally, thanks for your great work. There are still a lot of features I miss in 
oVirt, but it is highly usable and a great piece of software. The oVirt community 
is also nice and helpful. 


Cheers,

Jiri



On 02/02/2017 01:19 PM, Sandro Bonazzola wrote:

Hi,
did you install/update to 4.1.0? Let us know your experience!
We usually end up hearing about it only when things don't work well, so let us know
if it works fine for you :-)

If you're not planning an update to 4.1.0 in the near future, let us
know why.
Maybe we can help.

Thanks!
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com 





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VM has been paused due to storage I/O problem

2017-02-03 Thread Benjamin Marzinski
On Fri, Feb 03, 2017 at 12:31:49AM +0100, Gianluca Cecchi wrote:
>On Thu, Feb 2, 2017 at 10:53 PM, Benjamin Marzinski
><[1]bmarz...@redhat.com> wrote:
> 
>  > > I'm trying to mitigate inserting a timeout for my SAN devices but
>  I'm not
>  > > sure of its effectiveness as CentOS 7 behavior  of "multipathd -k"
>  and then
>  > > "show config" seems different from CentOS 6.x
>  > > In fact my attempt for multipath.conf is this
> 
>  There was a significant change in how multipath deals with merging
>  device configurations between RHEL6 and RHEL7.  The short answer is, as
>  long as you copy the entire existing configuration, and just change what
>  you want changed (like you did), you can ignore the change.  Also,
>  multipath doesn't care if you quote numbers.
> 
>  If you want to verify that no_path_retry is being set as intended, you
>  can run:
> 
>  # multipath -r -v3 | grep no_path_retry
> 
>Hi Benjamin,
>thank you very much for the explanations, especially the long one ;-)
>I tried and confirmed that I have no_path_retry = 4 as expected
>The regex matching is only for merge, correct?

No. Both RHEL6 and RHEL7 use regex matching to determine which
device configuration to use with your device, otherwise

product "^1814"

would never match any device, since there is no array with a literal
product string of "^1814". RHEL7 also uses the same regex matching to
determine which builtin device configuration a user-supplied device
configuration should modify. RHEL6 uses string matching for this. 

>So in your example if in RH EL 7 I put this
>        device {
>                vendor "IBM"
>                product "^1814"
>                no_path_retry 12
>        }
>It would not match for merging, but it would match for applying to my
>device (because it is put at the end of config read backwards).

correct.  The confusing point is that in the merging case, "^1814" in
the user-supplied configuration is being treated as a string that needs
to regex match the regular expression "^1814" in the builtin
configuration. These don't match. For matching the device configuration
to the device, "^1814" in the user-supplied configuration is being
treated as a regular expression that needs to regex match the actual
product string of the device.

>And it would apply only the no_path_retry setting, while all other ones
>would not be picked from builtin configuration for device, but from
>defaults in general.
>So for example it would set path_checker not this way:
>path_checker "rdac"
>but this way:
>path_checker "directio"
>that is default..
>correct?

exactly.
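
For what it's worth, on RHEL/CentOS 7 you can also dump the merged configuration
the running daemon is using and inspect the resulting device section directly.
A rough sketch (the product string is just the one from this thread):

  # multipathd show config | grep -A 25 '"\^1814"'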

-Ben
 
> References
> 
>Visible links
>1. mailto:bmarz...@redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-03 Thread Andrea Ghelardi
Running oVirt v4.0.5.5.1 and not planning to upgrade to 4.1 yet.
We are happy with the stability of our production servers and will wait for 4.1.1 to 
come out.
The only real reason for us to upgrade would be the added compatibility with the 
Windows Server 2016 guest tools.
… and TRIM, of course, but we can wait a little bit longer for it…

Cheers
AG

From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf Of 
Sandro Bonazzola
Sent: Thursday, February 2, 2017 1:19 PM
To: users 
Subject: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

Hi,
did you install/update to 4.1.0? Let us know your experience!
We usually end up hearing about it only when things don't work well, so let us know if it 
works fine for you :-)

If you're not planning an update to 4.1.0 in the near future, let us know why.
Maybe we can help.

Thanks!
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-03 Thread Simone Tiraboschi
The hosted-engine storage domain is mounted for sure,
but the issue is here:
Exception: Failed to start monitoring domain
(sd_uuid=7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96, host_id=3): timeout during
domain acquisition

The point is that in VDSM logs I see just something like:
2017-02-02 21:05:22,283 INFO  (jsonrpc/1) [dispatcher] Run and protect:
repoStats(options=None) (logUtils:49)
2017-02-02 21:05:22,285 INFO  (jsonrpc/1) [dispatcher] Run and protect:
repoStats, Return response: {u'a7fbaaad-7043-4391-9523-3bedcdc4fb0d':
{'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay':
'0.000748727', 'lastCheck': '0.1', 'valid': True},
u'2b2a44fc-f2bd-47cd-b7af-00be59e30a35': {'code': 0, 'actual': True,
'version': 0, 'acquired': True, 'delay': '0.00082529', 'lastCheck': '0.1',
'valid': True}, u'5d99af76-33b5-47d8-99da-1f32413c7bb0': {'code': 0,
'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000349356',
'lastCheck': '5.3', 'valid': True},
u'7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96': {'code': 0, 'actual': True,
'version': 4, 'acquired': False, 'delay': '0.000377052', 'lastCheck':
'0.6', 'valid': True}} (logUtils:52)

Where the other storage domains have 'acquired': True while it's
always 'acquired': False for the hosted-engine storage domain.

Could you please share your /var/log/sanlock.log from the same host and the
output of
 sanlock client status
?




On Fri, Feb 3, 2017 at 3:52 PM, Ralf Schenk  wrote:

> Hello,
>
> I also put host in Maintenance and restarted vdsm while ovirt-ha-agent is
> running. I can mount the gluster Volume "engine" manually in the host.
>
> I get this repeatedly in /var/log/vdsm.log:
>
> 2017-02-03 15:29:28,891 INFO  (MainThread) [vds] Exiting (vdsm:167)
> 2017-02-03 15:29:30,974 INFO  (MainThread) [vds] (PID: 11456) I am the
> actual vdsm 4.19.4-1.el7.centos microcloud27 (3.10.0-514.6.1.el7.x86_64)
> (vdsm:145)
> 2017-02-03 15:29:30,974 INFO  (MainThread) [vds] VDSM will run with cpu
> affinity: frozenset([1]) (vdsm:251)
> 2017-02-03 15:29:31,013 INFO  (MainThread) [storage.check] Starting check
> service (check:91)
> 2017-02-03 15:29:31,017 INFO  (MainThread) [storage.Dispatcher] Starting
> StorageDispatcher... (dispatcher:47)
> 2017-02-03 15:29:31,017 INFO  (check/loop) [storage.asyncevent] Starting
>  (asyncevent:122)
> 2017-02-03 15:29:31,156 INFO  (MainThread) [dispatcher] Run and protect:
> registerDomainStateChangeCallback(callbackFunc= at 0x2881fc8>) (logUtils:49)
> 2017-02-03 15:29:31,156 INFO  (MainThread) [dispatcher] Run and protect:
> registerDomainStateChangeCallback, Return response: None (logUtils:52)
> 2017-02-03 15:29:31,160 INFO  (MainThread) [MOM] Preparing MOM interface
> (momIF:49)
> 2017-02-03 15:29:31,161 INFO  (MainThread) [MOM] Using named unix socket
> /var/run/vdsm/mom-vdsm.sock (momIF:58)
> 2017-02-03 15:29:31,162 INFO  (MainThread) [root] Unregistering all
> secrets (secret:91)
> 2017-02-03 15:29:31,164 INFO  (MainThread) [vds] Setting channels' timeout
> to 30 seconds. (vmchannels:223)
> 2017-02-03 15:29:31,165 INFO  (MainThread) [vds.MultiProtocolAcceptor]
> Listening at :::54321 (protocoldetector:185)
> 2017-02-03 15:29:31,354 INFO  (vmrecovery) [vds] recovery: completed in 0s
> (clientIF:495)
> 2017-02-03 15:29:31,371 INFO  (BindingXMLRPC) [vds] XMLRPC server running
> (bindingxmlrpc:63)
> 2017-02-03 15:29:31,471 INFO  (periodic/1) [dispatcher] Run and protect:
> repoStats(options=None) (logUtils:49)
> 2017-02-03 15:29:31,472 INFO  (periodic/1) [dispatcher] Run and protect:
> repoStats, Return response: {} (logUtils:52)
> 2017-02-03 15:29:31,472 WARN  (periodic/1) [MOM] MOM not available.
> (momIF:116)
> 2017-02-03 15:29:31,473 WARN  (periodic/1) [MOM] MOM not available, KSM
> stats will be missing. (momIF:79)
> 2017-02-03 15:29:31,474 ERROR (periodic/1) [root] failed to retrieve
> Hosted Engine HA info (api:252)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/host/api.py", line 231, in
> _getHaInfo
> stats = instance.get_all_stats()
>   File 
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py",
> line 103, in get_all_stats
> self._configure_broker_conn(broker)
>   File 
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py",
> line 180, in _configure_broker_conn
> dom_type=dom_type)
>   File 
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py",
> line 177, in set_storage_domain
> .format(sd_type, options, e))
> RequestError: Failed to set storage domain FilesystemBackend, options
> {'dom_type': 'glusterfs', 'sd_uuid': '7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96'}:
> Request failed: <class 'ovirt_hosted_engine_ha.lib.storage_backends.BackendFailureException'>
> 2017-02-03 15:29:35,920 INFO  (Reactor thread) [ProtocolDetector.AcceptorImpl]
> Accepted connection from ::1:49506 (protocoldetector:72)
> 2017-02-03 15:29:35,929 INFO  (Reactor thread) [ProtocolDetector.Detector]
> Detected protocol stomp from ::1:49506 (protocoldetector:127

Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-03 Thread Ralf Schenk
Hello,

I also put the host in Maintenance and restarted vdsm while ovirt-ha-agent
is running. I can mount the gluster volume "engine" manually on the host.

I get this repeatedly in /var/log/vdsm.log:

2017-02-03 15:29:28,891 INFO  (MainThread) [vds] Exiting (vdsm:167)
2017-02-03 15:29:30,974 INFO  (MainThread) [vds] (PID: 11456) I am the
actual vdsm 4.19.4-1.el7.centos microcloud27 (3.10.0-514.6.1.el7.x86_64)
(vdsm:145)
2017-02-03 15:29:30,974 INFO  (MainThread) [vds] VDSM will run with cpu
affinity: frozenset([1]) (vdsm:251)
2017-02-03 15:29:31,013 INFO  (MainThread) [storage.check] Starting
check service (check:91)
2017-02-03 15:29:31,017 INFO  (MainThread) [storage.Dispatcher] Starting
StorageDispatcher... (dispatcher:47)
2017-02-03 15:29:31,017 INFO  (check/loop) [storage.asyncevent] Starting
 (asyncevent:122)
2017-02-03 15:29:31,156 INFO  (MainThread) [dispatcher] Run and protect:
registerDomainStateChangeCallback(callbackFunc=) (logUtils:49)
2017-02-03 15:29:31,156 INFO  (MainThread) [dispatcher] Run and protect:
registerDomainStateChangeCallback, Return response: None (logUtils:52)
2017-02-03 15:29:31,160 INFO  (MainThread) [MOM] Preparing MOM interface
(momIF:49)
2017-02-03 15:29:31,161 INFO  (MainThread) [MOM] Using named unix socket
/var/run/vdsm/mom-vdsm.sock (momIF:58)
2017-02-03 15:29:31,162 INFO  (MainThread) [root] Unregistering all
secrets (secret:91)
2017-02-03 15:29:31,164 INFO  (MainThread) [vds] Setting channels'
timeout to 30 seconds. (vmchannels:223)
2017-02-03 15:29:31,165 INFO  (MainThread) [vds.MultiProtocolAcceptor]
Listening at :::54321 (protocoldetector:185)
2017-02-03 15:29:31,354 INFO  (vmrecovery) [vds] recovery: completed in
0s (clientIF:495)
2017-02-03 15:29:31,371 INFO  (BindingXMLRPC) [vds] XMLRPC server
running (bindingxmlrpc:63)
2017-02-03 15:29:31,471 INFO  (periodic/1) [dispatcher] Run and protect:
repoStats(options=None) (logUtils:49)
2017-02-03 15:29:31,472 INFO  (periodic/1) [dispatcher] Run and protect:
repoStats, Return response: {} (logUtils:52)
2017-02-03 15:29:31,472 WARN  (periodic/1) [MOM] MOM not available.
(momIF:116)
2017-02-03 15:29:31,473 WARN  (periodic/1) [MOM] MOM not available, KSM
stats will be missing. (momIF:79)
2017-02-03 15:29:31,474 ERROR (periodic/1) [root] failed to retrieve
Hosted Engine HA info (api:252)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/host/api.py", line 231, in
_getHaInfo
stats = instance.get_all_stats()
  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py",
line 103, in get_all_stats
self._configure_broker_conn(broker)
  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py",
line 180, in _configure_broker_conn
dom_type=dom_type)
  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py",
line 177, in set_storage_domain
.format(sd_type, options, e))
RequestError: Failed to set storage domain FilesystemBackend, options
{'dom_type': 'glusterfs', 'sd_uuid':
'7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96'}: Request failed: <class 'ovirt_hosted_engine_ha.lib.storage_backends.BackendFailureException'>
2017-02-03 15:29:35,920 INFO  (Reactor thread)
[ProtocolDetector.AcceptorImpl] Accepted connection from ::1:49506
(protocoldetector:72)
2017-02-03 15:29:35,929 INFO  (Reactor thread)
[ProtocolDetector.Detector] Detected protocol stomp from ::1:49506
(protocoldetector:127)
2017-02-03 15:29:35,930 INFO  (Reactor thread) [Broker.StompAdapter]
Processing CONNECT request (stompreactor:102)
2017-02-03 15:29:35,930 INFO  (JsonRpc (StompReactor))
[Broker.StompAdapter] Subscribe command received (stompreactor:129)
2017-02-03 15:29:36,067 INFO  (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC
call Host.ping succeeded in 0.00 seconds (__init__:515)
2017-02-03 15:29:36,071 INFO  (jsonrpc/1) [throttled] Current
getAllVmStats: {} (throttledlog:105)
2017-02-03 15:29:36,071 INFO  (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC
call Host.getAllVmStats succeeded in 0.00 seconds (__init__:515)
2017-02-03 15:29:46,435 INFO  (periodic/0) [dispatcher] Run and protect:
repoStats(options=None) (logUtils:49)
2017-02-03 15:29:46,435 INFO  (periodic/0) [dispatcher] Run and protect:
repoStats, Return response: {} (logUtils:52)
2017-02-03 15:29:46,439 ERROR (periodic/0) [root] failed to retrieve
Hosted Engine HA info (api:252)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/host/api.py", line 231, in
_getHaInfo
stats = instance.get_all_stats()
  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py",
line 103, in get_all_stats
self._configure_broker_conn(broker)
  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py",
line 180, in _configure_broker_conn
dom_type=dom_type)
  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py",
line 177, in set_storage_domain
.format(sd_type, options, e))
RequestError: Failed to set storage domain FilesystemBackend, options
{'dom_type': 'glusterfs', 'sd_uuid':
'7c8deaa8-be02-4aaf-

Re: [ovirt-users] NFS and pass discards\unmap question

2017-02-03 Thread Sergey Kulikov

Thanks! I think it's now a question for NetApp as to when they'll make 4.2 available.
I tried to manually mount v4.2 on the host,
but unfortunately:
# mount -o vers=4.2 10.1.1.111:/test /tmp/123 
mount.nfs: Protocol not supported

so, my NetApp is vers=4.1 max (
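
(For anyone checking the same thing: a quick way to see which NFS version was
actually negotiated on the existing mounts of a host is something like

  # nfsstat -m

which prints the vers=/minorversion mount options per mount point; nfsstat is
part of nfs-utils.)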

-- 



 Friday, February 3, 2017, 15:54:54:





> On Feb 3, 2017 1:50 PM, "Nir Soffer"  wrote:

> On Fri, Feb 3, 2017 at 2:29 PM, Sergey Kulikov  wrote:
 >>
 >>
 >> Hm... maybe I need to set any options, is there any way to force ovirt to 
 >> mount with this extension, or version 4.2
 >> there is only 4.1 selection in "New Domain" menu.
 >> Current mount options:
 >> type nfs4 
 >> (rw,relatime,vers=4.1,rsize=65536,wsize=65536,namlen=255,soft,nosharecache,proto=tcp,port=0,timeo=600,retrans=6,sec=sys,local_lock=none)
 >>
 >> it should work only if forced option vers=4.2 ?
 >> I thought it's implemented as feature to older version, not 4.2, there is 
 >> few info about this.
>  
>  Looks like ovirt engine does not allow nfs version 4.2.




> But custom options can be used. 
> Y. 


>  
>  We have this RFE:
>  https://bugzilla.redhat.com/1406398
>  
>  So practically, both sparsify and pass discard with NFS are useless
>  in the current version.
>  
>  I think this should be fix for next 4.1 build.
>  
>  Nir
>  

 >>
 >>
 >> --
 >>
 >>
 >>
 >>  Friday, February 3, 2017, 14:45:43:
 >>
 >>
 >>
 >>
 >>
 >>> On Thu, Feb 2, 2017 at 11:45 PM, Sergey Kulikov  wrote:
 >>
 >>
 >>>  I've upgraded to 4.1 release, it have great feature "Pass
 >>> discards", that now can be used without vdsm hooks,
 >>>  After upgrade I've tested it with NFS 4.1 storage, exported from
 >>> netapp, but unfortunately found out, that
 >>>  it's not working, after some investigation, I've found, that NFS
 >>> implementation(even 4.1) in Centos 7
 >>>  doesn't support sparse files and fallocate(FALLOC_FL_PUNCH_HOLE),
 >>> that quemu uses for file storage, it was
 >>>  added only in kernel 3.18, and sparse files is also announced feature of 
 >>>upcoming NFS4.2,
 >>>  sparsify also not working on this data domains(runs, but nothing happens).
 >>>
 >>>  This test also shows, that FALLOC_FL_PUNCH_HOLE not working, it
 >>> was executed on centos ovirt host with mounted nfs share:
 >>>  # truncate -s 1024 test1
 >>>  # fallocate -p -o 0 -l 1024 test1
 >>>  fallocate: keep size mode (-n option) unsupported
 >>>
 >>>  Is there any plans to backport this feature to node-ng, or centos? or we 
 >>>should wait for RHEL 8?
 >>
 >>
 >>
 >>
 >>> Interesting, I was under the impression it was fixed some time ago,
 >>> for 7.2[1] (kernel-3.10.0-313.el7)
 >>> Perhaps you are not mounted with 4.2?
 >>
 >>
 >>> Y.
 >>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1079385
 >>>
 >>>  NFS is more and more popular, so discards is VERY useful feature.
 >>>  I'm also planning to test fallocate on latest fedora with 4.x kernel and 
 >>>mounted nfs.
 >>>
 >>>  Thanks for your work!
 >>>
 >>>  --
 >>>

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] NFS and pass discards\unmap question

2017-02-03 Thread Nir Soffer
On Fri, Feb 3, 2017 at 2:54 PM, Yaniv Kaul  wrote:
>
>
> On Feb 3, 2017 1:50 PM, "Nir Soffer"  wrote:
>
> On Fri, Feb 3, 2017 at 2:29 PM, Sergey Kulikov  wrote:
>>
>>
>> Hm... maybe I need to set any options, is there any way to force ovirt to
>> mount with this extension, or version 4.2
>> there is only 4.1 selection in "New Domain" menu.
>> Current mount options:
>> type nfs4
>> (rw,relatime,vers=4.1,rsize=65536,wsize=65536,namlen=255,soft,nosharecache,proto=tcp,port=0,timeo=600,retrans=6,sec=sys,local_lock=none)
>>
>> it should work only if forced option vers=4.2 ?
>> I thought it's implemented as feature to older version, not 4.2, there is
>> few info about this.
>
> Looks like ovirt engine does not allow nfs version 4.2.
>
>
> But custom options can be used.

Does not work for me - I tried all combinations of:

NFS Version: Auto Negotiate
NFS Version: V4
NFS Version: V4.1
NFS Version: V3 (default)

With:

Additional mount options: nfsvers=4,minorversion=2
Additional mount options: minorversion=2
Additional mount options: vers=4.2

It always fails with this error:

Error while executing action: Cannot edit Storage Connection.
Custom mount options contain the following duplicate managed options:
...

Engine does not let you specify minorversion, nfsvers, or vers.

Adding a managed 4.2 item to the menu seems like the way to fix this.

Nir

> Y.
>
>
> We have this RFE:
> https://bugzilla.redhat.com/1406398
>
> So practically, both sparsify and pass discard with NFS are useless
> in the current version.
>
> I think this should be fix for next 4.1 build.
>
> Nir
>
>>
>>
>> --
>>
>>
>>
>>  Friday, February 3, 2017, 14:45:43:
>>
>>
>>
>>
>>
>>> On Thu, Feb 2, 2017 at 11:45 PM, Sergey Kulikov  wrote:
>>
>>
>>>  I've upgraded to 4.1 release, it have great feature "Pass
>>> discards", that now can be used without vdsm hooks,
>>>  After upgrade I've tested it with NFS 4.1 storage, exported from
>>> netapp, but unfortunately found out, that
>>>  it's not working, after some investigation, I've found, that NFS
>>> implementation(even 4.1) in Centos 7
>>>  doesn't support sparse files and fallocate(FALLOC_FL_PUNCH_HOLE),
>>> that quemu uses for file storage, it was
>>>  added only in kernel 3.18, and sparse files is also announced feature of
>>> upcoming NFS4.2,
>>>  sparsify also not working on this data domains(runs, but nothing
>>> happens).
>>>
>>>  This test also shows, that FALLOC_FL_PUNCH_HOLE not working, it
>>> was executed on centos ovirt host with mounted nfs share:
>>>  # truncate -s 1024 test1
>>>  # fallocate -p -o 0 -l 1024 test1
>>>  fallocate: keep size mode (-n option) unsupported
>>>
>>>  Is there any plans to backport this feature to node-ng, or centos? or we
>>> should wait for RHEL 8?
>>
>>
>>
>>
>>> Interesting, I was under the impression it was fixed some time ago,
>>> for 7.2[1] (kernel-3.10.0-313.el7)
>>> Perhaps you are not mounted with 4.2?
>>
>>
>>> Y.
>>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1079385
>>>
>>>  NFS is more and more popular, so discards is VERY useful feature.
>>>  I'm also planning to test fallocate on latest fedora with 4.x kernel and
>>> mounted nfs.
>>>
>>>  Thanks for your work!
>>>
>>>  --
>>>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] oVirt Engine 4.1 fails on specifying interface for ovirtmgmt

2017-02-03 Thread Karli Sjöberg
Heya!

I´m trying to complete 'hosted-engine --deploy' but I´m stuck at:
Please indicate a nic to set ovirtmgmt bridge on: (eno1) [eno1]: eno1.1
[ ERROR ] Invalid value

As you can see it´s a vlan-tagged interface. If I type in just 'eno1',
the setup continues but fails obviously since it´s the wrong network.

Any pointers?
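
(In case it helps to show what I mean, eno1.1 is just a plain VLAN device on top
of eno1. A rough sketch of the ifcfg file, with a placeholder address:

  # cat /etc/sysconfig/network-scripts/ifcfg-eno1.1
  DEVICE=eno1.1
  VLAN=yes
  ONBOOT=yes
  BOOTPROTO=none
  IPADDR=192.0.2.10   # placeholder, not my real address
  PREFIX=24

and the ovirtmgmt bridge would need to end up on that tagged device.)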

/K
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] NFS and pass discards\unmap question

2017-02-03 Thread Yaniv Kaul
On Feb 3, 2017 1:50 PM, "Nir Soffer"  wrote:

On Fri, Feb 3, 2017 at 2:29 PM, Sergey Kulikov  wrote:
>
>
> Hm... maybe I need to set any options, is there any way to force ovirt to
mount with this extension, or version 4.2
> there is only 4.1 selection in "New Domain" menu.
> Current mount options:
> type nfs4 (rw,relatime,vers=4.1,rsize=65536,wsize=65536,namlen=255,
soft,nosharecache,proto=tcp,port=0,timeo=600,retrans=6,
sec=sys,local_lock=none)
>
> it should work only if forced option vers=4.2 ?
> I thought it's implemented as feature to older version, not 4.2, there is
few info about this.

Looks like ovirt engine does not allow nfs version 4.2.


But custom options can be used.
Y.


We have this RFE:
https://bugzilla.redhat.com/1406398

So practically, both sparsify and pass discard with NFS are useless
in the current version.

I think this should be fixed in the next 4.1 build.

Nir

>
>
> --
>
>
>
>  Friday, February 3, 2017, 14:45:43:
>
>
>
>
>
>> On Thu, Feb 2, 2017 at 11:45 PM, Sergey Kulikov  wrote:
>
>
>>  I've upgraded to 4.1 release, it have great feature "Pass
>> discards", that now can be used without vdsm hooks,
>>  After upgrade I've tested it with NFS 4.1 storage, exported from
>> netapp, but unfortunately found out, that
>>  it's not working, after some investigation, I've found, that NFS
>> implementation(even 4.1) in Centos 7
>>  doesn't support sparse files and fallocate(FALLOC_FL_PUNCH_HOLE),
>> that quemu uses for file storage, it was
>>  added only in kernel 3.18, and sparse files is also announced feature
of upcoming NFS4.2,
>>  sparsify also not working on this data domains(runs, but nothing
happens).
>>
>>  This test also shows, that FALLOC_FL_PUNCH_HOLE not working, it
>> was executed on centos ovirt host with mounted nfs share:
>>  # truncate -s 1024 test1
>>  # fallocate -p -o 0 -l 1024 test1
>>  fallocate: keep size mode (-n option) unsupported
>>
>>  Is there any plans to backport this feature to node-ng, or centos? or
we should wait for RHEL 8?
>
>
>
>
>> Interesting, I was under the impression it was fixed some time ago,
>> for 7.2[1] (kernel-3.10.0-313.el7)
>> Perhaps you are not mounted with 4.2?
>
>
>> Y.
>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1079385
>>
>>  NFS is more and more popular, so discards is VERY useful feature.
>>  I'm also planning to test fallocate on latest fedora with 4.x kernel
and mounted nfs.
>>
>>  Thanks for your work!
>>
>>  --
>>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Orphaned Export Domain

2017-02-03 Thread Maton, Brett
Managed to attach it by editing dom_md/metadata and removing

POOL_UUID
SDUUID

then updating the checksum...
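
(For the record, a rough sketch of one way the checksum could be recomputed. The
_SHA_CKSUM key name and the "SHA-1 over all other metadata lines" rule are my
understanding of the format, not something verified against the vdsm source, so
treat this as an assumption:

  # checksum of everything except the _SHA_CKSUM line itself
  grep -v '^_SHA_CKSUM' dom_md/metadata | tr -d '\n' | sha1sum
)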


On 3 February 2017 at 12:19, Maton, Brett  wrote:

> I forgot to cleanly detach the export domain from my previous cluster ( no
> longer exists ), how can I import the domain into a new cluster ?
>
> Any help appreciated
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] NFS and pass discards\unmap question

2017-02-03 Thread Nir Soffer
On Fri, Feb 3, 2017 at 2:29 PM, Sergey Kulikov  wrote:
>
>
> Hm... maybe I need to set any options, is there any way to force ovirt to 
> mount with this extension, or version 4.2
> there is only 4.1 selection in "New Domain" menu.
> Current mount options:
> type nfs4 
> (rw,relatime,vers=4.1,rsize=65536,wsize=65536,namlen=255,soft,nosharecache,proto=tcp,port=0,timeo=600,retrans=6,sec=sys,local_lock=none)
>
> it should work only if forced option vers=4.2 ?
> I thought it's implemented as feature to older version, not 4.2, there is few 
> info about this.

Looks like ovirt engine does not allow nfs version 4.2.

We have this RFE:
https://bugzilla.redhat.com/1406398

So practically, both sparsify and pass discard with NFS are useless
in the current version.

I think this should be fixed in the next 4.1 build.

Nir

>
>
> --
>
>
>
>  Friday, February 3, 2017, 14:45:43:
>
>
>
>
>
>> On Thu, Feb 2, 2017 at 11:45 PM, Sergey Kulikov  wrote:
>
>
>>  I've upgraded to 4.1 release, it have great feature "Pass
>> discards", that now can be used without vdsm hooks,
>>  After upgrade I've tested it with NFS 4.1 storage, exported from
>> netapp, but unfortunately found out, that
>>  it's not working, after some investigation, I've found, that NFS
>> implementation(even 4.1) in Centos 7
>>  doesn't support sparse files and fallocate(FALLOC_FL_PUNCH_HOLE),
>> that quemu uses for file storage, it was
>>  added only in kernel 3.18, and sparse files is also announced feature of 
>> upcoming NFS4.2,
>>  sparsify also not working on this data domains(runs, but nothing happens).
>>
>>  This test also shows, that FALLOC_FL_PUNCH_HOLE not working, it
>> was executed on centos ovirt host with mounted nfs share:
>>  # truncate -s 1024 test1
>>  # fallocate -p -o 0 -l 1024 test1
>>  fallocate: keep size mode (-n option) unsupported
>>
>>  Is there any plans to backport this feature to node-ng, or centos? or we 
>> should wait for RHEL 8?
>
>
>
>
>> Interesting, I was under the impression it was fixed some time ago,
>> for 7.2[1] (kernel-3.10.0-313.el7)
>> Perhaps you are not mounted with 4.2?
>
>
>> Y.
>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1079385
>>
>>  NFS is more and more popular, so discards is VERY useful feature.
>>  I'm also planning to test fallocate on latest fedora with 4.x kernel and 
>> mounted nfs.
>>
>>  Thanks for your work!
>>
>>  --
>>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-03 Thread Simone Tiraboschi
I see an ERROR there on stopMonitoringDomain, but I cannot see the
corresponding startMonitoringDomain; could you please look for it?

On Fri, Feb 3, 2017 at 1:16 PM, Ralf Schenk  wrote:

> Hello,
>
> attached is my vdsm.log from the host running hosted-engine HA, around the
> time frame of the agent timeout. The HA agent is not working for the engine
> anymore (the host works in oVirt and is active); it simply stopped working
> for engine-ha after the update.
>
> At 2017-02-02 19:25:34,248 you'll find an error corresponding to the agent
> timeout error.
>
> Bye
>
>
>
> On 03.02.2017 at 11:28, Simone Tiraboschi wrote:
>
> 3. Three of my hosts have the hosted engine deployed for ha. First all
>>> three where marked by a crown (running was gold and others where silver).
>>> After upgrading the 3 Host deployed hosted engine ha is not active anymore.
>>>
>>> I can't get this host back with working ovirt-ha-agent/broker. I already
>>> rebooted, manually restarted the services but It isn't able to get cluster
>>> state according to
>>> "hosted-engine --vm-status". The other hosts state the host status as
>>> "unknown stale-data"
>>>
>>> I already shut down all agents on all hosts and issued a "hosted-engine
>>> --reinitialize-lockspace" but that didn't help.
>>>
>>> Agents stops working after a timeout-error according to log:
>>>
>>> MainThread::INFO::2017-02-02 19:24:52,040::hosted_engine::8
>>> 41::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
>>> VDSM domain monitor status: PENDING
>>> MainThread::INFO::2017-02-02 19:24:59,185::hosted_engine::8
>>> 41::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
>>> VDSM domain monitor status: PENDING
>>> MainThread::INFO::2017-02-02 19:25:06,333::hosted_engine::8
>>> 41::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
>>> VDSM domain monitor status: PENDING
>>> MainThread::INFO::2017-02-02 19:25:13,554::hosted_engine::8
>>> 41::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
>>> VDSM domain monitor status: PENDING
>>> MainThread::INFO::2017-02-02 19:25:20,710::hosted_engine::8
>>> 41::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
>>> VDSM domain monitor status: PENDING
>>> MainThread::INFO::2017-02-02 19:25:27,865::hosted_engine::8
>>> 41::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
>>> VDSM domain monitor status: PENDING
>>> MainThread::ERROR::2017-02-02 19:25:27,866::hosted_engine::8
>>> 15::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_domain_monitor)
>>> Failed to start monitoring domain 
>>> (sd_uuid=7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96,
>>> host_id=3): timeout during domain acquisition
>>> MainThread::WARNING::2017-02-02 19:25:27,866::hosted_engine::4
>>> 69::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
>>> Error while monitoring engine: Failed to start monitoring domain
>>> (sd_uuid=7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96, host_id=3): timeout
>>> during domain acquisition
>>> MainThread::WARNING::2017-02-02 19:25:27,866::hosted_engine::4
>>> 72::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
>>> Unexpected error
>>> Traceback (most recent call last):
>>>   File 
>>> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py",
>>> line 443, in start_monitoring
>>> self._initialize_domain_monitor()
>>>   File 
>>> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py",
>>> line 816, in _initialize_domain_monitor
>>> raise Exception(msg)
>>> Exception: Failed to start monitoring domain
>>> (sd_uuid=7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96, host_id=3): timeout
>>> during domain acquisition
>>> MainThread::ERROR::2017-02-02 19:25:27,866::hosted_engine::4
>>> 85::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
>>> Shutting down the agent because of 3 failures in a row!
>>> MainThread::INFO::2017-02-02 19:25:32,087::hosted_engine::8
>>> 41::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
>>> VDSM domain monitor status: PENDING
>>> MainThread::INFO::2017-02-02 19:25:34,250::hosted_engine::7
>>> 69::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_stop_domain_monitor)
>>> Failed to stop monitoring domain 
>>> (sd_uuid=7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96):
>>> Storage domain is member of pool: u'domain=7c8deaa8-be02-4aaf-b9
>>> b4-ddc8da99ad96'
>>> MainThread::INFO::2017-02-02 19:25:34,254::agent::143::ovir
>>> t_hosted_engine_ha.agent.agent.Agent::(run) Agent shutting down
>>>
>> Simone, Martin, can you please follow up on this?
>>
>
> Ralph, could you please attach vdsm logs from on of your hosts for the
> relevant time frame?
>
>
> --
>
>
> *Ralf Schenk*
> fon +49 (0) 24 05 / 40 83 70
> fax +49 (0) 24 05 / 40 83 759

Re: [ovirt-users] NFS and pass discards\unmap question

2017-02-03 Thread Sergey Kulikov


Unfortunately I can't browse this bug:
"You are not authorized to access bug #1079385."
Can you email me the details of this bug?
I think that's the reason I can't find this fix for RHEL/CentOS on Google )


-- 



 Friday, February 3, 2017, 14:45:43:





> On Thu, Feb 2, 2017 at 11:45 PM, Sergey Kulikov  wrote:


>  I've upgraded to 4.1 release, it have great feature "Pass
> discards", that now can be used without vdsm hooks,
>  After upgrade I've tested it with NFS 4.1 storage, exported from
> netapp, but unfortunately found out, that
>  it's not working, after some investigation, I've found, that NFS
> implementation(even 4.1) in Centos 7
>  doesn't support sparse files and fallocate(FALLOC_FL_PUNCH_HOLE),
> that quemu uses for file storage, it was
>  added only in kernel 3.18, and sparse files is also announced feature of 
> upcoming NFS4.2,
>  sparsify also not working on this data domains(runs, but nothing happens).
>  
>  This test also shows, that FALLOC_FL_PUNCH_HOLE not working, it
> was executed on centos ovirt host with mounted nfs share:
>  # truncate -s 1024 test1
>  # fallocate -p -o 0 -l 1024 test1
>  fallocate: keep size mode (-n option) unsupported
>  
>  Is there any plans to backport this feature to node-ng, or centos? or we 
> should wait for RHEL 8?




> Interesting, I was under the impression it was fixed some time ago,
> for 7.2[1] (kernel-3.10.0-313.el7)
> Perhaps you are not mounted with 4.2?


> Y.
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1079385
>  
>  NFS is more and more popular, so discards is VERY useful feature.
>  I'm also planning to test fallocate on latest fedora with 4.x kernel and 
> mounted nfs.
>  
>  Thanks for your work!
>  
>  --
>  

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] NFS and pass discards\unmap question

2017-02-03 Thread Sergey Kulikov


Hm... maybe I need to set some options. Is there any way to force oVirt to mount 
with this extension, or with version 4.2? There is only a 4.1 selection in the 
"New Domain" menu. 
Current mount options: 
type nfs4 
(rw,relatime,vers=4.1,rsize=65536,wsize=65536,namlen=255,soft,nosharecache,proto=tcp,port=0,timeo=600,retrans=6,sec=sys,local_lock=none)

Should it work only if the vers=4.2 option is forced? 
I thought it was implemented as a feature of the older version, not 4.2; there is 
little info about this. 


-- 



 Friday, February 3, 2017, 14:45:43:





> On Thu, Feb 2, 2017 at 11:45 PM, Sergey Kulikov  wrote:


>  I've upgraded to the 4.1 release; it has a great feature, "Pass
> discards", that can now be used without vdsm hooks.
>  After the upgrade I tested it with NFS 4.1 storage exported from
> netapp, but unfortunately found out that it's not working. After some
> investigation I found that the NFS implementation (even 4.1) in CentOS 7
>  doesn't support sparse files or fallocate(FALLOC_FL_PUNCH_HOLE),
> which qemu uses for file storage; it was added only in kernel 3.18, and
> sparse files are also an announced feature of the upcoming NFS 4.2.
>  Sparsify is also not working on these data domains (it runs, but nothing happens).
>  
>  This test also shows, that FALLOC_FL_PUNCH_HOLE not working, it
> was executed on centos ovirt host with mounted nfs share:
>  # truncate -s 1024 test1
>  # fallocate -p -o 0 -l 1024 test1
>  fallocate: keep size mode (-n option) unsupported
>  
>  Are there any plans to backport this feature to node-ng or CentOS, or
> should we wait for RHEL 8?




> Interesting, I was under the impression it was fixed some time ago,
> for 7.2[1] (kernel-3.10.0-313.el7)
> Perhaps you are not mounted with 4.2?


> Y.
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1079385
>  
>  NFS is more and more popular, so discards is VERY useful feature.
>  I'm also planning to test fallocate on latest fedora with 4.x kernel and 
> mounted nfs.
>  
>  Thanks for your work!
>  
>  --
>  
>  ___
>  Users mailing list
>  Users@ovirt.org
>  http://lists.ovirt.org/mailman/listinfo/users
>  

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Orphaned Export Domain

2017-02-03 Thread Maton, Brett
I forgot to cleanly detach the export domain from my previous cluster (which no
longer exists). How can I import the domain into a new cluster?

Any help appreciated
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-03 Thread Ralf Schenk
Hello,

attached is my vdsm.log from the host with hosted-engine-ha, around the
time frame of the agent timeout. The host is not working anymore for the
engine (it works in oVirt and is active); it simply isn't working for
engine-ha anymore after the update.

At 2017-02-02 19:25:34,248 you'll find an error corresponding to the agent
timeout error.

Bye



On 03.02.2017 at 11:28, Simone Tiraboschi wrote:
>
> 3. Three of my hosts have the hosted engine deployed for ha.
> First all three where marked by a crown (running was gold and
> others where silver). After upgrading the 3 Host deployed
> hosted engine ha is not active anymore.
>
> I can't get this host back with working ovirt-ha-agent/broker.
> I already rebooted, manually restarted the services but It
> isn't able to get cluster state according to
> "hosted-engine --vm-status". The other hosts state the host
> status as "unknown stale-data"
>
> I already shut down all agents on all hosts and issued a
> "hosted-engine --reinitialize-lockspace" but that didn't help.
>
> Agents stops working after a timeout-error according to log:
>
> MainThread::INFO::2017-02-02
> 
> 19:24:52,040::hosted_engine::841::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
> VDSM domain monitor status: PENDING
> MainThread::INFO::2017-02-02
> 
> 19:24:59,185::hosted_engine::841::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
> VDSM domain monitor status: PENDING
> MainThread::INFO::2017-02-02
> 
> 19:25:06,333::hosted_engine::841::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
> VDSM domain monitor status: PENDING
> MainThread::INFO::2017-02-02
> 
> 19:25:13,554::hosted_engine::841::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
> VDSM domain monitor status: PENDING
> MainThread::INFO::2017-02-02
> 
> 19:25:20,710::hosted_engine::841::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
> VDSM domain monitor status: PENDING
> MainThread::INFO::2017-02-02
> 
> 19:25:27,865::hosted_engine::841::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
> VDSM domain monitor status: PENDING
> MainThread::ERROR::2017-02-02
> 
> 19:25:27,866::hosted_engine::815::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_domain_monitor)
> Failed to start monitoring domain
> (sd_uuid=7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96, host_id=3):
> timeout during domain acquisition
> MainThread::WARNING::2017-02-02
> 
> 19:25:27,866::hosted_engine::469::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Error while monitoring engine: Failed to start monitoring
> domain (sd_uuid=7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96,
> host_id=3): timeout during domain acquisition
> MainThread::WARNING::2017-02-02
> 
> 19:25:27,866::hosted_engine::472::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Unexpected error
> Traceback (most recent call last):
>   File
> 
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py",
> line 443, in start_monitoring
> self._initialize_domain_monitor()
>   File
> 
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py",
> line 816, in _initialize_domain_monitor
> raise Exception(msg)
> Exception: Failed to start monitoring domain
> (sd_uuid=7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96, host_id=3):
> timeout during domain acquisition
> MainThread::ERROR::2017-02-02
> 
> 19:25:27,866::hosted_engine::485::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Shutting down the agent because of 3 failures in a row!
> MainThread::INFO::2017-02-02
> 
> 19:25:32,087::hosted_engine::841::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
> VDSM domain monitor status: PENDING
> MainThread::INFO::2017-02-02
> 
> 19:25:34,250::hosted_engine::769::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_stop_domain_monitor)
> Failed to stop monitoring domain
> (sd_uuid=7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96): Storage domain
> is member of pool: u'domain=7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96'
> MainThread::INFO::2017-02-02
> 
> 19:25:34,254::agent::143::ovirt_hosted_engine_ha.agent.agent.Agent::(run)
> Agent shutting down
>
> Simone, Martin, can you please follow up on this?
>
>
> Ralp

Re: [ovirt-users] NFS and pass discards\unmap question

2017-02-03 Thread Yaniv Kaul
On Thu, Feb 2, 2017 at 11:45 PM, Sergey Kulikov  wrote:

>
> I've upgraded to the 4.1 release; it has a great feature, "Pass discards", that
> can now be used without vdsm hooks.
> After the upgrade I tested it with NFS 4.1 storage exported from netapp,
> but unfortunately found out that it's not working. After some investigation
> I found that the NFS implementation (even 4.1) in CentOS 7
> doesn't support sparse files or fallocate(FALLOC_FL_PUNCH_HOLE), which
> qemu uses for file storage; it was added only in kernel 3.18, and sparse
> files are also an announced feature of the upcoming NFS 4.2.
> Sparsify is also not working on these data domains (it runs, but nothing happens).
>
> This test also shows, that FALLOC_FL_PUNCH_HOLE not working, it was
> executed on centos ovirt host with mounted nfs share:
> # truncate -s 1024 test1
> # fallocate -p -o 0 -l 1024 test1
> fallocate: keep size mode (-n option) unsupported
>
> Are there any plans to backport this feature to node-ng or CentOS, or
> should we wait for RHEL 8?
>

Interesting, I was under the impression it was fixed some time ago, for
7.2[1] (kernel-3.10.0-313.el7)
Perhaps you are not mounted with 4.2?

Y.
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1079385
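
To double-check what was actually negotiated on the host, something like this
is enough (a minimal sketch; nfsstat comes with the nfs-utils package):

# nfsstat -m
# grep ' nfs4 ' /proc/mounts | grep -o 'vers=[0-9.]*'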


> NFS is more and more popular, so discards is VERY useful feature.
> I'm also planning to test fallocate on latest fedora with 4.x kernel and
> mounted nfs.
>
> Thanks for your work!
>
> --
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] oVirt 4.1 Release rpm - access forbidden

2017-02-03 Thread Kai Wagner
Hi,

I tried to install and setup oVirt 4.1 but after I tried

yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release41.rpm

without any success, I opened the direct link and got a "Forbidden ->
You don't have permissions" error message.

Thanks


-- 
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 
(AG Nürnberg)



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-03 Thread Francesco Romani


On 02/03/2017 10:54 AM, Ralf Schenk wrote:
>
> Hello,
>
> I upgraded my cluster of 8 hosts with gluster storage and
> hosted-engine-ha. They were already Centos 7.3 and using Ovirt 4.0.6
> and gluster 3.7.x packages from storage-sig testing.
>
> I'm missing the storage listed under storage tab but this is already
> filed by a bug. Increasing Cluster and Storage Compability level and
> also "reset emulated machine" after having upgraded one host after
> another without the need to shutdown vm's works well. (VM's get sign
> that there will be changes after reboot).
>
> Important: you also have to issue a yum update on the host for
> upgrading additional components like i.e. gluster to 3.8.x. I was
> frightened of this step but It worked well except a configuration
> issue I was responsible for in gluster.vol (I had "transport socket,
> rdma")
>
> Bugs/Quirks so far:
>
> 1. After restarting a single VM that used RNG-Device I got an error
> (it was german) but like "RNG Device not supported by cluster". I hat
> to disable RNG Device save the settings. Again settings and enable RNG
> Device. Then machine boots up.
> I think there is a migration step missing from /dev/random to
> /dev/urandom for exisiting VM's.
>

Hi!
Sorry about this trouble. Please file a bug about this, we will likely
need some Vdsm + Engine fixes.


Bests,

-- 
Francesco Romani
Red Hat Engineering Virtualization R & D
IRC: fromani

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-03 Thread Ramesh Nachimuthu




- Original Message -
> From: "Ralf Schenk" 
> To: "Ramesh Nachimuthu" 
> Cc: users@ovirt.org
> Sent: Friday, February 3, 2017 4:19:02 PM
> Subject: Re: [ovirt-users] [Call for feedback] did you install/update to 
> 4.1.0?
> 
> Hello,
> 
> in reality my cluster is a hyper-converged cluster. But how do I tell
> this Ovirt Engine ? Of course I activated the checkbox "Gluster"
> (already some versions ago around 4.0.x) but that didn't change anything.
> 

Do you see any error/warning in the engine.log?

Regards,
Ramesh

> Bye
> On 03.02.2017 at 11:18, Ramesh Nachimuthu wrote:
> >> 2. I'm missing any gluster specific management features as my gluster is
> >> not
> >> managable in any way from the GUI. I expected to see my gluster now in
> >> dashboard and be able to add volumes etc. What do I need to do to "import"
> >> my existing gluster (Only one volume so far) to be managable ?
> >>
> >>
> > If it is a hyperconverged cluster, then all your hosts are already managed
> > by ovirt. So you just need to enable 'Gluster Service' in the Cluster,
> > gluster volume will be imported automatically when you enable gluster
> > service.
> >
> > If it is not a hyperconverged cluster, then you have to create a new
> > cluster and enable only 'Gluster Service'. Then you can import or add the
> > gluster hosts to this Gluster cluster.
> >
> > You may also need to define a gluster network if you are using a separate
> > network for gluster data traffic. More at
> > http://www.ovirt.org/develop/release-management/features/network/select-network-for-gluster/
> >
> >
> >
> 
> --
> 
> 
> *Ralf Schenk*
> fon +49 (0) 24 05 / 40 83 70
> fax +49 (0) 24 05 / 40 83 759
> mail *r...@databay.de* 
>   
> *Databay AG*
> Jens-Otto-Krag-Straße 11
> D-52146 Würselen
> *www.databay.de* 
> 
> Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
> Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
> Philipp Hermanns
> Aufsichtsratsvorsitzender: Wilhelm Dohmen
> 
> 
> 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-03 Thread Ralf Schenk
Hello,

in reality my cluster is a hyper-converged cluster. But how do I tell oVirt
Engine this? Of course I activated the "Gluster" checkbox (already some
versions ago, around 4.0.x), but that didn't change anything.

Bye
On 03.02.2017 at 11:18, Ramesh Nachimuthu wrote:
>> 2. I'm missing any gluster specific management features as my gluster is not
>> managable in any way from the GUI. I expected to see my gluster now in
>> dashboard and be able to add volumes etc. What do I need to do to "import"
>> my existing gluster (Only one volume so far) to be managable ?
>>
>>
> If it is a hyperconverged cluster, then all your hosts are already managed by 
> ovirt. So you just need to enable 'Gluster Service' in the Cluster, gluster 
> volume will be imported automatically when you enable gluster service. 
>
> If it is not a hyperconverged cluster, then you have to create a new cluster 
> and enable only 'Gluster Service'. Then you can import or add the gluster 
> hosts to this Gluster cluster. 
>
> You may also need to define a gluster network if you are using a separate 
> network for gluster data traffic. More at 
> http://www.ovirt.org/develop/release-management/features/network/select-network-for-gluster/
>
>
>

-- 


*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *r...@databay.de* 

*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* 

Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Wilhelm Dohmen


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-03 Thread Simone Tiraboschi
On Fri, Feb 3, 2017 at 11:17 AM, Sandro Bonazzola 
wrote:

>
>
> On Fri, Feb 3, 2017 at 10:54 AM, Ralf Schenk  wrote:
>
>> Hello,
>>
>> I upgraded my cluster of 8 hosts with gluster storage and
>> hosted-engine-ha. They were already Centos 7.3 and using Ovirt 4.0.6 and
>> gluster 3.7.x packages from storage-sig testing.
>>
>> I'm missing the storage listed under storage tab but this is already
>> filed by a bug. Increasing Cluster and Storage Compability level and also
>> "reset emulated machine" after having upgraded one host after another
>> without the need to shutdown vm's works well. (VM's get sign that there
>> will be changes after reboot).
>>
>> Important: you also have to issue a yum update on the host for upgrading
>> additional components like i.e. gluster to 3.8.x. I was frightened of this
>> step but It worked well except a configuration issue I was responsible for
>> in gluster.vol (I had "transport socket, rdma")
>>
>> Bugs/Quirks so far:
>>
>> 1. After restarting a single VM that used RNG-Device I got an error (it
>> was german) but like "RNG Device not supported by cluster". I hat to
>> disable RNG Device save the settings. Again settings and enable RNG Device.
>> Then machine boots up.
>> I think there is a migration step missing from /dev/random to
>> /dev/urandom for exisiting VM's.
>>
>
> Tomas, Francesco, Michal, can you please follow up on this?
>
>
>
>> 2. I'm missing any gluster specific management features as my gluster is
>> not managable in any way from the GUI. I expected to see my gluster now in
>> dashboard and be able to add volumes etc. What do I need to do to "import"
>> my existing gluster (Only one volume so far) to be managable ?
>>
>
> Sahina, can you please follow up on this?
>
>
>> 3. Three of my hosts have the hosted engine deployed for ha. First all
>> three where marked by a crown (running was gold and others where silver).
>> After upgrading the 3 Host deployed hosted engine ha is not active anymore.
>>
>> I can't get this host back with working ovirt-ha-agent/broker. I already
>> rebooted, manually restarted the services but It isn't able to get cluster
>> state according to
>> "hosted-engine --vm-status". The other hosts state the host status as
>> "unknown stale-data"
>>
>> I already shut down all agents on all hosts and issued a "hosted-engine
>> --reinitialize-lockspace" but that didn't help.
>>
>> Agents stops working after a timeout-error according to log:
>>
>> MainThread::INFO::2017-02-02 19:24:52,040::hosted_engine::8
>> 41::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
>> VDSM domain monitor status: PENDING
>> MainThread::INFO::2017-02-02 19:24:59,185::hosted_engine::8
>> 41::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
>> VDSM domain monitor status: PENDING
>> MainThread::INFO::2017-02-02 19:25:06,333::hosted_engine::8
>> 41::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
>> VDSM domain monitor status: PENDING
>> MainThread::INFO::2017-02-02 19:25:13,554::hosted_engine::8
>> 41::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
>> VDSM domain monitor status: PENDING
>> MainThread::INFO::2017-02-02 19:25:20,710::hosted_engine::8
>> 41::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
>> VDSM domain monitor status: PENDING
>> MainThread::INFO::2017-02-02 19:25:27,865::hosted_engine::8
>> 41::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
>> VDSM domain monitor status: PENDING
>> MainThread::ERROR::2017-02-02 19:25:27,866::hosted_engine::8
>> 15::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_domain_monitor)
>> Failed to start monitoring domain 
>> (sd_uuid=7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96,
>> host_id=3): timeout during domain acquisition
>> MainThread::WARNING::2017-02-02 19:25:27,866::hosted_engine::4
>> 69::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
>> Error while monitoring engine: Failed to start monitoring domain
>> (sd_uuid=7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96, host_id=3): timeout
>> during domain acquisition
>> MainThread::WARNING::2017-02-02 19:25:27,866::hosted_engine::4
>> 72::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
>> Unexpected error
>> Traceback (most recent call last):
>>   File 
>> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py",
>> line 443, in start_monitoring
>> self._initialize_domain_monitor()
>>   File 
>> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py",
>> line 816, in _initialize_domain_monitor
>> raise Exception(msg)
>> Exception: Failed to start monitoring domain
>> (sd_uuid=7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96, host_id=3): timeout
>> during domain acquisition
>> MainThread::ERROR::2017-02-02 19:25:27,866::hosted_engine::4
>> 85::ovir

Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-03 Thread Ramesh Nachimuthu




- Original Message -
> From: "Ralf Schenk" 
> To: users@ovirt.org
> Sent: Friday, February 3, 2017 3:24:55 PM
> Subject: Re: [ovirt-users] [Call for feedback] did you install/update to 
> 4.1.0?
> 
> 
> 
> Hello,
> 
> I upgraded my cluster of 8 hosts with gluster storage and hosted-engine-ha.
> They were already Centos 7.3 and using Ovirt 4.0.6 and gluster 3.7.x
> packages from storage-sig testing.
> 
> 
> I'm missing the storage listed under storage tab but this is already filed by
> a bug. Increasing Cluster and Storage Compability level and also "reset
> emulated machine" after having upgraded one host after another without the
> need to shutdown vm's works well. (VM's get sign that there will be changes
> after reboot).
> 
> Important: you also have to issue a yum update on the host for upgrading
> additional components like i.e. gluster to 3.8.x. I was frightened of this
> step but It worked well except a configuration issue I was responsible for
> in gluster.vol (I had "transport socket, rdma")
> 
> 
> Bugs/Quirks so far:
> 
> 
> 1. After restarting a single VM that used RNG-Device I got an error (it was
> german) but like "RNG Device not supported by cluster". I hat to disable RNG
> Device save the settings. Again settings and enable RNG Device. Then machine
> boots up.
> I think there is a migration step missing from /dev/random to /dev/urandom
> for exisiting VM's.
> 
> 2. I'm missing any gluster specific management features as my gluster is not
> managable in any way from the GUI. I expected to see my gluster now in
> dashboard and be able to add volumes etc. What do I need to do to "import"
> my existing gluster (Only one volume so far) to be managable ?
> 
> 

If it is a hyperconverged cluster, then all your hosts are already managed by 
oVirt. So you just need to enable 'Gluster Service' in the Cluster; the gluster 
volume will be imported automatically when you enable the gluster service. 

If it is not a hyperconverged cluster, then you have to create a new cluster 
and enable only 'Gluster Service'. Then you can import or add the gluster hosts 
to this Gluster cluster. 

You may also need to define a gluster network if you are using a separate 
network for gluster data traffic. More at 
http://www.ovirt.org/develop/release-management/features/network/select-network-for-gluster/
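
If the cluster checkbox doesn't seem to take effect, the same flag can also be
set through the REST API (a hedged sketch, not the only way to do it; the engine
host, credentials and cluster UUID are placeholders, and gluster_service is what
I believe the cluster attribute is called in the v4 API):

curl -k -u admin@internal:PASSWORD -X PUT \
  -H "Content-Type: application/xml" \
  -d '<cluster><gluster_service>true</gluster_service></cluster>' \
  https://engine.example.com/ovirt-engine/api/clusters/CLUSTER_UUID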



> 3. Three of my hosts have the hosted engine deployed for ha. First all three
> where marked by a crown (running was gold and others where silver). After
> upgrading the 3 Host deployed hosted engine ha is not active anymore.
> 
> I can't get this host back with working ovirt-ha-agent/broker. I already
> rebooted, manually restarted the services but It isn't able to get cluster
> state according to
> "hosted-engine --vm-status". The other hosts state the host status as
> "unknown stale-data"
> 
> I already shut down all agents on all hosts and issued a "hosted-engine
> --reinitialize-lockspace" but that didn't help.
> 
> 
> Agents stops working after a timeout-error according to log:
> 
> MainThread::INFO::2017-02-02
> 19:24:52,040::hosted_engine::841::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
> VDSM domain monitor status: PENDING
> MainThread::INFO::2017-02-02
> 19:24:59,185::hosted_engine::841::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
> VDSM domain monitor status: PENDING
> MainThread::INFO::2017-02-02
> 19:25:06,333::hosted_engine::841::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
> VDSM domain monitor status: PENDING
> MainThread::INFO::2017-02-02
> 19:25:13,554::hosted_engine::841::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
> VDSM domain monitor status: PENDING
> MainThread::INFO::2017-02-02
> 19:25:20,710::hosted_engine::841::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
> VDSM domain monitor status: PENDING
> MainThread::INFO::2017-02-02
> 19:25:27,865::hosted_engine::841::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
> VDSM domain monitor status: PENDING
> MainThread::ERROR::2017-02-02
> 19:25:27,866::hosted_engine::815::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_domain_monitor)
> Failed to start monitoring domain
> (sd_uuid=7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96, host_id=3): timeout during
> domain acquisition
> MainThread::WARNING::2017-02-02
> 19:25:27,866::hosted_engine::469::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Error while monitoring engine: Failed to start monitoring domain
> (sd_uuid=7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96, host_id=3): timeout during
> domain acquisition
> MainThread::WARNING::2017-02-02
> 19:25:27,866::hosted_engine::472::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Unexpected error
> Traceback (most recent call last):
> File
> "/usr/

Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-03 Thread Sandro Bonazzola
On Fri, Feb 3, 2017 at 10:54 AM, Ralf Schenk  wrote:

> Hello,
>
> I upgraded my cluster of 8 hosts with gluster storage and
> hosted-engine-ha. They were already Centos 7.3 and using Ovirt 4.0.6 and
> gluster 3.7.x packages from storage-sig testing.
>
> I'm missing the storage listed under storage tab but this is already filed
> by a bug. Increasing Cluster and Storage Compability level and also "reset
> emulated machine" after having upgraded one host after another without the
> need to shutdown vm's works well. (VM's get sign that there will be changes
> after reboot).
>
> Important: you also have to issue a yum update on the host for upgrading
> additional components like i.e. gluster to 3.8.x. I was frightened of this
> step but It worked well except a configuration issue I was responsible for
> in gluster.vol (I had "transport socket, rdma")
>
> Bugs/Quirks so far:
>
> 1. After restarting a single VM that used RNG-Device I got an error (it
> was german) but like "RNG Device not supported by cluster". I hat to
> disable RNG Device save the settings. Again settings and enable RNG Device.
> Then machine boots up.
> I think there is a migration step missing from /dev/random to /dev/urandom
> for exisiting VM's.
>

Tomas, Francesco, Michal, can you please follow up on this?



> 2. I'm missing any gluster specific management features as my gluster is
> not managable in any way from the GUI. I expected to see my gluster now in
> dashboard and be able to add volumes etc. What do I need to do to "import"
> my existing gluster (Only one volume so far) to be managable ?
>

Sahina, can you please follow up on this?


> 3. Three of my hosts have the hosted engine deployed for ha. First all
> three where marked by a crown (running was gold and others where silver).
> After upgrading the 3 Host deployed hosted engine ha is not active anymore.
>
> I can't get this host back with working ovirt-ha-agent/broker. I already
> rebooted, manually restarted the services but It isn't able to get cluster
> state according to
> "hosted-engine --vm-status". The other hosts state the host status as
> "unknown stale-data"
>
> I already shut down all agents on all hosts and issued a "hosted-engine
> --reinitialize-lockspace" but that didn't help.
>
> Agents stops working after a timeout-error according to log:
>
> MainThread::INFO::2017-02-02 19:24:52,040::hosted_engine::
> 841::ovirt_hosted_engine_ha.agent.hosted_engine.
> HostedEngine::(_get_domain_monitor_status) VDSM domain monitor status:
> PENDING
> MainThread::INFO::2017-02-02 19:24:59,185::hosted_engine::
> 841::ovirt_hosted_engine_ha.agent.hosted_engine.
> HostedEngine::(_get_domain_monitor_status) VDSM domain monitor status:
> PENDING
> MainThread::INFO::2017-02-02 19:25:06,333::hosted_engine::
> 841::ovirt_hosted_engine_ha.agent.hosted_engine.
> HostedEngine::(_get_domain_monitor_status) VDSM domain monitor status:
> PENDING
> MainThread::INFO::2017-02-02 19:25:13,554::hosted_engine::
> 841::ovirt_hosted_engine_ha.agent.hosted_engine.
> HostedEngine::(_get_domain_monitor_status) VDSM domain monitor status:
> PENDING
> MainThread::INFO::2017-02-02 19:25:20,710::hosted_engine::
> 841::ovirt_hosted_engine_ha.agent.hosted_engine.
> HostedEngine::(_get_domain_monitor_status) VDSM domain monitor status:
> PENDING
> MainThread::INFO::2017-02-02 19:25:27,865::hosted_engine::
> 841::ovirt_hosted_engine_ha.agent.hosted_engine.
> HostedEngine::(_get_domain_monitor_status) VDSM domain monitor status:
> PENDING
> MainThread::ERROR::2017-02-02 19:25:27,866::hosted_engine::
> 815::ovirt_hosted_engine_ha.agent.hosted_engine.
> HostedEngine::(_initialize_domain_monitor) Failed to start monitoring
> domain (sd_uuid=7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96, host_id=3): timeout
> during domain acquisition
> MainThread::WARNING::2017-02-02 19:25:27,866::hosted_engine::
> 469::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Error while monitoring engine: Failed to start monitoring domain
> (sd_uuid=7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96, host_id=3): timeout during
> domain acquisition
> MainThread::WARNING::2017-02-02 19:25:27,866::hosted_engine::
> 472::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Unexpected error
> Traceback (most recent call last):
>   File 
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py",
> line 443, in start_monitoring
> self._initialize_domain_monitor()
>   File 
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py",
> line 816, in _initialize_domain_monitor
> raise Exception(msg)
> Exception: Failed to start monitoring domain 
> (sd_uuid=7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96,
> host_id=3): timeout during domain acquisition
> MainThread::ERROR::2017-02-02 19:25:27,866::hosted_engine::
> 485::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Shutting down the agent because of 3 failures in a row!
> MainThread::INFO:

[ovirt-users] [ANN] ovirt-engine async release for oVirt 4.1.0

2017-02-03 Thread Sandro Bonazzola
On February 3rd 2017 the oVirt team issued an async release of ovirt-engine
package including a fix for a single bug:
https://bugzilla.redhat.com/1417597 - Failed to update template

Anyone who has already updated to oVirt 4.1.0 will need to update ovirt-engine in
order to be able to edit templates.

Thanks,
-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-03 Thread Ralf Schenk
Hello,

I upgraded my cluster of 8 hosts with gluster storage and
hosted-engine-ha. They were already Centos 7.3 and using Ovirt 4.0.6 and
gluster 3.7.x packages from storage-sig testing.

I'm missing the storage listed under the storage tab, but this is already
covered by a filed bug. Increasing the Cluster and Storage Compatibility level
and also "reset emulated machine" after having upgraded one host after
another, without the need to shut down VMs, works well. (VMs get a sign
that there will be changes after reboot.)

Important: you also have to issue a yum update on the host to upgrade
additional components, e.g. gluster to 3.8.x. I was frightened of
this step but it worked well, except for a configuration issue I was
responsible for in gluster.vol (I had "transport socket, rdma").
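
In practice the per-host step boils down to this (a sketch; one host at a time,
with the Maintenance/Activate steps done from the Admin Portal):

# yum update
# reboot

then Activate the host again and move on to the next one. The reboot is an
assumption on my side, just to make sure the new kernel and gluster bits are
actually in use.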

Bugs/Quirks so far:

1. After restarting a single VM that used an RNG device I got an error (it
was in German) along the lines of "RNG Device not supported by cluster". I had
to disable the RNG device and save the settings, then open the settings again
and enable the RNG device. Then the machine boots up.
I think there is a migration step missing from /dev/random to
/dev/urandom for existing VMs.

2. I'm missing any gluster-specific management features, as my gluster is
not manageable in any way from the GUI. I expected to see my gluster now
in the dashboard and be able to add volumes etc. What do I need to do to
"import" my existing gluster (only one volume so far) to make it manageable?

3. Three of my hosts have the hosted engine deployed for HA. At first all
three were marked by a crown (the running one was gold and the others were
silver). After upgrading, hosted-engine HA is not active anymore on the 3
hosts it is deployed on.

I can't get this host back with a working ovirt-ha-agent/broker. I already
rebooted and manually restarted the services, but it isn't able to get the
cluster state according to
"hosted-engine --vm-status". The other hosts report this host's status as
"unknown stale-data".

I already shut down all agents on all hosts and issued a "hosted-engine
--reinitialize-lockspace" but that didn't help.

Agents stop working after a timeout error, according to the log:

MainThread::INFO::2017-02-02
19:24:52,040::hosted_engine::841::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
VDSM domain monitor status: PENDING
MainThread::INFO::2017-02-02
19:24:59,185::hosted_engine::841::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
VDSM domain monitor status: PENDING
MainThread::INFO::2017-02-02
19:25:06,333::hosted_engine::841::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
VDSM domain monitor status: PENDING
MainThread::INFO::2017-02-02
19:25:13,554::hosted_engine::841::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
VDSM domain monitor status: PENDING
MainThread::INFO::2017-02-02
19:25:20,710::hosted_engine::841::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
VDSM domain monitor status: PENDING
MainThread::INFO::2017-02-02
19:25:27,865::hosted_engine::841::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
VDSM domain monitor status: PENDING
MainThread::ERROR::2017-02-02
19:25:27,866::hosted_engine::815::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_domain_monitor)
Failed to start monitoring domain
(sd_uuid=7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96, host_id=3): timeout
during domain acquisition
MainThread::WARNING::2017-02-02
19:25:27,866::hosted_engine::469::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Error while monitoring engine: Failed to start monitoring domain
(sd_uuid=7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96, host_id=3): timeout
during domain acquisition
MainThread::WARNING::2017-02-02
19:25:27,866::hosted_engine::472::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Unexpected error
Traceback (most recent call last):
  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py",
line 443, in start_monitoring
self._initialize_domain_monitor()
  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py",
line 816, in _initialize_domain_monitor
raise Exception(msg)
Exception: Failed to start monitoring domain
(sd_uuid=7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96, host_id=3): timeout
during domain acquisition
MainThread::ERROR::2017-02-02
19:25:27,866::hosted_engine::485::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Shutting down the agent because of 3 failures in a row!
MainThread::INFO::2017-02-02
19:25:32,087::hosted_engine::841::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
VDSM domain monitor status: PENDING
MainThread::INFO::2017-02-02
19:25:34,250::hosted_engine::769::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_stop_domain_monitor)
Failed to stop monitoring domain
(sd_uuid=7c8deaa8-be02-4aa

Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-03 Thread Sergey Kulikov
On Thu, Feb 2, 2017 at 9:59 PM,  wrote:


Updated from 4.0.6.
The docs are quite incomplete; they don't mention that ovirt-release41 has to be installed manually on CentOS hypervisors and ovirt-nodes, you need to guess.
Also, links in the release notes are broken ( https://www.ovirt.org/release/4.1.0/ ).
They point to https://www.ovirt.org/release/4.1.0/Hosted_Engine_Howto , but the docs for 4.1.0 are absent.


Thanks, opened https://github.com/oVirt/ovirt-site/issues/765
I'd like to ask if you can push your suggestions on documentation fixes / improvements by editing the website, following the "Edit this page on GitHub" link at the bottom of the page.
Any help getting the documentation updated and more useful to users is really appreciated.




Sure, thanks for pointing to that feature, you've already done this work for me)
I'll use github for any new suggestions.









Upgrade went well, everything migrated without problems (I only need to restart VMs to change the cluster level to 4.1).
Good news: the SPICE HTML 5 client is now working for me on a Windows client with Firefox; in earlier 4.x versions it was sending connect requests forever.

There are some bugs I've found while playing with the new version:
1) some storage tabs display "No items to display".
For example:
if I expand System\Data Centers\[dc name]\ and select Storage, it displays nothing in the main tab, but displays all domains in the tree;
if I select [dc name] and the Storage tab, also nothing;
but in the System\Storage tab all domains are present,
and in the Clusters\[cluster name]\Storage tab they are present as well.

Thanks, opened https://bugzilla.redhat.com/show_bug.cgi?id=1418924

 

2) links to embedded files and clients aren't working; the engine says 404. Examples:
https://[your manager's address]/ovirt-engine/services/files/spice/usbdk-x64.msi
https://[your manager's address]/ovirt-engine/services/files/spice/virt-viewer-x64.msi
and others,
but they are in the docs (in oVirt and also in RHEL).


Thanks, opened https://bugzilla.redhat.com/show_bug.cgi?id=1418923

 

3) there is also a link in the "Console options" menu (right-click on a VM) called "Console Client Resources"; it goes to a dead location:
http://www.ovirt.org/documentation/admin-guide/virt/console-client-resources 
If you are going to fix issue #2, maybe also adding links directly to the embedded installation files would be more helpful for users.


Thanks, opened https://bugzilla.redhat.com/show_bug.cgi?id=1418921

 
4) a little disappointed about "pass discards" on NFS storage, as I've found the NFS implementation (even 4.1) in CentOS 7 doesn't support
fallocate(FALLOC_FL_PUNCH_HOLE), which qemu uses for file storage; it was added only in kernel 3.18. Sparsify is also not working, but I'll mail a separate
thread with this question.

-- 



Thursday, February 2, 2017, 15:19:29:





Hi,
did you install/update to 4.1.0? Let us know your experience!
We end up knowing only when things doesn't work well, let us know it works fine for you :-)

If you're not planning an update to 4.1.0 in the near future, let us know why.
Maybe we can help.

Thanks!
-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com





-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-03 Thread Sandro Bonazzola
On Fri, Feb 3, 2017 at 9:14 AM, Yura Poltoratskiy 
wrote:

> I've done an upgrade of ovirt-engine tomorrow. There were two problems.
>
> The first - packages from epel repo, solved by disable repo and downgrade
> package to an existing version in ovirt-release40 repo (yes, there is info
> in documentation about epel repo).
>
> The second (and it is not only for current version) - run the engine-setup
> always not complete successfully because cat not start
> ovirt-engine-notifier.service after upgrade, and the error in notifier is
> that there is no MAIL_SERVER. Every time I am upgrading engine I have the
> same error. Than I add MAIL_SERVER=127.0.0.1 to /usr/share/ovirt-engine/
> services/ovirt-engine-notifier/ovirt-engine-notifier.conf and start
> notifier without problem. Is it my mistake?
>

Adding Martin Perina, he may be able to assist you on this.



> And one more question. In Events tab I can see "User vasya@internal
> logged out.", but there are no message that 'vasya' logged in. Could
> someone tell me how to debug this issue?
>

Martin can probably help as well here, adding also Greg and Alexander.




>
> 02.02.2017 14:19, Sandro Bonazzola wrote:
>
> Hi,
> did you install/update to 4.1.0? Let us know your experience!
> We end up knowing only when things doesn't work well, let us know it works
> fine for you :-)
>
> If you're not planning an update to 4.1.0 in the near future, let us know
> why.
> Maybe we can help.
>
> Thanks!
> --
> Sandro Bonazzola
> Better technology. Faster innovation. Powered by community collaboration.
> See how it works at redhat.com
>
>
> ___
> Users mailing listUsers@ovirt.orghttp://lists.ovirt.org/mailman/listinfo/users
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-03 Thread Yura Poltoratskiy

I did an upgrade of ovirt-engine yesterday. There were two problems.

The first - packages from the epel repo; solved by disabling the repo and 
downgrading the package to an existing version in the ovirt-release40 repo (yes, 
there is info in the documentation about the epel repo).
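
In command form this was basically the following (a sketch; the package name is
a placeholder, and yum-config-manager comes from yum-utils):

# yum-config-manager --disable epel
# yum downgrade <conflicting-package>
# yum-config-manager --enable epel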


The second (and it is not only for the current version) - engine-setup never 
completes successfully because it cannot start ovirt-engine-notifier.service 
after the upgrade, and the error in the notifier is that there is no 
MAIL_SERVER. Every time I upgrade the engine I get the same error. Then I add 
MAIL_SERVER=127.0.0.1 to 
/usr/share/ovirt-engine/services/ovirt-engine-notifier/ovirt-engine-notifier.conf 
and start the notifier without problem. Is it my mistake?
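
A drop-in override would probably survive upgrades better than editing the file
under /usr/share (a sketch, assuming the notifier reads the usual
/etc/ovirt-engine/notifier/notifier.conf.d directory):

# cat /etc/ovirt-engine/notifier/notifier.conf.d/99-mail.conf
MAIL_SERVER=127.0.0.1
# systemctl restart ovirt-engine-notifier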


And one more question. In the Events tab I can see "User vasya@internal 
logged out.", but there is no message that 'vasya' logged in. Could 
someone tell me how to debug this issue?



02.02.2017 14:19, Sandro Bonazzola wrote:

Hi,
did you install/update to 4.1.0? Let us know your experience!
We end up knowing only when things doesn't work well, let us know it 
works fine for you :-)


If you're not planning an update to 4.1.0 in the near future, let us 
know why.

Maybe we can help.

Thanks!
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com 


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users