Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-06 Thread Ralf Schenk
Yes, but neither is working...


Am 06.02.2017 um 13:33 schrieb Simone Tiraboschi:
> On Mon, Feb 6, 2017 at 12:42 PM, Ralf Schenk  > wrote:
>
> Hello,
>
> I set the host to maintenance mode and tried to undeploy the engine
> via the GUI. The action in the GUI doesn't show an error, but afterwards it
> still shows only "Undeploy" on the hosted-engine tab of the host.
>
> Even removing the host from the cluster doesn't work because the
> GUI says "The hosts marked with * still have hosted engine
> deployed on them. Hosted engine should be undeployed before they
> are removed"
>
> Yes, sorry: it's now a two-step process: you first have to undeploy
> hosted-engine from the host, and only then can you remove the host.
>

-- 


*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *r...@databay.de* 

*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* 

Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Wilhelm Dohmen


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-06 Thread Simone Tiraboschi
On Mon, Feb 6, 2017 at 12:42 PM, Ralf Schenk  wrote:

> Hello,
>
> I set the host to maintenance mode and tried to undeploy the engine via the GUI.
> The action in the GUI doesn't show an error, but afterwards it still shows only
> "Undeploy" on the hosted-engine tab of the host.
>
> Even removing the host from the cluster doesn't work because the GUI says
> "The hosts marked with * still have hosted engine deployed on them. Hosted
> engine should be undeployed before they are removed"
>
Yes, sorry: it's now a two-step process: you first have to undeploy
hosted-engine from the host, and only then can you remove the host.



> Bye
> Am 06.02.2017 um 11:44 schrieb Simone Tiraboschi:
>
>
>
> On Sat, Feb 4, 2017 at 11:52 AM, Ralf Schenk  wrote:
>
>> Hello,
>>
>> I have set up 3 hosts for the engine; 2 of them are working correctly. There is
>> no other host even having broker/agent installed. Is it possible that the
>> error occurs because the hosts are multihomed (management IP, IP for
>> storage) and can communicate via different IPs?
>>
> Having multiple logical networks for storage, management and so on is a
> good practice and it's advised, so I tend to exclude any issue there.
> The point is why your microcloud27.sub.mydomain.de fails acquiring a lock
> as host 3.
> Probably the simplest fix is just setting it in maintenance mode from the
> engine, removing it and deploying it from the engine as a hosted engine
> host again.
>
>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-06 Thread cmc
Hi Sandro,

I upgraded my 2 host setup + engine (engine is currently on separate
hardware, but I plan to make it self-hosted), and it went like
clockwork. My engine + hosts were running 4.0.5 and 7.2, so after
installing 4.1 release, I did an OS update to 7.3 first, starting with
the engine, then ran engine-setup. I opted to do a 'yum upgrade' on
the the first host, which actually updated all the ovirt packages as
well and rebooted (I'm not sure this is an approved method, but it
worked fine). After the first host was back, I upgraded the second
host from the GUI, but then I ran a yum upgrade to update all the OS
stuff, such as the kernel, libc etc, and rebooted.

Many thanks for making the upgrade process so smooth!

Cheers,

Cam

On Thu, Feb 2, 2017 at 12:19 PM, Sandro Bonazzola  wrote:
> Hi,
> did you install/update to 4.1.0? Let us know your experience!
> We end up hearing about it only when things don't work well, so let us know
> if it works fine for you :-)
>
> If you're not planning an update to 4.1.0 in the near future, let us know
> why.
> Maybe we can help.
>
> Thanks!
> --
> Sandro Bonazzola
> Better technology. Faster innovation. Powered by community collaboration.
> See how it works at redhat.com
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-06 Thread Ralf Schenk
Hello,

I set the host to maintenance mode and tried to undeploy the engine via the GUI.
The action in the GUI doesn't show an error, but afterwards it still shows
only "Undeploy" on the hosted-engine tab of the host.

Even removing the host from the cluster doesn't work because the GUI
says "The hosts marked with * still have hosted engine deployed on
them. Hosted engine should be undeployed before they are removed"

Bye
Am 06.02.2017 um 11:44 schrieb Simone Tiraboschi:
>
>
> On Sat, Feb 4, 2017 at 11:52 AM, Ralf Schenk  > wrote:
>
> Hello,
>
> I have set up 3 hosts for the engine; 2 of them are working correctly.
> There is no other host even having broker/agent installed. Is it
> possible that the error occurs because the hosts are multihomed
> (management IP, IP for storage) and can communicate via different
> IPs?
>
> Having multiple logical networks for storage, management and so on is
> a good practice and it's advised, so I tend to exclude any issue there.
> The point is why your microcloud27.sub.mydomain.de
> fails acquiring a lock as host 3.
> Probably the simplest fix is just setting it in maintenance mode from
> the engine, removing it and deploying it from the engine as a hosted
> engine host again.
>
>  

-- 


*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *r...@databay.de* 

*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* 

Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Wilhelm Dohmen


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-06 Thread Simone Tiraboschi
On Sat, Feb 4, 2017 at 11:52 AM, Ralf Schenk  wrote:

> Hello,
>
> I have set up 3 hosts for the engine; 2 of them are working correctly. There is
> no other host even having broker/agent installed. Is it possible that the
> error occurs because the hosts are multihomed (management IP, IP for
> storage) and can communicate via different IPs?
>
Having multiple logical networks for storage, management and so on is a
good practice and it's advised, so I tend to exclude any issue there.
The point is why your microcloud27.sub.mydomain.de fails acquiring a lock
as host 3.
Probably the simplest fix is just setting it in maintenance mode from the
engine, removing it and deploying it from the engine as a hosted engine
host again.



> hosted-engine --vm-status on both working hosts seems correct: (3 is out
> of order...)
>
> [root@microcloud21 ~]# hosted-engine --vm-status
>
>
> --== Host 1 status ==--
>
> conf_on_shared_storage : True
> Status up-to-date  : True
> Hostname   : microcloud21.sub.mydomain.de
> Host ID: 1
> Engine status  : {"health": "good", "vm": "up",
> "detail": "up"}
> Score  : 3400
> stopped: False
> Local maintenance  : False
> crc32  : 5941227d
> local_conf_timestamp   : 152316
> Host timestamp : 152302
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=152302 (Sat Feb  4 11:49:29 2017)
> host-id=1
> score=3400
> vm_conf_refresh_time=152316 (Sat Feb  4 11:49:43 2017)
> conf_on_shared_storage=True
> maintenance=False
> state=EngineUp
> stopped=False
>
>
> --== Host 2 status ==--
>
> conf_on_shared_storage : True
> Status up-to-date  : True
> Hostname   : microcloud24.sub.mydomain.de
> Host ID: 2
> Engine status  : {"reason": "vm not running on this
> host", "health": "bad", "vm": "down",
> "detail": "unknown"}
> Score  : 3400
> stopped: False
> Local maintenance  : False
> crc32  : 77e25433
> local_conf_timestamp   : 157637
> Host timestamp : 157623
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=157623 (Sat Feb  4 11:49:34 2017)
> host-id=2
> score=3400
> vm_conf_refresh_time=157637 (Sat Feb  4 11:49:48 2017)
> conf_on_shared_storage=True
> maintenance=False
> state=EngineDown
> stopped=False
>
>
> --== Host 3 status ==--
>
> conf_on_shared_storage : True
> Status up-to-date  : False
> Hostname   : microcloud27.sub.mydomain.de
> Host ID: 3
> Engine status  : unknown stale-data
> Score  : 0
> stopped: True
> Local maintenance  : False
> crc32  : 74798986
> local_conf_timestamp   : 77946
> Host timestamp : 77932
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=77932 (Fri Feb  3 15:19:25 2017)
> host-id=3
> score=0
> vm_conf_refresh_time=77946 (Fri Feb  3 15:19:39 2017)
> conf_on_shared_storage=True
> maintenance=False
> state=AgentStopped
> stopped=True
>
> Am 03.02.2017 um 19:20 schrieb Simone Tiraboschi:
>
>
>
> On Fri, Feb 3, 2017 at 5:22 PM, Ralf Schenk  wrote:
>
>> Hello,
>>
>> of course:
>>
>> [root@microcloud27 mnt]# sanlock client status
>> daemon 8a93c9ea-e242-408c-a63d-a9356bb22df5.microcloud
>> p -1 helper
>> p -1 listener
>> p -1 status
>>
>> sanlock.log attached. (Beginning 2017-01-27 where everything was fine)
>>
> Thanks, the issue is here:
>
> 2017-02-02 19:01:22+0100 4848 [1048]: s36 lockspace 
> 7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96:3:/rhev/data-center/mnt/glusterSD/glusterfs.sub.mydomain.de:_engine/7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96/dom_md/ids:0
> 2017-02-02 19:03:42+0100 4988 [12983]: s36 delta_acquire host_id 3 busy1 3 15 
> 13129 7ad427b1-fbb6-4cee-b9ee-01f596fddfbb.microcloud
> 2017-02-02 19:03:43+0100 4989 [1048]: s36 add_lockspace fail result -262
>
> Could you please check if you have other hosts contending for the same ID
> (id=3 in this case).
>
>
>

Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-05 Thread Arman Khalatyan
https://bugzilla.redhat.com/show_bug.cgi?id=1419352
Done.


***

 Dr. Arman Khalatyan  eScience -SuperComputing
 Leibniz-Institut für Astrophysik Potsdam (AIP)
 An der Sternwarte 16, 14482 Potsdam, Germany

***

On Sun, Feb 5, 2017 at 8:55 PM, Nir Soffer  wrote:

> On Sun, Feb 5, 2017 at 9:39 PM, Arman Khalatyan  wrote:
>
>> All upgrades went smoothly! Thanks for the release.
>> There is a minor problem I saw:
>> After upgrading from 4.0.6 to 4.1, the GUI dialog for moving disks
>> from one storage to another is not rendered correctly when multiple
>> disks (>8) are selected for the move.
>> please see the attachment:
>>
>
> Thanks for reporting this, would you file a bug?
> https://bugzilla.redhat.com/enter_bug.cgi?product=ovirt-engine
>
>
>>
>>
>> ***
>>
>>  Dr. Arman Khalatyan  eScience -SuperComputing
>>  Leibniz-Institut für Astrophysik Potsdam (AIP)
>>  An der Sternwarte 16, 14482 Potsdam, Germany
>>
>> ***
>>
>> On Sat, Feb 4, 2017 at 5:08 PM, Martin Perina  wrote:
>>
>>>
>>>
>>> On Fri, Feb 3, 2017 at 9:24 AM, Sandro Bonazzola 
>>> wrote:
>>>


 On Fri, Feb 3, 2017 at 9:14 AM, Yura Poltoratskiy <
 yurapolt...@gmail.com> wrote:

> I did an upgrade of ovirt-engine yesterday. There were two
> problems.
>
> The first - packages from the epel repo, solved by disabling the repo and
> downgrading the package to an existing version in the ovirt-release40 repo
> (yes, there is info in the documentation about the epel repo).
>
> The second (and it is not only for the current version) - running
> engine-setup never completes successfully because it cannot start
> ovirt-engine-notifier.service after the upgrade, and the error in the
> notifier is that there is no MAIL_SERVER. Every time I upgrade the engine
> I have the same error. Then I add MAIL_SERVER=127.0.0.1 to
> /usr/share/ovirt-engine/services/ovirt-engine-notifier/ovirt-engine-notifier.conf
> and start the notifier without problem. Is it my mistake?
>

>>> Please never change anything in /usr/share/ovirt-engine; those files
>>> are always overwritten during an upgrade. If you need to change any option
>>> in ovirt-engine-notifier, please create a new configuration file in the
>>> /etc/ovirt-engine/notifier/notifier.conf.d directory. For example, if
>>> you need to set MAIL_SERVER please create
>>> /etc/ovirt-engine/notifier/notifier.conf.d/99-custom.conf
>>> with the following content:
>>>
>>>   MAIL_SERVER=127.0.0.1
>>>
>>> After saving the file please restart ovirt-engine-notifier service:
>>>
>>>   systemctl restart ovirt-engine-notifier
>>>
>>>
 Adding Martin Perina, he may be able to assist you on this.



> And one more question. In the Events tab I can see "User vasya@internal
> logged out.", but there is no message that 'vasya' logged in. Could
> someone tell me how to debug this issue?
>

>>> Please share the complete log to analyze this, but this user may have been
>>> logged in before the upgrade and we just cleaned up its session after the
>>> upgrade.
>>>
>>>

 Martin can probably help as well here, adding also Greg and Alexander.




>
> 02.02.2017 14:19, Sandro Bonazzola пишет:
>
> Hi,
> did you install/update to 4.1.0? Let us know your experience!
> We end up hearing about it only when things don't work well, so let us know
> if it works fine for you :-)
>
> If you're not planning an update to 4.1.0 in the near future, let us
> know why.
> Maybe we can help.
>
> Thanks!
> --
> Sandro Bonazzola
> Better technology. Faster innovation. Powered by community
> collaboration.
> See how it works at redhat.com
>
>
>


 --
 Sandro Bonazzola
 Better technology. Faster innovation. Powered by community
 collaboration.
 See how it works at redhat.com

>>>
>>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-05 Thread Nir Soffer
On Sun, Feb 5, 2017 at 9:39 PM, Arman Khalatyan  wrote:

> All upgrades went smoothly! Thanks for the release.
> There is a minor problem I saw:
> After upgrading from 4.0.6 to 4.1, the GUI dialog for moving disks from
> one storage to another is not rendered correctly when multiple disks (>8)
> are selected for the move.
> please see the attachment:
>

Thanks for reporting this, would you file a bug?
https://bugzilla.redhat.com/enter_bug.cgi?product=ovirt-engine


>
>
> ***
>
>  Dr. Arman Khalatyan  eScience -SuperComputing
>  Leibniz-Institut für Astrophysik Potsdam (AIP)
>  An der Sternwarte 16, 14482 Potsdam, Germany
>
> ***
>
> On Sat, Feb 4, 2017 at 5:08 PM, Martin Perina  wrote:
>
>>
>>
>> On Fri, Feb 3, 2017 at 9:24 AM, Sandro Bonazzola 
>> wrote:
>>
>>>
>>>
>>> On Fri, Feb 3, 2017 at 9:14 AM, Yura Poltoratskiy >> > wrote:
>>>
 I did an upgrade of ovirt-engine yesterday. There were two problems.

 The first - packages from the epel repo, solved by disabling the repo and
 downgrading the package to an existing version in the ovirt-release40 repo
 (yes, there is info in the documentation about the epel repo).

 The second (and it is not only for the current version) - running
 engine-setup never completes successfully because it cannot start
 ovirt-engine-notifier.service after the upgrade, and the error in the
 notifier is that there is no MAIL_SERVER. Every time I upgrade the engine
 I have the same error. Then I add MAIL_SERVER=127.0.0.1 to
 /usr/share/ovirt-engine/services/ovirt-engine-notifier/ovirt-engine-notifier.conf
 and start the notifier without problem. Is it my mistake?

>>>
>> Please never change anything in /usr/share/ovirt-engine; those files are
>> always overwritten during an upgrade. If you need to change any option in
>> ovirt-engine-notifier, please create a new configuration file in the
>> /etc/ovirt-engine/notifier/notifier.conf.d directory. For example, if you
>> need to set MAIL_SERVER please create
>> /etc/ovirt-engine/notifier/notifier.conf.d/99-custom.conf
>> with the following content:
>>
>>   MAIL_SERVER=127.0.0.1
>>
>> After saving the file please restart ovirt-engine-notifier service:
>>
>>   systemctl restart ovirt-engine-notifier
>>
>>
>>> Adding Martin Perina, he may be able to assist you on this.
>>>
>>>
>>>
 And one more question. In the Events tab I can see "User vasya@internal
 logged out.", but there is no message that 'vasya' logged in. Could
 someone tell me how to debug this issue?

>>>
>> Please share the complete log to analyze this, but this user may have been
>> logged in before the upgrade and we just cleaned up its session after the
>> upgrade.
>>
>>
>>>
>>> Martin can probably help as well here, adding also Greg and Alexander.
>>>
>>>
>>>
>>>

 02.02.2017 14:19, Sandro Bonazzola пишет:

 Hi,
 did you install/update to 4.1.0? Let us know your experience!
 We end up hearing about it only when things don't work well, so let us know
 if it works fine for you :-)

 If you're not planning an update to 4.1.0 in the near future, let us
 know why.
 Maybe we can help.

 Thanks!
 --
 Sandro Bonazzola
 Better technology. Faster innovation. Powered by community
 collaboration.
 See how it works at redhat.com



>>>
>>>
>>> --
>>> Sandro Bonazzola
>>> Better technology. Faster innovation. Powered by community collaboration.
>>> See how it works at redhat.com
>>>
>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-05 Thread Arman Khalatyan
All upgrades went smoothly! Thanks for the release.
There is a minor problem I saw:
After upgrading from 4.0.6 to 4.1, the GUI dialog for moving disks from
one storage to another is not rendered correctly when multiple disks (>8)
are selected for the move.
please see the attachment:


***

 Dr. Arman Khalatyan  eScience -SuperComputing
 Leibniz-Institut für Astrophysik Potsdam (AIP)
 An der Sternwarte 16, 14482 Potsdam, Germany

***

On Sat, Feb 4, 2017 at 5:08 PM, Martin Perina  wrote:

>
>
> On Fri, Feb 3, 2017 at 9:24 AM, Sandro Bonazzola 
> wrote:
>
>>
>>
>> On Fri, Feb 3, 2017 at 9:14 AM, Yura Poltoratskiy 
>> wrote:
>>
>>> I did an upgrade of ovirt-engine yesterday. There were two problems.
>>>
>>> The first - packages from the epel repo, solved by disabling the repo and
>>> downgrading the package to an existing version in the ovirt-release40 repo
>>> (yes, there is info in the documentation about the epel repo).
>>>
>>> The second (and it is not only for the current version) - running
>>> engine-setup never completes successfully because it cannot start
>>> ovirt-engine-notifier.service after the upgrade, and the error in the
>>> notifier is that there is no MAIL_SERVER. Every time I upgrade the engine
>>> I have the same error. Then I add MAIL_SERVER=127.0.0.1 to
>>> /usr/share/ovirt-engine/services/ovirt-engine-notifier/ovirt-engine-notifier.conf
>>> and start the notifier without problem. Is it my mistake?
>>>
>>
> Please never change anything in /usr/share/ovirt-engine; those files are
> always overwritten during an upgrade. If you need to change any option in
> ovirt-engine-notifier, please create a new configuration file in the
> /etc/ovirt-engine/notifier/notifier.conf.d directory. For example, if you
> need to set MAIL_SERVER please create
> /etc/ovirt-engine/notifier/notifier.conf.d/99-custom.conf
> with the following content:
>
>   MAIL_SERVER=127.0.0.1
>
> After saving the file please restart ovirt-engine-notifier service:
>
>   systemctl restart ovirt-engine-notifier
>
>
>> Adding Martin Perina, he may be able to assist you on this.
>>
>>
>>
>>> And one more question. In the Events tab I can see "User vasya@internal
>>> logged out.", but there is no message that 'vasya' logged in. Could
>>> someone tell me how to debug this issue?
>>>
>>
> Please share the complete log to analyze this, but this user may have been
> logged in before the upgrade and we just cleaned up its session after the
> upgrade.
>
>
>>
>> Martin can probably help as well here, adding also Greg and Alexander.
>>
>>
>>
>>
>>>
>>> 02.02.2017 14:19, Sandro Bonazzola пишет:
>>>
>>> Hi,
>>> did you install/update to 4.1.0? Let us know your experience!
>>> We end up hearing about it only when things don't work well, so let us
>>> know if it works fine for you :-)
>>>
>>> If you're not planning an update to 4.1.0 in the near future, let us
>>> know why.
>>> Maybe we can help.
>>>
>>> Thanks!
>>> --
>>> Sandro Bonazzola
>>> Better technology. Faster innovation. Powered by community collaboration.
>>> See how it works at redhat.com
>>>
>>>
>>>
>>
>>
>> --
>> Sandro Bonazzola
>> Better technology. Faster innovation. Powered by community collaboration.
>> See how it works at redhat.com
>>
>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-04 Thread Martin Perina
On Fri, Feb 3, 2017 at 9:24 AM, Sandro Bonazzola 
wrote:

>
>
> On Fri, Feb 3, 2017 at 9:14 AM, Yura Poltoratskiy 
> wrote:
>
>> I did an upgrade of ovirt-engine yesterday. There were two problems.
>>
>> The first - packages from the epel repo, solved by disabling the repo and
>> downgrading the package to an existing version in the ovirt-release40 repo
>> (yes, there is info in the documentation about the epel repo).
>>
>> The second (and it is not only for the current version) - running
>> engine-setup never completes successfully because it cannot start
>> ovirt-engine-notifier.service after the upgrade, and the error in the
>> notifier is that there is no MAIL_SERVER. Every time I upgrade the engine
>> I have the same error. Then I add MAIL_SERVER=127.0.0.1 to
>> /usr/share/ovirt-engine/services/ovirt-engine-notifier/ovirt-engine-notifier.conf
>> and start the notifier without problem. Is it my mistake?
>>
>
Please never change anything in /usr/share/ovirt-engine; those files are
always overwritten during an upgrade. If you need to change any option in
ovirt-engine-notifier, please create a new configuration file in the
/etc/ovirt-engine/notifier/notifier.conf.d directory. For example, if you
need to set MAIL_SERVER please create
/etc/ovirt-engine/notifier/notifier.conf.d/99-custom.conf with the following
content:

  MAIL_SERVER=127.0.0.1

After saving the file please restart ovirt-engine-notifier service:

  systemctl restart ovirt-engine-notifier
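
A minimal sketch of why the drop-in works (assuming, as the drop-in directory
convention suggests, that files in notifier.conf.d are read after the packaged
defaults and the last definition wins). This only emulates the merge locally;
it is not the notifier's actual parser:

```shell
# Sketch only: emulate how a drop-in such as 99-custom.conf overrides the
# packaged default, assuming "last definition wins" semantics.
mkdir -p /tmp/notifier-demo/notifier.conf.d
printf 'MAIL_SERVER=\n' > /tmp/notifier-demo/ovirt-engine-notifier.conf
printf 'MAIL_SERVER=127.0.0.1\n' > /tmp/notifier-demo/notifier.conf.d/99-custom.conf
# Later files override earlier ones; print the effective value:
cat /tmp/notifier-demo/ovirt-engine-notifier.conf \
    /tmp/notifier-demo/notifier.conf.d/*.conf \
  | awk -F= '$1 == "MAIL_SERVER" {v = $2} END {print v}'
# prints: 127.0.0.1
```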


> Adding Martin Perina, he may be able to assist you on this.
>
>
>
>> And one more question. In the Events tab I can see "User vasya@internal
>> logged out.", but there is no message that 'vasya' logged in. Could
>> someone tell me how to debug this issue?
>>
>
Please share the complete log to analyze this, but this user may have been
logged in before the upgrade and we just cleaned up its session after the
upgrade.


>
> Martin can probably help as well here, adding also Greg and Alexander.
>
>
>
>
>>
>> 02.02.2017 14:19, Sandro Bonazzola пишет:
>>
>> Hi,
>> did you install/update to 4.1.0? Let us know your experience!
>> We end up hearing about it only when things don't work well, so let us
>> know if it works fine for you :-)
>>
>> If you're not planning an update to 4.1.0 in the near future, let us know
>> why.
>> Maybe we can help.
>>
>> Thanks!
>> --
>> Sandro Bonazzola
>> Better technology. Faster innovation. Powered by community collaboration.
>> See how it works at redhat.com
>>
>>
>>
>
>
> --
> Sandro Bonazzola
> Better technology. Faster innovation. Powered by community collaboration.
> See how it works at redhat.com
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-04 Thread Uwe Laverenz

Hi all,

Am 02.02.2017 um 13:19 schrieb Sandro Bonazzola:


did you install/update to 4.1.0? Let us know your experience!
We end up hearing about it only when things don't work well, so let us know
if it works fine for you :-)


I just updated my test environment (3 hosts, hosted engine, iSCSI) to
4.1 and it worked very well. I initially had a problem migrating my
engine VM to another host, but this could have been a local problem.


The only thing that could be improved is the online documentation (404
errors, already addressed in another thread). ;)


Otherwise everything runs very well so far, thank you for your work!

cu,
Uwe
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-04 Thread Ralf Schenk
Hello,

I have set up 3 hosts for the engine; 2 of them are working correctly. There
is no other host even having broker/agent installed. Is it possible that
the error occurs because the hosts are multihomed (management IP, IP for
storage) and can communicate via different IPs?

hosted-engine --vm-status on both working hosts seems correct: (3 is out
of order...)

[root@microcloud21 ~]# hosted-engine --vm-status


--== Host 1 status ==--

conf_on_shared_storage : True
Status up-to-date  : True
Hostname   : microcloud21.sub.mydomain.de
Host ID: 1
Engine status  : {"health": "good", "vm": "up",
"detail": "up"}
Score  : 3400
stopped: False
Local maintenance  : False
crc32  : 5941227d
local_conf_timestamp   : 152316
Host timestamp : 152302
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=152302 (Sat Feb  4 11:49:29 2017)
host-id=1
score=3400
vm_conf_refresh_time=152316 (Sat Feb  4 11:49:43 2017)
conf_on_shared_storage=True
maintenance=False
state=EngineUp
stopped=False


--== Host 2 status ==--

conf_on_shared_storage : True
Status up-to-date  : True
Hostname   : microcloud24.sub.mydomain.de
Host ID: 2
Engine status  : {"reason": "vm not running on this
host", "health": "bad", "vm": "down",
"detail": "unknown"}
Score  : 3400
stopped: False
Local maintenance  : False
crc32  : 77e25433
local_conf_timestamp   : 157637
Host timestamp : 157623
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=157623 (Sat Feb  4 11:49:34 2017)
host-id=2
score=3400
vm_conf_refresh_time=157637 (Sat Feb  4 11:49:48 2017)
conf_on_shared_storage=True
maintenance=False
state=EngineDown
stopped=False


--== Host 3 status ==--

conf_on_shared_storage : True
Status up-to-date  : False
Hostname   : microcloud27.sub.mydomain.de
Host ID: 3
Engine status  : unknown stale-data
Score  : 0
stopped: True
Local maintenance  : False
crc32  : 74798986
local_conf_timestamp   : 77946
Host timestamp : 77932
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=77932 (Fri Feb  3 15:19:25 2017)
host-id=3
score=0
vm_conf_refresh_time=77946 (Fri Feb  3 15:19:39 2017)
conf_on_shared_storage=True
maintenance=False
state=AgentStopped
stopped=True
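
For quick checks, the Hostname/Score pairs can be pulled out of status output
like the above with a short pipeline. This is just a sketch over an embedded
sample (taken from the output above), not an official tool; on a real host one
would pipe `hosted-engine --vm-status` into the same awk:

```shell
# Sketch: extract "Hostname Score" pairs from hosted-engine --vm-status
# text output; a trimmed sample is embedded here for illustration.
status='Hostname                           : microcloud21.sub.mydomain.de
Score                              : 3400
Hostname                           : microcloud27.sub.mydomain.de
Score                              : 0'
printf '%s\n' "$status" \
  | awk -F' *: *' '/^Hostname/ {h = $2} /^Score/ {print h, $2}'
# prints:
# microcloud21.sub.mydomain.de 3400
# microcloud27.sub.mydomain.de 0
```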

Am 03.02.2017 um 19:20 schrieb Simone Tiraboschi:

>
>
> On Fri, Feb 3, 2017 at 5:22 PM, Ralf Schenk  > wrote:
>
> Hello,
>
> of course:
>
> [root@microcloud27 mnt]# sanlock client status
> daemon 8a93c9ea-e242-408c-a63d-a9356bb22df5.microcloud
> p -1 helper
> p -1 listener
> p -1 status
>
> sanlock.log attached. (Beginning 2017-01-27 where everything was fine)
>
> Thanks, the issue is here:
> 2017-02-02 19:01:22+0100 4848 [1048]: s36 lockspace 
> 7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96:3:/rhev/data-center/mnt/glusterSD/glusterfs.sub.mydomain.de:_engine/7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96/dom_md/ids:0
> 2017-02-02 19:03:42+0100 4988 [12983]: s36 delta_acquire host_id 3 busy1 3 15 
> 13129 7ad427b1-fbb6-4cee-b9ee-01f596fddfbb.microcloud
> 2017-02-02 19:03:43+0100 4989 [1048]: s36 add_lockspace fail result -262
> Could you please check if you have other hosts contending for the same
> ID (id=3 in this case).
>  

-- 


*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *r...@databay.de* 

*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* 

Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Klaus Scholzen (RA)


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-03 Thread Simone Tiraboschi
On Fri, Feb 3, 2017 at 7:20 PM, Simone Tiraboschi 
wrote:

>
>
> On Fri, Feb 3, 2017 at 5:22 PM, Ralf Schenk  wrote:
>
>> Hello,
>>
>> of course:
>>
>> [root@microcloud27 mnt]# sanlock client status
>> daemon 8a93c9ea-e242-408c-a63d-a9356bb22df5.microcloud
>> p -1 helper
>> p -1 listener
>> p -1 status
>>
>> sanlock.log attached. (Beginning 2017-01-27 where everything was fine)
>>
> Thanks, the issue is here:
>
> 2017-02-02 19:01:22+0100 4848 [1048]: s36 lockspace 
> 7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96:3:/rhev/data-center/mnt/glusterSD/glusterfs.rxmgmt.databay.de:_engine/7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96/dom_md/ids:0
> 2017-02-02 19:03:42+0100 4988 [12983]: s36 delta_acquire host_id 3 busy1 3 15 
> 13129 7ad427b1-fbb6-4cee-b9ee-01f596fddfbb.microcloud
> 2017-02-02 19:03:43+0100 4989 [1048]: s36 add_lockspace fail result -262
>
> Could you please check if you have other hosts contending for the same ID
> (id=3 in this case).
>

Another option is to manually force a sanlock renewal on that host and
check what happens, something like:
sanlock client renewal -s 7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96:3:/rhev/data-
center/mnt/glusterSD/glusterfs.rxmgmt.databay.de:_engine/7c8deaa8-be02-4aaf-
b9b4-ddc8da99ad96/dom_md/ids:0
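
To spot which host is holding the contended ID, one could also grep
sanlock.log for the "delta_acquire ... busy" lines. A sketch (the sample line
is copied from the log excerpt above; on a real host the same awk would run
over /var/log/sanlock.log instead of the embedded string):

```shell
# Sketch: pull the contended host_id out of a sanlock.log
# "delta_acquire ... busy" line.
line='2017-02-02 19:03:42+0100 4988 [12983]: s36 delta_acquire host_id 3 busy1 3 15'
printf '%s\n' "$line" \
  | awk '/delta_acquire/ {for (i = 1; i < NF; i++) if ($i == "host_id") print $(i+1)}'
# prints: 3
```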


>
>
>> Bye
>>
>> Am 03.02.2017 um 16:12 schrieb Simone Tiraboschi:
>>
>> The hosted-engine storage domain is mounted for sure,
>> but the issue is here:
>> Exception: Failed to start monitoring domain
>> (sd_uuid=7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96, host_id=3): timeout
>> during domain acquisition
>>
>> The point is that in VDSM logs I see just something like:
>> 2017-02-02 21:05:22,283 INFO  (jsonrpc/1) [dispatcher] Run and protect:
>> repoStats(options=None) (logUtils:49)
>> 2017-02-02 21:05:22,285 INFO  (jsonrpc/1) [dispatcher] Run and protect:
>> repoStats, Return response: {u'a7fbaaad-7043-4391-9523-3bedcdc4fb0d':
>> {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay':
>> '0.000748727', 'lastCheck': '0.1', 'valid': True},
>> u'2b2a44fc-f2bd-47cd-b7af-00be59e30a35': {'code': 0, 'actual': True,
>> 'version': 0, 'acquired': True, 'delay': '0.00082529', 'lastCheck': '0.1',
>> 'valid': True}, u'5d99af76-33b5-47d8-99da-1f32413c7bb0': {'code': 0,
>> 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000349356',
>> 'lastCheck': '5.3', 'valid': True}, u'7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96':
>> {'code': 0, 'actual': True, 'version': 4, 'acquired': False, 'delay':
>> '0.000377052', 'lastCheck': '0.6', 'valid': True}} (logUtils:52)
>>
>> Where the other storage domains have 'acquired': True, while it's
>> always 'acquired': False for the hosted-engine storage domain.
>>
>> Could you please share your /var/log/sanlock.log from the same host and
>> the output of
>>  sanlock client status
>> ?
>>
>>
>>
>>
>> On Fri, Feb 3, 2017 at 3:52 PM, Ralf Schenk  wrote:
>>
>>> Hello,
>>>
>>> I also put host in Maintenance and restarted vdsm while ovirt-ha-agent
>>> is running. I can mount the gluster Volume "engine" manually in the host.
>>>
>>> I get this repeatedly in /var/log/vdsm.log:
>>>
>>> 2017-02-03 15:29:28,891 INFO  (MainThread) [vds] Exiting (vdsm:167)
>>> 2017-02-03 15:29:30,974 INFO  (MainThread) [vds] (PID: 11456) I am the
>>> actual vdsm 4.19.4-1.el7.centos microcloud27 (3.10.0-514.6.1.el7.x86_64)
>>> (vdsm:145)
>>> 2017-02-03 15:29:30,974 INFO  (MainThread) [vds] VDSM will run with cpu
>>> affinity: frozenset([1]) (vdsm:251)
>>> 2017-02-03 15:29:31,013 INFO  (MainThread) [storage.check] Starting
>>> check service (check:91)
>>> 2017-02-03 15:29:31,017 INFO  (MainThread) [storage.Dispatcher] Starting
>>> StorageDispatcher... (dispatcher:47)
>>> 2017-02-03 15:29:31,017 INFO  (check/loop) [storage.asyncevent] Starting
>>>  (asyncevent:122)
>>> 2017-02-03 15:29:31,156 INFO  (MainThread) [dispatcher] Run and protect:
>>> registerDomainStateChangeCallback(callbackFunc=>> object at 0x2881fc8>) (logUtils:49)
>>> 2017-02-03 15:29:31,156 INFO  (MainThread) [dispatcher] Run and protect:
>>> registerDomainStateChangeCallback, Return response: None (logUtils:52)
>>> 2017-02-03 15:29:31,160 INFO  (MainThread) [MOM] Preparing MOM interface
>>> (momIF:49)
>>> 2017-02-03 15:29:31,161 INFO  (MainThread) [MOM] Using named unix socket
>>> /var/run/vdsm/mom-vdsm.sock (momIF:58)
>>> 2017-02-03 15:29:31,162 INFO  (MainThread) [root] Unregistering all
>>> secrets (secret:91)
>>> 2017-02-03 15:29:31,164 INFO  (MainThread) [vds] Setting channels'
>>> timeout to 30 seconds. (vmchannels:223)
>>> 2017-02-03 15:29:31,165 INFO  (MainThread) [vds.MultiProtocolAcceptor]
>>> Listening at :::54321 (protocoldetector:185)
>>> 2017-02-03 15:29:31,354 INFO  (vmrecovery) [vds] recovery: completed in
>>> 0s (clientIF:495)
>>> 2017-02-03 15:29:31,371 INFO  (BindingXMLRPC) [vds] XMLRPC server
>>> running (bindingxmlrpc:63)
>>> 2017-02-03 15:29:31,471 INFO  (periodic/1) [dispatcher] Run and protect:
>>> repoStats(options=None) (logUtils:49)
>>> 2017-02-03 

Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-03 Thread Simone Tiraboschi
On Fri, Feb 3, 2017 at 5:22 PM, Ralf Schenk  wrote:

> Hello,
>
> of course:
>
> [root@microcloud27 mnt]# sanlock client status
> daemon 8a93c9ea-e242-408c-a63d-a9356bb22df5.microcloud
> p -1 helper
> p -1 listener
> p -1 status
>
> sanlock.log attached. (Beginning 2017-01-27 where everything was fine)
>
Thanks, the issue is here:

2017-02-02 19:01:22+0100 4848 [1048]: s36 lockspace
7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96:3:/rhev/data-center/mnt/glusterSD/glusterfs.rxmgmt.databay.de:_engine/7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96/dom_md/ids:0
2017-02-02 19:03:42+0100 4988 [12983]: s36 delta_acquire host_id 3
busy1 3 15 13129 7ad427b1-fbb6-4cee-b9ee-01f596fddfbb.microcloud
2017-02-02 19:03:43+0100 4989 [1048]: s36 add_lockspace fail result -262

Could you please check if you have other hosts contending for the same ID
(id=3 in this case)?
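Each hosted-engine host stores its claimed ID as `host_id=N` in /etc/ovirt-hosted-engine/hosted-engine.conf, so one way to check is to gather that file from every host and compare. A minimal sketch, assuming you have collected the file contents into a dict (the helper itself is hypothetical):

```python
def duplicate_host_ids(configs):
    """configs: hostname -> text of that host's hosted-engine.conf.
    Return {host_id: [hostnames]} for IDs claimed by more than one host."""
    claimed = {}
    for host, text in configs.items():
        for line in text.splitlines():
            if line.strip().startswith("host_id="):
                claimed.setdefault(int(line.split("=", 1)[1]), []).append(host)
    return {hid: hosts for hid, hosts in claimed.items() if len(hosts) > 1}
```

Any non-empty result means two agents will fight over the same sanlock delta lease, which produces exactly the add_lockspace failure shown above.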


> Bye
>
> Am 03.02.2017 um 16:12 schrieb Simone Tiraboschi:
>
> The hosted-engine storage domain is mounted for sure,
> but the issue is here:
> Exception: Failed to start monitoring domain 
> (sd_uuid=7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96,
> host_id=3): timeout during domain acquisition
>
> The point is that in VDSM logs I see just something like:
> 2017-02-02 21:05:22,283 INFO  (jsonrpc/1) [dispatcher] Run and protect:
> repoStats(options=None) (logUtils:49)
> 2017-02-02 21:05:22,285 INFO  (jsonrpc/1) [dispatcher] Run and protect:
> repoStats, Return response: {u'a7fbaaad-7043-4391-9523-3bedcdc4fb0d':
> {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay':
> '0.000748727', 'lastCheck': '0.1', 'valid': True},
> u'2b2a44fc-f2bd-47cd-b7af-00be59e30a35': {'code': 0, 'actual': True,
> 'version': 0, 'acquired': True, 'delay': '0.00082529', 'lastCheck': '0.1',
> 'valid': True}, u'5d99af76-33b5-47d8-99da-1f32413c7bb0': {'code': 0,
> 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000349356',
> 'lastCheck': '5.3', 'valid': True}, u'7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96':
> {'code': 0, 'actual': True, 'version': 4, 'acquired': False, 'delay':
> '0.000377052', 'lastCheck': '0.6', 'valid': True}} (logUtils:52)
>
> Where the other storage domains have 'acquired': True, while it's
> always 'acquired': False for the hosted-engine storage domain.
>
> Could you please share your /var/log/sanlock.log from the same host and
> the output of
>  sanlock client status
> ?
>
>
>
>
> On Fri, Feb 3, 2017 at 3:52 PM, Ralf Schenk  wrote:
>
>> Hello,
>>
>> I also put host in Maintenance and restarted vdsm while ovirt-ha-agent is
>> running. I can mount the gluster Volume "engine" manually in the host.
>>
>> I get this repeatedly in /var/log/vdsm.log:
>>
>> 2017-02-03 15:29:28,891 INFO  (MainThread) [vds] Exiting (vdsm:167)
>> 2017-02-03 15:29:30,974 INFO  (MainThread) [vds] (PID: 11456) I am the
>> actual vdsm 4.19.4-1.el7.centos microcloud27 (3.10.0-514.6.1.el7.x86_64)
>> (vdsm:145)
>> 2017-02-03 15:29:30,974 INFO  (MainThread) [vds] VDSM will run with cpu
>> affinity: frozenset([1]) (vdsm:251)
>> 2017-02-03 15:29:31,013 INFO  (MainThread) [storage.check] Starting check
>> service (check:91)
>> 2017-02-03 15:29:31,017 INFO  (MainThread) [storage.Dispatcher] Starting
>> StorageDispatcher... (dispatcher:47)
>> 2017-02-03 15:29:31,017 INFO  (check/loop) [storage.asyncevent] Starting
>>  (asyncevent:122)
>> 2017-02-03 15:29:31,156 INFO  (MainThread) [dispatcher] Run and protect:
>> registerDomainStateChangeCallback(callbackFunc=> at 0x2881fc8>) (logUtils:49)
>> 2017-02-03 15:29:31,156 INFO  (MainThread) [dispatcher] Run and protect:
>> registerDomainStateChangeCallback, Return response: None (logUtils:52)
>> 2017-02-03 15:29:31,160 INFO  (MainThread) [MOM] Preparing MOM interface
>> (momIF:49)
>> 2017-02-03 15:29:31,161 INFO  (MainThread) [MOM] Using named unix socket
>> /var/run/vdsm/mom-vdsm.sock (momIF:58)
>> 2017-02-03 15:29:31,162 INFO  (MainThread) [root] Unregistering all
>> secrets (secret:91)
>> 2017-02-03 15:29:31,164 INFO  (MainThread) [vds] Setting channels'
>> timeout to 30 seconds. (vmchannels:223)
>> 2017-02-03 15:29:31,165 INFO  (MainThread) [vds.MultiProtocolAcceptor]
>> Listening at :::54321 (protocoldetector:185)
>> 2017-02-03 15:29:31,354 INFO  (vmrecovery) [vds] recovery: completed in
>> 0s (clientIF:495)
>> 2017-02-03 15:29:31,371 INFO  (BindingXMLRPC) [vds] XMLRPC server running
>> (bindingxmlrpc:63)
>> 2017-02-03 15:29:31,471 INFO  (periodic/1) [dispatcher] Run and protect:
>> repoStats(options=None) (logUtils:49)
>> 2017-02-03 15:29:31,472 INFO  (periodic/1) [dispatcher] Run and protect:
>> repoStats, Return response: {} (logUtils:52)
>> 2017-02-03 15:29:31,472 WARN  (periodic/1) [MOM] MOM not available.
>> (momIF:116)
>> 2017-02-03 15:29:31,473 WARN  (periodic/1) [MOM] MOM not available, KSM
>> stats will be missing. (momIF:79)
>> 2017-02-03 15:29:31,474 ERROR (periodic/1) [root] failed to retrieve
>> Hosted Engine HA info (api:252)
>> Traceback (most recent call last):
>>   File 

Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-03 Thread Jiri Slezka

Hi,

I updated our oVirt cluster the day after 4.1.0 went public.

Upgrade was simple, but while migrating and upgrading hosts some VMs got 
stuck with 100% CPU usage and were totally unresponsive. I had to power 
them off and start them again. But it could be some problem with the 
CentOS 7.2->7.3 transition or the kvm-ev upgrade. Unfortunately I had no 
time to examine the logs yet :-(


Also I experienced one or two "UI Exception" but not a big deal.

UI is more and more polished. I really like how it shifts to the 
PatternFly look and feel.


btw. We have a standalone gluster cluster, not for VMs, just for general 
storage purposes. Is it wise to use the oVirt manager as a web UI for its 
management?


Is it safe to import this gluster into oVirt? I saw this option there but 
I don't want to break things that work :-)


At the end - thanks for your great work. I still see a lot of features 
missing in oVirt, but it is highly usable and a great piece of software. 
And also the oVirt community is nice and helpful.


Cheers,

Jiri



On 02/02/2017 01:19 PM, Sandro Bonazzola wrote:

Hi,
did you install/update to 4.1.0? Let us know your experience!
We end up knowing only when things don't work well; let us know if it
works fine for you :-)

If you're not planning an update to 4.1.0 in the near future, let us
know why.
Maybe we can help.

Thanks!
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com 


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users





Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-03 Thread Andrea Ghelardi
Running ovirt v4.0.5.5.1 and not planning to upgrade to 4.1 yet.
We are happy with the stability of our production servers and will wait for 
4.1.1 to come out.
The only real need to upgrade for us would be the added compatibility with 
Windows server 2016 guest tools.
… and the trim, of course, but we can wait a little bit longer for it…

Cheers
AG

From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf Of 
Sandro Bonazzola
Sent: Thursday, February 2, 2017 1:19 PM
To: users <users@ovirt.org>
Subject: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

Hi,
did you install/update to 4.1.0? Let us know your experience!
We end up knowing only when things don't work well; let us know if it works
fine for you :-)

If you're not planning an update to 4.1.0 in the near future, let us know why.
Maybe we can help.

Thanks!
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com<http://redhat.com>


Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-03 Thread Simone Tiraboschi
The hosted-engine storage domain is mounted for sure,
but the issue is here:
Exception: Failed to start monitoring domain
(sd_uuid=7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96, host_id=3): timeout during
domain acquisition

The point is that in VDSM logs I see just something like:
2017-02-02 21:05:22,283 INFO  (jsonrpc/1) [dispatcher] Run and protect:
repoStats(options=None) (logUtils:49)
2017-02-02 21:05:22,285 INFO  (jsonrpc/1) [dispatcher] Run and protect:
repoStats, Return response: {u'a7fbaaad-7043-4391-9523-3bedcdc4fb0d':
{'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay':
'0.000748727', 'lastCheck': '0.1', 'valid': True},
u'2b2a44fc-f2bd-47cd-b7af-00be59e30a35': {'code': 0, 'actual': True,
'version': 0, 'acquired': True, 'delay': '0.00082529', 'lastCheck': '0.1',
'valid': True}, u'5d99af76-33b5-47d8-99da-1f32413c7bb0': {'code': 0,
'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000349356',
'lastCheck': '5.3', 'valid': True},
u'7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96': {'code': 0, 'actual': True,
'version': 4, 'acquired': False, 'delay': '0.000377052', 'lastCheck':
'0.6', 'valid': True}} (logUtils:52)

Where the other storage domains have 'acquired': True, while it's
always 'acquired': False for the hosted-engine storage domain.

Could you please share your /var/log/sanlock.log from the same host and the
output of
 sanlock client status
?
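For reference, the stuck domain can also be pulled out of such a repoStats response mechanically; a minimal sketch over the dict shape shown in the VDSM log above (the helper name is ours):

```python
def unacquired_domains(repo_stats):
    """Given a repoStats response (sd_uuid -> stats dict), return the
    UUIDs whose sanlock lease is not held ('acquired': False)."""
    return sorted(uuid for uuid, stats in repo_stats.items()
                  if not stats.get("acquired"))
```

Applied to the response above, it singles out 7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96, the hosted-engine storage domain.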




On Fri, Feb 3, 2017 at 3:52 PM, Ralf Schenk  wrote:

> Hello,
>
> I also put host in Maintenance and restarted vdsm while ovirt-ha-agent is
> running. I can mount the gluster Volume "engine" manually in the host.
>
> I get this repeatedly in /var/log/vdsm.log:
>
> 2017-02-03 15:29:28,891 INFO  (MainThread) [vds] Exiting (vdsm:167)
> 2017-02-03 15:29:30,974 INFO  (MainThread) [vds] (PID: 11456) I am the
> actual vdsm 4.19.4-1.el7.centos microcloud27 (3.10.0-514.6.1.el7.x86_64)
> (vdsm:145)
> 2017-02-03 15:29:30,974 INFO  (MainThread) [vds] VDSM will run with cpu
> affinity: frozenset([1]) (vdsm:251)
> 2017-02-03 15:29:31,013 INFO  (MainThread) [storage.check] Starting check
> service (check:91)
> 2017-02-03 15:29:31,017 INFO  (MainThread) [storage.Dispatcher] Starting
> StorageDispatcher... (dispatcher:47)
> 2017-02-03 15:29:31,017 INFO  (check/loop) [storage.asyncevent] Starting
>  (asyncevent:122)
> 2017-02-03 15:29:31,156 INFO  (MainThread) [dispatcher] Run and protect:
> registerDomainStateChangeCallback(callbackFunc= at 0x2881fc8>) (logUtils:49)
> 2017-02-03 15:29:31,156 INFO  (MainThread) [dispatcher] Run and protect:
> registerDomainStateChangeCallback, Return response: None (logUtils:52)
> 2017-02-03 15:29:31,160 INFO  (MainThread) [MOM] Preparing MOM interface
> (momIF:49)
> 2017-02-03 15:29:31,161 INFO  (MainThread) [MOM] Using named unix socket
> /var/run/vdsm/mom-vdsm.sock (momIF:58)
> 2017-02-03 15:29:31,162 INFO  (MainThread) [root] Unregistering all
> secrets (secret:91)
> 2017-02-03 15:29:31,164 INFO  (MainThread) [vds] Setting channels' timeout
> to 30 seconds. (vmchannels:223)
> 2017-02-03 15:29:31,165 INFO  (MainThread) [vds.MultiProtocolAcceptor]
> Listening at :::54321 (protocoldetector:185)
> 2017-02-03 15:29:31,354 INFO  (vmrecovery) [vds] recovery: completed in 0s
> (clientIF:495)
> 2017-02-03 15:29:31,371 INFO  (BindingXMLRPC) [vds] XMLRPC server running
> (bindingxmlrpc:63)
> 2017-02-03 15:29:31,471 INFO  (periodic/1) [dispatcher] Run and protect:
> repoStats(options=None) (logUtils:49)
> 2017-02-03 15:29:31,472 INFO  (periodic/1) [dispatcher] Run and protect:
> repoStats, Return response: {} (logUtils:52)
> 2017-02-03 15:29:31,472 WARN  (periodic/1) [MOM] MOM not available.
> (momIF:116)
> 2017-02-03 15:29:31,473 WARN  (periodic/1) [MOM] MOM not available, KSM
> stats will be missing. (momIF:79)
> 2017-02-03 15:29:31,474 ERROR (periodic/1) [root] failed to retrieve
> Hosted Engine HA info (api:252)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/host/api.py", line 231, in
> _getHaInfo
> stats = instance.get_all_stats()
>   File 
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py",
> line 103, in get_all_stats
> self._configure_broker_conn(broker)
>   File 
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py",
> line 180, in _configure_broker_conn
> dom_type=dom_type)
>   File 
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py",
> line 177, in set_storage_domain
> .format(sd_type, options, e))
> RequestError: Failed to set storage domain FilesystemBackend, options
> {'dom_type': 'glusterfs', 'sd_uuid': '7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96'}:
> Request failed: <class 'ovirt_hosted_engine_ha.lib.storage_backends.BackendFailureException'>
> 2017-02-03 15:29:35,920 INFO  (Reactor thread) [ProtocolDetector.AcceptorImpl]
> Accepted connection from ::1:49506 (protocoldetector:72)
> 2017-02-03 15:29:35,929 INFO  (Reactor thread) [ProtocolDetector.Detector]
> Detected protocol stomp from ::1:49506 

Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-03 Thread Ralf Schenk
Hello,

I also put host in Maintenance and restarted vdsm while ovirt-ha-agent
is running. I can mount the gluster Volume "engine" manually in the host.

I get this repeatedly in /var/log/vdsm.log:

2017-02-03 15:29:28,891 INFO  (MainThread) [vds] Exiting (vdsm:167)
2017-02-03 15:29:30,974 INFO  (MainThread) [vds] (PID: 11456) I am the
actual vdsm 4.19.4-1.el7.centos microcloud27 (3.10.0-514.6.1.el7.x86_64)
(vdsm:145)
2017-02-03 15:29:30,974 INFO  (MainThread) [vds] VDSM will run with cpu
affinity: frozenset([1]) (vdsm:251)
2017-02-03 15:29:31,013 INFO  (MainThread) [storage.check] Starting
check service (check:91)
2017-02-03 15:29:31,017 INFO  (MainThread) [storage.Dispatcher] Starting
StorageDispatcher... (dispatcher:47)
2017-02-03 15:29:31,017 INFO  (check/loop) [storage.asyncevent] Starting
 (asyncevent:122)
2017-02-03 15:29:31,156 INFO  (MainThread) [dispatcher] Run and protect:
registerDomainStateChangeCallback(callbackFunc=) (logUtils:49)
2017-02-03 15:29:31,156 INFO  (MainThread) [dispatcher] Run and protect:
registerDomainStateChangeCallback, Return response: None (logUtils:52)
2017-02-03 15:29:31,160 INFO  (MainThread) [MOM] Preparing MOM interface
(momIF:49)
2017-02-03 15:29:31,161 INFO  (MainThread) [MOM] Using named unix socket
/var/run/vdsm/mom-vdsm.sock (momIF:58)
2017-02-03 15:29:31,162 INFO  (MainThread) [root] Unregistering all
secrets (secret:91)
2017-02-03 15:29:31,164 INFO  (MainThread) [vds] Setting channels'
timeout to 30 seconds. (vmchannels:223)
2017-02-03 15:29:31,165 INFO  (MainThread) [vds.MultiProtocolAcceptor]
Listening at :::54321 (protocoldetector:185)
2017-02-03 15:29:31,354 INFO  (vmrecovery) [vds] recovery: completed in
0s (clientIF:495)
2017-02-03 15:29:31,371 INFO  (BindingXMLRPC) [vds] XMLRPC server
running (bindingxmlrpc:63)
2017-02-03 15:29:31,471 INFO  (periodic/1) [dispatcher] Run and protect:
repoStats(options=None) (logUtils:49)
2017-02-03 15:29:31,472 INFO  (periodic/1) [dispatcher] Run and protect:
repoStats, Return response: {} (logUtils:52)
2017-02-03 15:29:31,472 WARN  (periodic/1) [MOM] MOM not available.
(momIF:116)
2017-02-03 15:29:31,473 WARN  (periodic/1) [MOM] MOM not available, KSM
stats will be missing. (momIF:79)
2017-02-03 15:29:31,474 ERROR (periodic/1) [root] failed to retrieve
Hosted Engine HA info (api:252)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/host/api.py", line 231, in
_getHaInfo
stats = instance.get_all_stats()
  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py",
line 103, in get_all_stats
self._configure_broker_conn(broker)
  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py",
line 180, in _configure_broker_conn
dom_type=dom_type)
  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py",
line 177, in set_storage_domain
.format(sd_type, options, e))
RequestError: Failed to set storage domain FilesystemBackend, options
{'dom_type': 'glusterfs', 'sd_uuid':
'7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96'}: Request failed: <class
'ovirt_hosted_engine_ha.lib.storage_backends.BackendFailureException'>
2017-02-03 15:29:35,920 INFO  (Reactor thread)
[ProtocolDetector.AcceptorImpl] Accepted connection from ::1:49506
(protocoldetector:72)
2017-02-03 15:29:35,929 INFO  (Reactor thread)
[ProtocolDetector.Detector] Detected protocol stomp from ::1:49506
(protocoldetector:127)
2017-02-03 15:29:35,930 INFO  (Reactor thread) [Broker.StompAdapter]
Processing CONNECT request (stompreactor:102)
2017-02-03 15:29:35,930 INFO  (JsonRpc (StompReactor))
[Broker.StompAdapter] Subscribe command received (stompreactor:129)
2017-02-03 15:29:36,067 INFO  (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC
call Host.ping succeeded in 0.00 seconds (__init__:515)
2017-02-03 15:29:36,071 INFO  (jsonrpc/1) [throttled] Current
getAllVmStats: {} (throttledlog:105)
2017-02-03 15:29:36,071 INFO  (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC
call Host.getAllVmStats succeeded in 0.00 seconds (__init__:515)
2017-02-03 15:29:46,435 INFO  (periodic/0) [dispatcher] Run and protect:
repoStats(options=None) (logUtils:49)
2017-02-03 15:29:46,435 INFO  (periodic/0) [dispatcher] Run and protect:
repoStats, Return response: {} (logUtils:52)
2017-02-03 15:29:46,439 ERROR (periodic/0) [root] failed to retrieve
Hosted Engine HA info (api:252)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/host/api.py", line 231, in
_getHaInfo
stats = instance.get_all_stats()
  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py",
line 103, in get_all_stats
self._configure_broker_conn(broker)
  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py",
line 180, in _configure_broker_conn
dom_type=dom_type)
  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py",
line 177, in set_storage_domain
.format(sd_type, options, e))
RequestError: Failed to set storage domain FilesystemBackend, options
{'dom_type': 'glusterfs', 'sd_uuid':

Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-03 Thread Simone Tiraboschi
I see there an ERROR on stopMonitoringDomain but I cannot see the
corresponding startMonitoringDomain; could you please look for it?

On Fri, Feb 3, 2017 at 1:16 PM, Ralf Schenk  wrote:

> Hello,
>
> attached is my vdsm.log from the host with hosted-engine HA, around the
> time frame of the agent timeout, which is not working anymore for the
> engine (the host works in oVirt and is active). It simply isn't working
> for engine HA anymore after the update.
>
> At 2017-02-02 19:25:34,248 you'll find an error corresponding to the
> agent timeout error.
>
> Bye
>
>
>
> Am 03.02.2017 um 11:28 schrieb Simone Tiraboschi:
>
> 3. Three of my hosts have the hosted engine deployed for HA. First all
>>> three were marked by a crown (running was gold and the others were silver).
>>> After upgrading, hosted-engine HA is not active anymore on the 3 deployed hosts.
>>>
>>> I can't get this host back with a working ovirt-ha-agent/broker. I already
>>> rebooted and manually restarted the services, but it isn't able to get the
>>> cluster state according to
>>> "hosted-engine --vm-status". The other hosts report the host status as
>>> "unknown stale-data"
>>>
>>> I already shut down all agents on all hosts and issued a "hosted-engine
>>> --reinitialize-lockspace" but that didn't help.
>>>
>>> Agents stops working after a timeout-error according to log:
>>>
>>> MainThread::INFO::2017-02-02 19:24:52,040::hosted_engine::8
>>> 41::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
>>> VDSM domain monitor status: PENDING
>>> MainThread::INFO::2017-02-02 19:24:59,185::hosted_engine::8
>>> 41::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
>>> VDSM domain monitor status: PENDING
>>> MainThread::INFO::2017-02-02 19:25:06,333::hosted_engine::8
>>> 41::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
>>> VDSM domain monitor status: PENDING
>>> MainThread::INFO::2017-02-02 19:25:13,554::hosted_engine::8
>>> 41::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
>>> VDSM domain monitor status: PENDING
>>> MainThread::INFO::2017-02-02 19:25:20,710::hosted_engine::8
>>> 41::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
>>> VDSM domain monitor status: PENDING
>>> MainThread::INFO::2017-02-02 19:25:27,865::hosted_engine::8
>>> 41::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
>>> VDSM domain monitor status: PENDING
>>> MainThread::ERROR::2017-02-02 19:25:27,866::hosted_engine::8
>>> 15::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_domain_monitor)
>>> Failed to start monitoring domain 
>>> (sd_uuid=7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96,
>>> host_id=3): timeout during domain acquisition
>>> MainThread::WARNING::2017-02-02 19:25:27,866::hosted_engine::4
>>> 69::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
>>> Error while monitoring engine: Failed to start monitoring domain
>>> (sd_uuid=7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96, host_id=3): timeout
>>> during domain acquisition
>>> MainThread::WARNING::2017-02-02 19:25:27,866::hosted_engine::4
>>> 72::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
>>> Unexpected error
>>> Traceback (most recent call last):
>>>   File 
>>> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py",
>>> line 443, in start_monitoring
>>> self._initialize_domain_monitor()
>>>   File 
>>> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py",
>>> line 816, in _initialize_domain_monitor
>>> raise Exception(msg)
>>> Exception: Failed to start monitoring domain
>>> (sd_uuid=7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96, host_id=3): timeout
>>> during domain acquisition
>>> MainThread::ERROR::2017-02-02 19:25:27,866::hosted_engine::4
>>> 85::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
>>> Shutting down the agent because of 3 failures in a row!
>>> MainThread::INFO::2017-02-02 19:25:32,087::hosted_engine::8
>>> 41::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
>>> VDSM domain monitor status: PENDING
>>> MainThread::INFO::2017-02-02 19:25:34,250::hosted_engine::7
>>> 69::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_stop_domain_monitor)
>>> Failed to stop monitoring domain 
>>> (sd_uuid=7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96):
>>> Storage domain is member of pool: u'domain=7c8deaa8-be02-4aaf-b9
>>> b4-ddc8da99ad96'
>>> MainThread::INFO::2017-02-02 19:25:34,254::agent::143::ovir
>>> t_hosted_engine_ha.agent.agent.Agent::(run) Agent shutting down
>>>
>> Simone, Martin, can you please follow up on this?
>>
>
> Ralph, could you please attach vdsm logs from one of your hosts for the
> relevant time frame?
>
>
> --
>
>
> *Ralf Schenk*
> fon +49 (0) 24 05 / 40 83 70 <+49%202405%20408370>
> fax +49 (0) 24 05 / 40 83 759 

Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-03 Thread Ralf Schenk
Hello,

attached is my vdsm.log from the host with hosted-engine HA, around the
time frame of the agent timeout, which is not working anymore for the
engine (the host works in oVirt and is active). It simply isn't working
for engine HA anymore after the update.

At 2017-02-02 19:25:34,248 you'll find an error corresponding to the
agent timeout error.

Bye



Am 03.02.2017 um 11:28 schrieb Simone Tiraboschi:
>
> 3. Three of my hosts have the hosted engine deployed for HA.
> First all three were marked by a crown (running was gold and
> the others were silver). After upgrading, hosted-engine HA is
> not active anymore on the 3 deployed hosts.
>
> I can't get this host back with a working ovirt-ha-agent/broker.
> I already rebooted and manually restarted the services, but it
> isn't able to get the cluster state according to
> "hosted-engine --vm-status". The other hosts report the host
> status as "unknown stale-data"
>
> I already shut down all agents on all hosts and issued a
> "hosted-engine --reinitialize-lockspace" but that didn't help.
>
> Agents stops working after a timeout-error according to log:
>
> MainThread::INFO::2017-02-02
> 
> 19:24:52,040::hosted_engine::841::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
> VDSM domain monitor status: PENDING
> MainThread::INFO::2017-02-02
> 
> 19:24:59,185::hosted_engine::841::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
> VDSM domain monitor status: PENDING
> MainThread::INFO::2017-02-02
> 
> 19:25:06,333::hosted_engine::841::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
> VDSM domain monitor status: PENDING
> MainThread::INFO::2017-02-02
> 
> 19:25:13,554::hosted_engine::841::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
> VDSM domain monitor status: PENDING
> MainThread::INFO::2017-02-02
> 
> 19:25:20,710::hosted_engine::841::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
> VDSM domain monitor status: PENDING
> MainThread::INFO::2017-02-02
> 
> 19:25:27,865::hosted_engine::841::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
> VDSM domain monitor status: PENDING
> MainThread::ERROR::2017-02-02
> 
> 19:25:27,866::hosted_engine::815::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_domain_monitor)
> Failed to start monitoring domain
> (sd_uuid=7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96, host_id=3):
> timeout during domain acquisition
> MainThread::WARNING::2017-02-02
> 
> 19:25:27,866::hosted_engine::469::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Error while monitoring engine: Failed to start monitoring
> domain (sd_uuid=7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96,
> host_id=3): timeout during domain acquisition
> MainThread::WARNING::2017-02-02
> 
> 19:25:27,866::hosted_engine::472::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Unexpected error
> Traceback (most recent call last):
>   File
> 
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py",
> line 443, in start_monitoring
> self._initialize_domain_monitor()
>   File
> 
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py",
> line 816, in _initialize_domain_monitor
> raise Exception(msg)
> Exception: Failed to start monitoring domain
> (sd_uuid=7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96, host_id=3):
> timeout during domain acquisition
> MainThread::ERROR::2017-02-02
> 
> 19:25:27,866::hosted_engine::485::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Shutting down the agent because of 3 failures in a row!
> MainThread::INFO::2017-02-02
> 
> 19:25:32,087::hosted_engine::841::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
> VDSM domain monitor status: PENDING
> MainThread::INFO::2017-02-02
> 
> 19:25:34,250::hosted_engine::769::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_stop_domain_monitor)
> Failed to stop monitoring domain
> (sd_uuid=7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96): Storage domain
> is member of pool: u'domain=7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96'
> MainThread::INFO::2017-02-02
> 
> 19:25:34,254::agent::143::ovirt_hosted_engine_ha.agent.agent.Agent::(run)
> Agent shutting down
>
> Simone, Martin, can you please follow up on this?
>
>
> 
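The shutdown rule visible in the log above ("Shutting down the agent because of 3 failures in a row!") amounts to a consecutive-failure counter. A simplified illustration of that behaviour (not the actual ovirt-ha-agent code; names are ours):

```python
MAX_CONSECUTIVE_FAILURES = 3

def run_monitor(check, cycles):
    """Call `check()` up to `cycles` times; give up after three
    consecutive failures, mirroring the agent's shutdown rule."""
    failures = 0
    for _ in range(cycles):
        try:
            check()
            failures = 0  # any success resets the counter
        except Exception:
            failures += 1
            if failures >= MAX_CONSECUTIVE_FAILURES:
                return "shutdown: %d failures in a row" % failures
    return "ok"

def make_check(results):
    """Test double: a check() that fails on each False in `results`."""
    it = iter(results)
    def check():
        if not next(it):
            raise RuntimeError("monitoring failed")
    return check
```

This is why restarting the agent alone does not help here: as long as the domain monitor keeps timing out, the counter reaches three again and the agent exits.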

Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-03 Thread Ramesh Nachimuthu




- Original Message -
> From: "Ralf Schenk" <r...@databay.de>
> To: "Ramesh Nachimuthu" <rnach...@redhat.com>
> Cc: users@ovirt.org
> Sent: Friday, February 3, 2017 4:19:02 PM
> Subject: Re: [ovirt-users] [Call for feedback] did you install/update to 
> 4.1.0?
> 
> Hello,
> 
> in reality my cluster is a hyper-converged cluster. But how do I tell
> this to the oVirt engine? Of course I activated the checkbox "Gluster"
> (already some versions ago, around 4.0.x) but that didn't change anything.
> 

Do you see any error/warning in the engine.log?

Regards,
Ramesh

> Bye
> Am 03.02.2017 um 11:18 schrieb Ramesh Nachimuthu:
> >> 2. I'm missing any gluster-specific management features as my gluster is
> >> not
> >> manageable in any way from the GUI. I expected to see my gluster now in
> >> the dashboard and be able to add volumes etc. What do I need to do to
> >> "import" my existing gluster (only one volume so far) to be manageable?
> >>
> >>
> > If it is a hyperconverged cluster, then all your hosts are already managed
> > by ovirt. So you just need to enable 'Gluster Service' in the Cluster,
> > gluster volume will be imported automatically when you enable gluster
> > service.
> >
> > If it is not a hyperconverged cluster, then you have to create a new
> > cluster and enable only 'Gluster Service'. Then you can import or add the
> > gluster hosts to this Gluster cluster.
> >
> > You may also need to define a gluster network if you are using a separate
> > network for gluster data traffic. More at
> > http://www.ovirt.org/develop/release-management/features/network/select-network-for-gluster/
> >
> >
> >
> 
> --
> 
> 
> *Ralf Schenk*
> fon +49 (0) 24 05 / 40 83 70
> fax +49 (0) 24 05 / 40 83 759
> mail *r...@databay.de* <mailto:r...@databay.de>
>   
> *Databay AG*
> Jens-Otto-Krag-Straße 11
> D-52146 Würselen
> *www.databay.de* <http://www.databay.de>
> 
> Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
> Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
> Philipp Hermanns
> Aufsichtsratsvorsitzender: Wilhelm Dohmen
> 
> 
> 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-03 Thread Ralf Schenk
Hello,

in reality my cluster is a hyper-converged cluster. But how do I tell
oVirt Engine this? Of course I activated the "Gluster" checkbox (already
some versions ago, around 4.0.x), but that didn't change anything.

Bye
Am 03.02.2017 um 11:18 schrieb Ramesh Nachimuthu:
>> 2. I'm missing any gluster-specific management features, as my gluster is
>> not manageable in any way from the GUI. I expected to see my gluster in the
>> dashboard now and be able to add volumes etc. What do I need to do to
>> "import" my existing gluster (only one volume so far) to make it manageable?
>>
>>
> If it is a hyperconverged cluster, then all your hosts are already managed by 
> ovirt. So you just need to enable 'Gluster Service' in the Cluster, gluster 
> volume will be imported automatically when you enable gluster service. 
>
> If it is not a hyperconverged cluster, then you have to create a new cluster 
> and enable only 'Gluster Service'. Then you can import or add the gluster 
> hosts to this Gluster cluster. 
>
> You may also need to define a gluster network if you are using a separate 
> network for gluster data traffic. More at 
> http://www.ovirt.org/develop/release-management/features/network/select-network-for-gluster/
>
>
>

-- 


*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *r...@databay.de* 

*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* 

Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Wilhelm Dohmen




Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-03 Thread Simone Tiraboschi
On Fri, Feb 3, 2017 at 11:17 AM, Sandro Bonazzola 
wrote:

>
>
> On Fri, Feb 3, 2017 at 10:54 AM, Ralf Schenk  wrote:
>
>> Hello,
>>
>> I upgraded my cluster of 8 hosts with gluster storage and
>> hosted-engine-ha. They were already Centos 7.3 and using Ovirt 4.0.6 and
>> gluster 3.7.x packages from storage-sig testing.
>>
>> I'm missing the storage listed under the storage tab, but this is already
>> filed as a bug. Increasing the Cluster and Storage Compatibility level and
>> also "reset emulated machine" after having upgraded one host after another,
>> without the need to shut down VMs, works well. (VMs get a sign that there
>> will be changes after reboot.)
>>
>> Important: you also have to issue a yum update on the host to upgrade
>> additional components, e.g. gluster to 3.8.x. I was frightened of this
>> step, but it worked well except for a configuration issue I was responsible
>> for in gluster.vol (I had "transport socket, rdma").
>>
>> Bugs/Quirks so far:
>>
>> 1. After restarting a single VM that used an RNG device I got an error (it
>> was German), like "RNG Device not supported by cluster". I had to disable
>> the RNG device, save the settings, then open the settings again and
>> re-enable it. Then the machine boots up.
>> I think there is a migration step missing from /dev/random to
>> /dev/urandom for existing VMs.
>>
>
> Tomas, Francesco, Michal, can you please follow up on this?
>
>
>
>> 2. I'm missing any gluster-specific management features, as my gluster is
>> not manageable in any way from the GUI. I expected to see my gluster in the
>> dashboard now and be able to add volumes etc. What do I need to do to
>> "import" my existing gluster (only one volume so far) to make it manageable?
>>
>
> Sahina, can you please follow up on this?
>
>
>> 3. Three of my hosts have the hosted engine deployed for HA. At first all
>> three were marked by a crown (the running one was gold and the others were
>> silver). After upgrading, hosted-engine HA is no longer active on the third
>> host.
>>
>> I can't get this host back with a working ovirt-ha-agent/broker. I already
>> rebooted and manually restarted the services, but it isn't able to get the
>> cluster state according to
>> "hosted-engine --vm-status". The other hosts report this host's status as
>> "unknown stale-data".
>>
>> I already shut down all agents on all hosts and issued a "hosted-engine
>> --reinitialize-lockspace", but that didn't help.
>>
>> The agent stops working after a timeout error, according to the log:
>>
>> MainThread::INFO::2017-02-02
>> 19:24:52,040::hosted_engine::841::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
>> VDSM domain monitor status: PENDING
>> MainThread::INFO::2017-02-02
>> 19:24:59,185::hosted_engine::841::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
>> VDSM domain monitor status: PENDING
>> MainThread::INFO::2017-02-02
>> 19:25:06,333::hosted_engine::841::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
>> VDSM domain monitor status: PENDING
>> MainThread::INFO::2017-02-02
>> 19:25:13,554::hosted_engine::841::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
>> VDSM domain monitor status: PENDING
>> MainThread::INFO::2017-02-02
>> 19:25:20,710::hosted_engine::841::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
>> VDSM domain monitor status: PENDING
>> MainThread::INFO::2017-02-02
>> 19:25:27,865::hosted_engine::841::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
>> VDSM domain monitor status: PENDING
>> MainThread::ERROR::2017-02-02
>> 19:25:27,866::hosted_engine::815::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_domain_monitor)
>> Failed to start monitoring domain
>> (sd_uuid=7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96, host_id=3): timeout
>> during domain acquisition
>> MainThread::WARNING::2017-02-02
>> 19:25:27,866::hosted_engine::469::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
>> Error while monitoring engine: Failed to start monitoring domain
>> (sd_uuid=7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96, host_id=3): timeout
>> during domain acquisition
>> MainThread::WARNING::2017-02-02
>> 19:25:27,866::hosted_engine::472::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
>> Unexpected error
>> Traceback (most recent call last):
>>   File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py",
>> line 443, in start_monitoring
>> self._initialize_domain_monitor()
>>   File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py",
>> line 816, in _initialize_domain_monitor
>> raise Exception(msg)
>> Exception: Failed to start monitoring domain
>> (sd_uuid=7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96, host_id=3): timeout
>> during domain acquisition
>> MainThread::ERROR::2017-02-02 

Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-03 Thread Ramesh Nachimuthu




- Original Message -
> From: "Ralf Schenk" <r...@databay.de>
> To: users@ovirt.org
> Sent: Friday, February 3, 2017 3:24:55 PM
> Subject: Re: [ovirt-users] [Call for feedback] did you install/update to 
> 4.1.0?
> 
> 
> 
> Hello,
> 
> I upgraded my cluster of 8 hosts with gluster storage and hosted-engine-ha.
> They were already Centos 7.3 and using Ovirt 4.0.6 and gluster 3.7.x
> packages from storage-sig testing.
> 
> 
> I'm missing the storage listed under the storage tab, but this is already
> filed as a bug. Increasing the Cluster and Storage Compatibility level and
> also "reset emulated machine" after having upgraded one host after another,
> without the need to shut down VMs, works well. (VMs get a sign that there
> will be changes after reboot.)
> 
> Important: you also have to issue a yum update on the host to upgrade
> additional components, e.g. gluster to 3.8.x. I was frightened of this
> step, but it worked well except for a configuration issue I was responsible
> for in gluster.vol (I had "transport socket, rdma").
> 
> 
> Bugs/Quirks so far:
> 
> 
> 1. After restarting a single VM that used an RNG device I got an error (it
> was German), like "RNG Device not supported by cluster". I had to disable
> the RNG device, save the settings, then open the settings again and
> re-enable it. Then the machine boots up.
> I think there is a migration step missing from /dev/random to /dev/urandom
> for existing VMs.
> 
> 2. I'm missing any gluster-specific management features, as my gluster is
> not manageable in any way from the GUI. I expected to see my gluster in the
> dashboard now and be able to add volumes etc. What do I need to do to
> "import" my existing gluster (only one volume so far) to make it manageable?
> 
> 

If it is a hyperconverged cluster, then all your hosts are already managed by 
oVirt. So you just need to enable 'Gluster Service' in the Cluster; the gluster 
volume will be imported automatically when you enable the gluster service. 

If it is not a hyperconverged cluster, then you have to create a new cluster 
and enable only 'Gluster Service'. Then you can import or add the gluster hosts 
to this Gluster cluster. 

You may also need to define a gluster network if you are using a separate 
network for gluster data traffic. More at 
http://www.ovirt.org/develop/release-management/features/network/select-network-for-gluster/
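The same toggle as the "Gluster Service" cluster checkbox is also exposed through the oVirt REST API as a boolean on the cluster resource. The sketch below is a dry run under stated assumptions: the engine URL, credentials, and cluster ID are placeholders, the exact `gluster_service` element name should be checked against your API version, and the `run` helper only prints the request instead of sending it:

```shell
#!/bin/sh
# Dry-run sketch: enable the Gluster service on an existing cluster via the
# oVirt REST API. All identifiers below are placeholders; "run" records and
# prints each command instead of executing it.
ENGINE="https://engine.example.com/ovirt-engine/api"
CLUSTER_ID="00000000-0000-0000-0000-000000000000"
PLAN=""
run() { PLAN="$PLAN $*"; printf '+ %s\n' "$*"; }

run curl -k -u 'admin@internal:password' -X PUT \
    -H 'Content-Type: application/xml' \
    -d '<cluster><gluster_service>true</gluster_service></cluster>' \
    "$ENGINE/clusters/$CLUSTER_ID"
```

Remove the `run` wrapper to actually send the request once the placeholders are filled in.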



> 3. Three of my hosts have the hosted engine deployed for HA. At first all
> three were marked by a crown (the running one was gold and the others were
> silver). After upgrading, hosted-engine HA is no longer active on the third
> host.
> 
> I can't get this host back with a working ovirt-ha-agent/broker. I already
> rebooted and manually restarted the services, but it isn't able to get the
> cluster state according to
> "hosted-engine --vm-status". The other hosts report this host's status as
> "unknown stale-data".
> 
> I already shut down all agents on all hosts and issued a "hosted-engine
> --reinitialize-lockspace", but that didn't help.
> 
> The agent stops working after a timeout error, according to the log:
> 
> MainThread::INFO::2017-02-02
> 19:24:52,040::hosted_engine::841::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
> VDSM domain monitor status: PENDING
> MainThread::INFO::2017-02-02
> 19:24:59,185::hosted_engine::841::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
> VDSM domain monitor status: PENDING
> MainThread::INFO::2017-02-02
> 19:25:06,333::hosted_engine::841::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
> VDSM domain monitor status: PENDING
> MainThread::INFO::2017-02-02
> 19:25:13,554::hosted_engine::841::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
> VDSM domain monitor status: PENDING
> MainThread::INFO::2017-02-02
> 19:25:20,710::hosted_engine::841::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
> VDSM domain monitor status: PENDING
> MainThread::INFO::2017-02-02
> 19:25:27,865::hosted_engine::841::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
> VDSM domain monitor status: PENDING
> MainThread::ERROR::2017-02-02
> 19:25:27,866::hosted_engine::815::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_domain_monitor)
> Failed to start monitoring domain
> (sd_uuid=7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96, host_id=3): timeout during
> domain acquisition
> MainThread::WARNING::2017-02-02
> 19:25:27,866::hosted_engine::469::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Error while monitori

Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-03 Thread Sandro Bonazzola
On Fri, Feb 3, 2017 at 10:54 AM, Ralf Schenk  wrote:

> Hello,
>
> I upgraded my cluster of 8 hosts with gluster storage and
> hosted-engine-ha. They were already Centos 7.3 and using Ovirt 4.0.6 and
> gluster 3.7.x packages from storage-sig testing.
>
> I'm missing the storage listed under the storage tab, but this is already
> filed as a bug. Increasing the Cluster and Storage Compatibility level and
> also "reset emulated machine" after having upgraded one host after another,
> without the need to shut down VMs, works well. (VMs get a sign that there
> will be changes after reboot.)
>
> Important: you also have to issue a yum update on the host to upgrade
> additional components, e.g. gluster to 3.8.x. I was frightened of this
> step, but it worked well except for a configuration issue I was responsible
> for in gluster.vol (I had "transport socket, rdma").
>
> Bugs/Quirks so far:
>
> 1. After restarting a single VM that used an RNG device I got an error (it
> was German), like "RNG Device not supported by cluster". I had to disable
> the RNG device, save the settings, then open the settings again and
> re-enable it. Then the machine boots up.
> I think there is a migration step missing from /dev/random to /dev/urandom
> for existing VMs.
>

Tomas, Francesco, Michal, can you please follow up on this?



> 2. I'm missing any gluster-specific management features, as my gluster is
> not manageable in any way from the GUI. I expected to see my gluster in the
> dashboard now and be able to add volumes etc. What do I need to do to
> "import" my existing gluster (only one volume so far) to make it manageable?
>

Sahina, can you please follow up on this?


> 3. Three of my hosts have the hosted engine deployed for HA. At first all
> three were marked by a crown (the running one was gold and the others were
> silver). After upgrading, hosted-engine HA is no longer active on the third
> host.
>
> I can't get this host back with a working ovirt-ha-agent/broker. I already
> rebooted and manually restarted the services, but it isn't able to get the
> cluster state according to
> "hosted-engine --vm-status". The other hosts report this host's status as
> "unknown stale-data".
>
> I already shut down all agents on all hosts and issued a "hosted-engine
> --reinitialize-lockspace", but that didn't help.
>
> The agent stops working after a timeout error, according to the log:
>
> MainThread::INFO::2017-02-02
> 19:24:52,040::hosted_engine::841::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
> VDSM domain monitor status: PENDING
> MainThread::INFO::2017-02-02
> 19:24:59,185::hosted_engine::841::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
> VDSM domain monitor status: PENDING
> MainThread::INFO::2017-02-02
> 19:25:06,333::hosted_engine::841::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
> VDSM domain monitor status: PENDING
> MainThread::INFO::2017-02-02
> 19:25:13,554::hosted_engine::841::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
> VDSM domain monitor status: PENDING
> MainThread::INFO::2017-02-02
> 19:25:20,710::hosted_engine::841::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
> VDSM domain monitor status: PENDING
> MainThread::INFO::2017-02-02
> 19:25:27,865::hosted_engine::841::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
> VDSM domain monitor status: PENDING
> MainThread::ERROR::2017-02-02
> 19:25:27,866::hosted_engine::815::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_domain_monitor)
> Failed to start monitoring domain
> (sd_uuid=7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96, host_id=3): timeout
> during domain acquisition
> MainThread::WARNING::2017-02-02
> 19:25:27,866::hosted_engine::469::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Error while monitoring engine: Failed to start monitoring domain
> (sd_uuid=7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96, host_id=3): timeout
> during domain acquisition
> MainThread::WARNING::2017-02-02
> 19:25:27,866::hosted_engine::472::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Unexpected error
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py",
> line 443, in start_monitoring
> self._initialize_domain_monitor()
>   File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py",
> line 816, in _initialize_domain_monitor
> raise Exception(msg)
> Exception: Failed to start monitoring domain
> (sd_uuid=7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96, host_id=3): timeout
> during domain acquisition
> MainThread::ERROR::2017-02-02
> 19:25:27,866::hosted_engine::485::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Shutting down the agent because of 3 failures in a row!
> 

Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-03 Thread Ralf Schenk
Hello,

I upgraded my cluster of 8 hosts with gluster storage and
hosted-engine-ha. They were already Centos 7.3 and using Ovirt 4.0.6 and
gluster 3.7.x packages from storage-sig testing.

I'm missing the storage listed under the storage tab, but this is already
filed as a bug. Increasing the Cluster and Storage Compatibility level and
also "reset emulated machine" after having upgraded one host after
another, without the need to shut down VMs, works well. (VMs get a sign
that there will be changes after reboot.)

Important: you also have to issue a yum update on the host to upgrade
additional components, e.g. gluster to 3.8.x. I was frightened of
this step, but it worked well except for a configuration issue I was
responsible for in gluster.vol (I had "transport socket, rdma").
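The rolling host upgrade described above can be sketched as a dry-run script. Host names are placeholders, the maintenance step happens in the engine UI/API rather than on the shell, and the `run` helper only prints each command instead of executing it:

```shell
#!/bin/sh
# Dry-run sketch of a rolling host upgrade (one host at a time).
# "run" records and prints each command; nothing is actually executed,
# so this is safe to run anywhere. Host names are placeholders.
PLAN=""
run() { PLAN="$PLAN $*"; printf '+ %s\n' "$*"; }

for host in ovirt-host1 ovirt-host2; do
    # Put the host into maintenance from the engine UI/API first, then:
    run ssh "$host" yum update                    # pulls oVirt 4.1 and gluster 3.8.x packages
    run ssh "$host" systemctl restart vdsmd       # pick up the updated VDSM
    run ssh "$host" systemctl restart glusterd    # only on hosts serving gluster bricks
    # Re-activate the host in the engine before moving on to the next one.
done
```

Checking gluster.vol for a leftover `transport socket, rdma` line before restarting glusterd would avoid the configuration issue mentioned above.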

Bugs/Quirks so far:

1. After restarting a single VM that used an RNG device I got an error (it
was German), like "RNG Device not supported by cluster". I had to disable
the RNG device, save the settings, then open the settings again and
re-enable it. Then the machine boots up.
I think there is a migration step missing from /dev/random to
/dev/urandom for existing VMs.

2. I'm missing any gluster-specific management features, as my gluster is
not manageable in any way from the GUI. I expected to see my gluster in
the dashboard now and be able to add volumes etc. What do I need to do to
"import" my existing gluster (only one volume so far) to make it manageable?

3. Three of my hosts have the hosted engine deployed for HA. At first all
three were marked by a crown (the running one was gold and the others
were silver). After upgrading, hosted-engine HA is no longer active on
the third host.

I can't get this host back with a working ovirt-ha-agent/broker. I already
rebooted and manually restarted the services, but it isn't able to get the
cluster state according to
"hosted-engine --vm-status". The other hosts report this host's status as
"unknown stale-data".

I already shut down all agents on all hosts and issued a "hosted-engine
--reinitialize-lockspace", but that didn't help.
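For reference, the recovery sequence attempted here, with global maintenance added around it (an assumption — the post does not say whether global maintenance was enabled first), looks roughly like this as a dry-run sketch. Host names are placeholders and the `run` helper only prints the commands:

```shell
#!/bin/sh
# Dry-run sketch of re-initializing the hosted-engine lockspace across the
# three HE hosts. "run" records and prints each command; nothing executes.
PLAN=""
run() { PLAN="$PLAN $*"; printf '+ %s\n' "$*"; }

run hosted-engine --set-maintenance --mode=global
for host in he-host1 he-host2 he-host3; do
    run ssh "$host" systemctl stop ovirt-ha-agent ovirt-ha-broker
done
run hosted-engine --reinitialize-lockspace
for host in he-host1 he-host2 he-host3; do
    run ssh "$host" systemctl start ovirt-ha-broker ovirt-ha-agent
done
run hosted-engine --set-maintenance --mode=none
run hosted-engine --vm-status
```

The ordering matters: all agents must be stopped before the lockspace is re-initialized, and the broker is started before the agent that depends on it.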

The agent stops working after a timeout error, according to the log:

MainThread::INFO::2017-02-02
19:24:52,040::hosted_engine::841::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
VDSM domain monitor status: PENDING
MainThread::INFO::2017-02-02
19:24:59,185::hosted_engine::841::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
VDSM domain monitor status: PENDING
MainThread::INFO::2017-02-02
19:25:06,333::hosted_engine::841::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
VDSM domain monitor status: PENDING
MainThread::INFO::2017-02-02
19:25:13,554::hosted_engine::841::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
VDSM domain monitor status: PENDING
MainThread::INFO::2017-02-02
19:25:20,710::hosted_engine::841::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
VDSM domain monitor status: PENDING
MainThread::INFO::2017-02-02
19:25:27,865::hosted_engine::841::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
VDSM domain monitor status: PENDING
MainThread::ERROR::2017-02-02
19:25:27,866::hosted_engine::815::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_domain_monitor)
Failed to start monitoring domain
(sd_uuid=7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96, host_id=3): timeout
during domain acquisition
MainThread::WARNING::2017-02-02
19:25:27,866::hosted_engine::469::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Error while monitoring engine: Failed to start monitoring domain
(sd_uuid=7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96, host_id=3): timeout
during domain acquisition
MainThread::WARNING::2017-02-02
19:25:27,866::hosted_engine::472::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Unexpected error
Traceback (most recent call last):
  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py",
line 443, in start_monitoring
self._initialize_domain_monitor()
  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py",
line 816, in _initialize_domain_monitor
raise Exception(msg)
Exception: Failed to start monitoring domain
(sd_uuid=7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96, host_id=3): timeout
during domain acquisition
MainThread::ERROR::2017-02-02
19:25:27,866::hosted_engine::485::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Shutting down the agent because of 3 failures in a row!
MainThread::INFO::2017-02-02
19:25:32,087::hosted_engine::841::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
VDSM domain monitor status: PENDING
MainThread::INFO::2017-02-02
19:25:34,250::hosted_engine::769::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_stop_domain_monitor)
Failed to stop monitoring domain

Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-03 Thread Sergey Kulikov
On Thu, Feb 2, 2017 at 9:59 PM, <ser...@msm.ru> wrote:


Updated from 4.0.6
Docs are quite incomplete; installing ovirt-release41 manually on CentOS hypervisors (HV) and oVirt nodes isn't mentioned, you need to guess.
Also links in release notes are broken ( https://www.ovirt.org/release/4.1.0/ )
They are going to https://www.ovirt.org/release/4.1.0/Hosted_Engine_Howto , but docs for 4.1.0 are absent.


Thanks, opened https://github.com/oVirt/ovirt-site/issues/765
I'd like to ask you if you can push your suggestion on documentation fixes / improvements editing the website following "Edit this page on GitHub" link at the bottom of the page.
Any help getting documentation updated and more useful to users is really appreciated.
Sure, thanks for pointing to that feature, you've already done this work for me)
I'll use github for any new suggestions.
Upgrade went well, everything migrated without problems (I needed to restart VMs only to change the cluster level to 4.1).
Good news: the SPICE HTML5 client is now working for me on a Windows client with Firefox; before, on 4.x, it was sending connect requests forever.

There are some bugs I've found playing with the new version:
1) some storage tabs display "No items to display";
for example:
if I expand System\Data centers\[dc name]\ and select Storage, it displays nothing in the main tab, but displays all domains in the tree;
if I select [dc name] and the Storage tab, also nothing,
but in the System\Storage tab all domains are present,
and in the Clusters\[cluster name]\Storage tab they are present too.

Thanks, opened https://bugzilla.redhat.com/show_bug.cgi?id=1418924

 

2) links to embedded files and clients aren't working, engine says 404, examples:
https://[your manager's address]/ovirt-engine/services/files/spice/usbdk-x64.msi
https://[your manager's address]/ovirt-engine/services/files/spice/virt-viewer-x64.msi
and others,
but they are in the docs (in oVirt and also in RHEL)


Thanks, opened https://bugzilla.redhat.com/show_bug.cgi?id=1418923

 

3) there is also a link in the "Console options" menu (right click on a VM) called "Console Client Resources"; it goes to a dead location:
http://www.ovirt.org/documentation/admin-guide/virt/console-client-resources 
If you are going to fix issue №2, maybe adding links directly to the embedded installation files would be more helpful for users) 


Thanks, opened https://bugzilla.redhat.com/show_bug.cgi?id=1418921

 
4) A little disappointed about "pass discards" on NFS storage: I've found the NFS implementation (even 4.1) in CentOS 7 doesn't support
fallocate(FALLOC_FL_PUNCH_HOLE), which qemu uses for file storage; it was added only in kernel 3.18. Sparsify is also not working, but I'll mail a separate
thread with this question.
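Whether a given mount supports hole punching can be probed directly. This sketch uses util-linux's `fallocate` tool on a throwaway file; set MOUNT to the NFS data-domain mount point (the /tmp default is just a stand-in so the script runs anywhere):

```shell
#!/bin/sh
# Probe FALLOC_FL_PUNCH_HOLE support by punching a hole in a scratch file.
# MOUNT defaults to /tmp; point it at the NFS mount to test NFS itself.
MOUNT="${MOUNT:-/tmp}"
TESTFILE="$MOUNT/punch-hole-test.$$"
dd if=/dev/zero of="$TESTFILE" bs=1M count=4 2>/dev/null
if fallocate --punch-hole --offset 0 --length 1M "$TESTFILE" 2>/dev/null; then
    RESULT="supported"
else
    RESULT="not-supported"     # e.g. NFS older than 4.2 or kernel < 3.18
fi
echo "punch hole: $RESULT"
rm -f "$TESTFILE"
```

On an unsupported filesystem `fallocate` fails with EOPNOTSUPP, which is what qemu runs into when passing discards through to the file.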

-- 



Thursday, February 2, 2017, 15:19:29:





Hi,
did you install/update to 4.1.0? Let us know your experience!
We end up knowing only when things don't work well, let us know it works fine for you :-)

If you're not planning an update to 4.1.0 in the near future, let us know why.
Maybe we can help.

Thanks!
-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com





-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com






Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-03 Thread Sandro Bonazzola
On Fri, Feb 3, 2017 at 9:14 AM, Yura Poltoratskiy 
wrote:

> I've done an upgrade of ovirt-engine yesterday. There were two problems.
>
> The first - packages from the epel repo; solved by disabling the repo and
> downgrading the package to an existing version in the ovirt-release40 repo
> (yes, there is info in the documentation about the epel repo).
>
> The second (and it is not only for the current version) - engine-setup
> never completes successfully because it cannot start
> ovirt-engine-notifier.service after the upgrade, and the error in the
> notifier is that there is no MAIL_SERVER. Every time I upgrade the engine I
> get the same error. Then I add MAIL_SERVER=127.0.0.1 to /usr/share/ovirt-engine/
> services/ovirt-engine-notifier/ovirt-engine-notifier.conf and start the
> notifier without problems. Is it my mistake?
>

Adding Martin Perina, he may be able to assist you on this.



> And one more question. In Events tab I can see "User vasya@internal
> logged out.", but there is no message that 'vasya' logged in. Could
> someone tell me how to debug this issue?
>

Martin can probably help as well here, adding also Greg and Alexander.




>
> 02.02.2017 14:19, Sandro Bonazzola пишет:
>
> Hi,
> did you install/update to 4.1.0? Let us know your experience!
> We end up knowing only when things don't work well, let us know it works
> fine for you :-)
>
> If you're not planning an update to 4.1.0 in the near future, let us know
> why.
> Maybe we can help.
>
> Thanks!
> --
> Sandro Bonazzola
> Better technology. Faster innovation. Powered by community collaboration.
> See how it works at redhat.com
>
>
>
>


-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com


Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-03 Thread Yura Poltoratskiy

I've done an upgrade of ovirt-engine yesterday. There were two problems.

The first - packages from the epel repo; solved by disabling the repo and 
downgrading the package to an existing version in the ovirt-release40 repo 
(yes, there is info in the documentation about the epel repo).


The second (and it is not only for the current version) - engine-setup 
never completes successfully because it cannot start 
ovirt-engine-notifier.service after the upgrade, and the error in the 
notifier is that there is no MAIL_SERVER. Every time I upgrade the engine 
I get the same error. Then I add MAIL_SERVER=127.0.0.1 to 
/usr/share/ovirt-engine/services/ovirt-engine-notifier/ovirt-engine-notifier.conf 
and start the notifier without problems. Is it my mistake?
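One way to make that workaround survive upgrades — assuming the stock notifier layout, where files under /etc override the packaged defaults in /usr/share — is a drop-in config file instead of editing the shipped file, which package updates can overwrite:

```shell
# Hypothetical drop-in; the notifier reads overrides from
# /etc/ovirt-engine/notifier/notifier.conf.d/, so an upgrade of the
# ovirt-engine package will not clobber this setting.
cat > /etc/ovirt-engine/notifier/notifier.conf.d/99-mail-server.conf <<'EOF'
MAIL_SERVER=127.0.0.1
EOF
systemctl restart ovirt-engine-notifier
```

The exact drop-in directory is an assumption worth verifying against the header comment of the shipped ovirt-engine-notifier.conf.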


And one more question. In Events tab I can see "User vasya@internal 
logged out.", but there is no message that 'vasya' logged in. Could 
someone tell me how to debug this issue?



02.02.2017 14:19, Sandro Bonazzola пишет:

Hi,
did you install/update to 4.1.0? Let us know your experience!
We end up knowing only when things don't work well, let us know it 
works fine for you :-)


If you're not planning an update to 4.1.0 in the near future, let us 
know why.

Maybe we can help.

Thanks!
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com 


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users




Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-02 Thread Sandro Bonazzola
On Thu, Feb 2, 2017 at 9:59 PM,  wrote:

>
>
> Updated from 4.0.6
> Docs are quite incomplete; installing ovirt-release41 manually on CentOS
> hypervisors (HV) and oVirt nodes isn't mentioned, you need to guess.
> Also links in release notes are broken ( https://www.ovirt.org/release/
> 4.1.0/ )
> They are going to https://www.ovirt.org/release/4.1.0/Hosted_Engine_Howto
> 
> , but docs for 4.1.0 are absent.
>
>
Thanks, opened https://github.com/oVirt/ovirt-site/issues/765
I'd like to ask you if you can push your suggestion on documentation fixes
/ improvements editing the website following "Edit this page on GitHub"
link at the bottom of the page.
Any help getting documentation updated and more useful to users is really
appreciated.


> Upgrade went well, everything migrated without problems (I needed to restart
> VMs only to change the cluster level to 4.1).
> Good news: the SPICE HTML5 client is now working for me on a Windows client
> with Firefox; before, on 4.x, it was sending connect requests forever.
>
> There are some bugs I've found playing with the new version:
> 1) some storage tabs display "No items to display";
> for example:
> if I expand System\Data centers\[dc name]\ and select Storage, it
> displays nothing in the main tab, but displays all domains in the tree;
> if I select [dc name] and the Storage tab, also nothing,
> but in the System\Storage tab all domains are present,
> and in the Clusters\[cluster name]\Storage tab they are present too.
>

Thanks, opened https://bugzilla.redhat.com/show_bug.cgi?id=1418924



>
> 2) links to embedded files and clients aren't working, engine says 404,
> examples:
> https://[your manager's address]/ovirt-engine/services/files/spice/usbdk-
> x64.msi
> https://[your manager's address]/ovirt-engine/services/files/spice/virt-
> viewer-x64.msi
> and others,
> but they are in the docs (in oVirt and also in RHEL)
>


Thanks, opened https://bugzilla.redhat.com/show_bug.cgi?id=1418923



>
> 3) there is also a link in the "Console options" menu (right click on a VM)
> called "Console Client Resources"; it goes to a dead location:
> http://www.ovirt.org/documentation/admin-guide/
> virt/console-client-resources
> If you are going to fix issue №2, maybe adding links directly to the
> embedded installation files would be more helpful for users)
>
>
Thanks, opened https://bugzilla.redhat.com/show_bug.cgi?id=1418921



> 4) A little disappointed about "pass discards" on NFS storage: I've found
> that the NFS implementation (even 4.1) in CentOS 7 doesn't support
> fallocate(FALLOC_FL_PUNCH_HOLE), which QEMU uses for file storage; it was
> added only in kernel 3.18. Sparsify is also not working, but I'll start a
> separate thread about this.
>
>
>
>
>
>
>
> -- Thursday, February 2, 2017, 15:19:29:
>
> Hi,
> did you install/update to 4.1.0? Let us know your experience!
> We end up knowing only when things don't work well; let us know if it works
> fine for you :-)
>
> If you're not planning an update to 4.1.0 in the near future, let us know
> why.
> Maybe we can help.
>
> Thanks!
> --
> Sandro Bonazzola
> Better technology. Faster innovation. Powered by community collaboration.
> See how it works at redhat.com
>



-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-02 Thread Sandro Bonazzola
On Fri, Feb 3, 2017 at 5:51 AM, Shalabh Goel 
wrote:

> Hi,
>
> I am having the following issue on two of my nodes after upgrading: the
> oVirt engine says that it cannot find the ovirtmgmt network on the nodes,
> and hence the nodes are set to non-operational. More details are in the
> following message.
>
>
> Thanks
>
> Shalabh Goel
>
>
>> --
>>
>> Message: 2
>> Date: Thu, 2 Feb 2017 17:40:05 +0530
>> From: Shalabh Goel 
>> To: users 
>> Subject: [ovirt-users] problem after rebooting the node
>> Message-ID:
>> 

Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-02 Thread Sandro Bonazzola
On Fri, Feb 3, 2017 at 7:02 AM, Lars Seipel  wrote:

> On Thu, Feb 02, 2017 at 01:19:29PM +0100, Sandro Bonazzola wrote:
> > did you install/update to 4.1.0? Let us know your experience!
> > We end up knowing only when things don't work well; let us know if it
> > works fine for you :-)
>
> Will do that in a week or so. What's the preferred way to upgrade to
> 4.1.0 starting from a 4.0.x setup with a hosted engine?
>
> Is it recommended to use engine-setup/yum (i.e. chapter 2 of the Upgrade
> Guide) or would you prefer an appliance upgrade using hosted-engine(8)
> as described in the HE guide?
>

The appliance upgrade flow was designed to help with transitioning from 3.6
el6 to 4.0 el7 appliances.
I would recommend using engine-setup/yum within the appliance to upgrade
the engine.
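For readers following this path, here is a rough sketch of the in-appliance upgrade. It is an outline under stated assumptions, not the definitive procedure: the repo URL and exact package set may differ per release, so verify them against the 4.1 release notes before running anything.

```shell
# Hedged outline of upgrading the engine inside the hosted-engine appliance.
# Run the maintenance commands on a hosted-engine host; run the yum/engine-setup
# steps inside the engine VM. Repo URL and package globs are assumptions taken
# from the usual oVirt release packaging -- double-check them for your version.
hosted-engine --set-maintenance --mode=global   # on a host: freeze HA agents

# Inside the engine VM:
yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release41.rpm
yum update "ovirt-*-setup*"                     # pull the new setup packages
engine-setup                                    # migrate the engine to 4.1
yum update                                      # update the remaining packages

# Back on a host, once engine-setup finishes cleanly:
hosted-engine --set-maintenance --mode=none     # resume HA
```

The global maintenance step matters: it stops the HA agents from restarting or migrating the engine VM while engine-setup is rewriting its database.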


-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com


Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-02 Thread Lars Seipel
On Thu, Feb 02, 2017 at 01:19:29PM +0100, Sandro Bonazzola wrote:
> did you install/update to 4.1.0? Let us know your experience!
> We end up knowing only when things don't work well; let us know if it works
> fine for you :-)

Will do that in a week or so. What's the preferred way to upgrade to
4.1.0 starting from a 4.0.x setup with a hosted engine?

Is it recommended to use engine-setup/yum (i.e. chapter 2 of the Upgrade
Guide) or would you prefer an appliance upgrade using hosted-engine(8)
as described in the HE guide?


Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-02 Thread Shalabh Goel
Hi,

I am having the following issue on two of my nodes after upgrading: the
oVirt engine says that it cannot find the ovirtmgmt network on the nodes,
and hence the nodes are set to non-operational. More details are in the
following message.


Thanks

Shalabh Goel


> --
>
> Message: 2
> Date: Thu, 2 Feb 2017 17:40:05 +0530
> From: Shalabh Goel 
> To: users 
> Subject: [ovirt-users] problem after rebooting the node
> Message-ID:
> 

Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-02 Thread serg_k



Updated from 4.0.6
Docs are quite incomplete: they don't mention that ovirt-release41 must be installed manually on CentOS hypervisors and oVirt nodes, so you have to guess.
Also, links in the release notes are broken ( https://www.ovirt.org/release/4.1.0/ ).
They point to https://www.ovirt.org/release/4.1.0/Hosted_Engine_Howto, but docs for 4.1.0 are absent.

The upgrade went well; everything migrated without problems (I only needed to restart VMs to raise the cluster level to 4.1).
Good news: the SPICE HTML 5 client now works for me on a Windows client with Firefox; on 4.0.x it kept sending connect requests forever.

There are some bugs I've found while playing with the new version:
1) Some storage tabs display "No items to display".
for example:
if I expand System\Data centers\[dc name]\ and select Storage, the main tab displays nothing, but all domains show in the tree;
if I select [dc name] and the Storage tab, also nothing;
but in the System \ Storage tab all domains are present,
and in the Clusters\[cluster name]\ Storage tab they are present too.

2) Links to embedded files and clients aren't working; the engine returns 404. Examples:
https://[your manager's address]/ovirt-engine/services/files/spice/usbdk-x64.msi
https://[your manager's address]/ovirt-engine/services/files/spice/virt-viewer-x64.msi
and others,
but they are referenced in the docs (both oVirt and RHEL).

3) There is also a link in the "Console options" menu (right-click on a VM) called "Console Client Resources"; it points to a dead location:
http://www.ovirt.org/documentation/admin-guide/virt/console-client-resources
If you are going to fix issue #2, adding direct links to the embedded installation files would be more helpful for users.

4) A little disappointed about "pass discards" on NFS storage: I've found that the NFS implementation (even 4.1) in CentOS 7 doesn't support
fallocate(FALLOC_FL_PUNCH_HOLE), which QEMU uses for file storage; it was added only in kernel 3.18. Sparsify is also not working, but I'll start
a separate thread about this.
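For anyone wanting to check the same thing on their own storage, a small probe is possible with util-linux's fallocate(1), which issues the same FALLOC_FL_PUNCH_HOLE call QEMU uses. This is a sketch: the mount point passed at the bottom is a placeholder you should replace with your NFS storage domain mount.

```shell
#!/bin/sh
# Probe whether a filesystem supports fallocate(FALLOC_FL_PUNCH_HOLE)
# by punching a hole in a scratch file. Returns 0 if supported, 1 if not.
probe_punch_hole() {
    dir=$1
    f=$(mktemp "$dir/punchtest.XXXXXX") || return 2
    # Write a few data blocks so there is something to punch out.
    dd if=/dev/zero of="$f" bs=4096 count=4 status=none
    if fallocate --punch-hole --offset 0 --length 4096 "$f" 2>/dev/null; then
        echo "punch-hole supported on $dir"
        rc=0
    else
        # Typical failure cases: NFS without server-side support, old kernels.
        echo "punch-hole NOT supported on $dir"
        rc=1
    fi
    rm -f "$f"
    return $rc
}

probe_punch_hole /tmp    # placeholder: point this at your NFS mount instead
```

If the probe fails on the NFS mount but succeeds on a local filesystem, the limitation is in the NFS client/server combination rather than in oVirt or QEMU.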

-- 

Thursday, February 2, 2017, 15:19:29:

Hi,
did you install/update to 4.1.0? Let us know your experience!
We end up knowing only when things don't work well; let us know if it works fine for you :-)

If you're not planning an update to 4.1.0 in the near future, let us know why.
Maybe we can help.

Thanks!
-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com






Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-02 Thread Yaniv Kaul
On Thu, Feb 2, 2017 at 4:23 PM, Краснобаев Михаил  wrote:

> Hi,
>
> upgraded my cluster (3 hosts, engine, nfs-share) to the latest 4.1 release
> and CentOS 7.3 (from 4.0.6).
>
> Did the following:
>
> 1. Upgraded the engine machine to CentOS 7.3
> 2. Upgraded engine packages and ran "engine-setup"
> 3. Upgraded hosts one by one to 7.3 + packages from the new 4.1 repo and
> refreshed host capabilities.
> 4. Raised cluster and datacenter compatibility level to 4.1.
> 5. Restarted virtual machines and tested migration.
> 6. Profit! Everything went really smoothly. No errors.
>
> Now trying to figure out how the sparsify function works. Do I need to run
> trimming from inside the VM first?
>

If you've configured it to use virtio-SCSI, and DISCARD is enabled, you
can. But I believe virt-sparsify does a bit of this on its own.

BTW, depending on the OS, if DISCARD is enabled, I would not do anything -
for example, in Fedora, there's a systemd timer that once a week runs
fstrim for you.

If not, then the VM has to be shut down, and then you can run virt-sparsify.
Y.
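To make the two options above concrete, here is a hedged sketch. The unit name is the one shipped on Fedora/RHEL-family systems, and the disk path is a placeholder; check your distribution and storage layout before running either.

```shell
# Option 1: trim from inside the guest (assumes virtio-SCSI with DISCARD
# enabled on the disk, as discussed above).
fstrim -av                            # trim all mounted filesystems supporting it
systemctl enable --now fstrim.timer   # or rely on the periodic systemd timer

# Option 2: sparsify the image from the host, with the VM shut down.
# /path/to/disk.img is a placeholder for the VM disk image.
virt-sparsify --in-place /path/to/disk.img
```

Option 1 frees space continuously with no downtime; option 2 reclaims everything in one pass but requires the VM to be powered off while the image is rewritten.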


>
> Best regards, Mikhail.
>
>
>
> 02.02.2017, 15:19, "Sandro Bonazzola" :
>
> Hi,
> did you install/update to 4.1.0? Let us know your experience!
> We end up knowing only when things don't work well; let us know if it works
> fine for you :-)
>
> If you're not planning an update to 4.1.0 in the near future, let us know
> why.
> Maybe we can help.
>
> Thanks!
> --
> Sandro Bonazzola
> Better technology. Faster innovation. Powered by community collaboration.
> See how it works at redhat.com
> ,
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
> --
> Best regards, Краснобаев Михаил.
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-02 Thread Краснобаев Михаил
Hi,

> Why did you have to restart VMs for the migration to work? Is it mandatory for an upgrade?

I had to restart the VMs (even shutdown and start) for the raised compatibility level to kick in. Migration works even if you don't restart the VMs.

> Is it mandatory for an upgrade?

No. But at some point you will have to, or the VMs' cluster compatibility level stays at the previous version.

Best regards, Mikhail

02.02.2017, 17:25, "Fernando Frediani" :

> Hello
>
> Thanks for sharing your procedures.
>
> Why did you have to restart VMs for the migration to work ? Is it mandatory for an upgrade ?
>
> Fernando


Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-02 Thread Fernando Frediani

Hello

Thanks for sharing your procedures.

Why did you have to restart VMs for the migration to work ? Is it 
mandatory for an upgrade ?


Fernando


On 02/02/2017 12:23, Краснобаев Михаил wrote:

Hi,
upgraded my cluster (3 hosts, engine, nfs-share) to the latest 4.1 
release and CentOS 7.3 (from 4.0.6).

Did the following:
1. Upgraded the engine machine to CentOS 7.3
2. Upgraded engine packages and ran "engine-setup"
3. Upgraded hosts one by one to 7.3 + packages from the new 4.1 repo 
and refreshed host capabilities.

4. Raised cluster and datacenter compatibility level to 4.1.
5. Restarted virtual machines and tested migration.
6. Profit! Everything went really smoothly. No errors.
Now trying to figure out how the sparsify function works. Do I need to 
run trimming from inside the VM first?

Best regards, Mikhail.
02.02.2017, 15:19, "Sandro Bonazzola" :

Hi,
did you install/update to 4.1.0? Let us know your experience!
We end up knowing only when things don't work well; let us know if it 
works fine for you :-)
If you're not planning an update to 4.1.0 in the near future, let us 
know why.

Maybe we can help.
Thanks!
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com 
,

___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users


--
Best regards, Краснобаев Михаил.


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users




Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-02 Thread Краснобаев Михаил
Hi,

upgraded my cluster (3 hosts, engine, nfs-share) to the latest 4.1 release and CentOS 7.3 (from 4.0.6).

Did the following:

1. Upgraded the engine machine to CentOS 7.3
2. Upgraded engine packages and ran "engine-setup"
3. Upgraded hosts one by one to 7.3 + packages from the new 4.1 repo and refreshed host capabilities.
4. Raised cluster and datacenter compatibility level to 4.1.
5. Restarted virtual machines and tested migration.
6. Profit! Everything went really smoothly. No errors.

Now trying to figure out how the sparsify function works. Do I need to run trimming from inside the VM first?

Best regards, Mikhail.

02.02.2017, 15:19, "Sandro Bonazzola" :

> Hi,
> did you install/update to 4.1.0? Let us know your experience!
> We end up knowing only when things don't work well; let us know if it works fine for you :-)
>
> If you're not planning an update to 4.1.0 in the near future, let us know why.
> Maybe we can help.
>
> Thanks!
> --
> Sandro Bonazzola
> Better technology. Faster innovation. Powered by community collaboration.
> See how it works at redhat.com

-- 
Best regards, Краснобаев Михаил.


[ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-02 Thread Sandro Bonazzola
Hi,
did you install/update to 4.1.0? Let us know your experience!
We end up knowing only when things don't work well; let us know if it works
fine for you :-)

If you're not planning an update to 4.1.0 in the near future, let us know
why.
Maybe we can help.

Thanks!
-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com