Re: [ovirt-users] Ovirt 4.1 testing backup and restore Self-hosted Engine

2017-08-25 Thread wodel youchi
Hi again,

I found this article:
https://keithtenzer.com/2017/05/02/rhev-4-1-lab-installation-and-configuration-guide/
I used its last section to delete the old hosted-engine storage domain, and it
worked: the minute I deleted the old hosted_storage domain, the system imported
the new one and then imported the new engine VM into the web admin portal.
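For anyone hitting the same wall, a minimal, untested sketch of forcing the stale
domain out of the engine database through the REST API; the engine FQDN, credentials
and UUID are placeholders, and the destroy flag is assumed to behave as in the v4 API
(drop the record from the database without touching the old LUN):

# hypothetical example - list domains and note the UUID of the stale hosted_storage entry
curl -ks -u admin@internal:PASSWORD "https://engine.example.com/ovirt-engine/api/storagedomains"

# then ask the engine to forget it (destroy = remove from the DB only, no format of the storage)
curl -ks -u admin@internal:PASSWORD -X DELETE \
  "https://engine.example.com/ovirt-engine/api/storagedomains/STALE_SD_UUID?destroy=true"

The web admin portal exposes the same operation as the "Destroy" action on a storage domain.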

Regards.

2017-08-25 23:15 GMT+01:00 wodel youchi :

> Hi again,
>
> I redid the test and re-read the Self-Hosted Engine documentation;
> there is a link to a Red Hat article,
> https://access.redhat.com/solutions/1517683, which explains how to remove
> the dead HostedEngine VM from the web admin portal.
>
> But the article does not talk about how to remove the old hosted engine
> storage, and this is what causes the problem.
>
> This storage domain is still pointing to the old iSCSI disk used by the dead
> Manager. It is down, but the new Manager cannot detach it, saying that
> the storage domain doesn't exist, which is correct; but how do I force the
> Manager to delete it? I have no idea. I tried to remove it with the REST API,
> without luck.
>
> I tried to import the new hosted_storage domain, but the system said the
> storage name is already in use. So I am stuck.
>
> Any idea? Do I have to delete it from the database? If yes, how?
>
> Regards.
>
> 2017-08-25 20:07 GMT+01:00 wodel youchi :
>
>> Hi,
>>
>> I was able to remove the HostedEngine VM, but I didn't succeed in removing
>> the old hosted-engine storage domain.
>> I tried several times to remove it, but I couldn't; the engine VM goes into
>> pause mode. All I could do was detach the hosted-engine domain from the
>> datacenter. I then put all the other data domains in maintenance mode, then
>> I reactivated my master data domain hoping that it would import the new
>> hosted-engine domain, but without luck.
>>
>> It seems like there is something missing in this procedure.
>>
>> Regards
>>
>> 2017-08-25 9:28 GMT+01:00 Alan Griffiths :
>>
>>> As I recall (a few weeks ago now) it was after restore, once the host
>>> had been registered in the Manager. However, I was testing on 4.0, so maybe
>>> the behaviour is slightly different in 4.1.
>>>
>>> Can you see anything in the Engine or vdsm logs as to why it won't
>>> remove the storage? Perhaps try removing the stale HostedEngine VM ?
>>>
>>> On 25 August 2017 at 09:14, wodel youchi  wrote:
>>>
 Hi and thanks,

 But when should I remove the hosted_engine storage? During the restore
 procedure or after? Because afterwards I couldn't do it; the Manager refused to
 put that storage domain in maintenance mode.

 Regards

 On 25 August 2017 at 08:49, "Alan Griffiths"  wrote:

> As I recall from my testing. If you remove the old hosted_storage
> domain then the new one should get automatically imported.
>
> On 24 August 2017 at 23:03, wodel youchi 
> wrote:
>
>> Hi,
>>
>> I am testing the backup and restore procedure of the Self-hosted
>> Engine, and I have a problem.
>>
>> This is how I did the test.
>>
>> I have two hosted-engine hypervisors. I am using an iSCSI disk for the
>> engine VM.
>>
>> I followed the procedure described in the Self-hosted Engine document
>> to execute the backup: I put the first host in maintenance mode, then I
>> created the backup and saved it elsewhere.
>>
>> Then I created a new iSCSI disk, reinstalled the first host with
>> the same IP/hostname, and followed the restore procedure to get the
>> Manager up and running again:
>> - hosted-engine --deploy
>> - do not execute engine-setup, restore backup first
>> - execute engine-setup
>> - remove the host from the manager
>> - synchronize the restored manager with the host
>> - finalize deployment.
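A rough sketch of the commands behind the backup/restore steps above, assuming the
stock engine-backup tool; exact flags may differ slightly between 4.0 and 4.1:

# on the old engine VM, while it is still alive:
engine-backup --mode=backup --file=engine-backup.tar.gz --log=backup.log

# on the freshly deployed engine VM, before engine-setup is run:
engine-backup --mode=restore --file=engine-backup.tar.gz --log=restore.log \
    --provision-db --restore-permissions
engine-setup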
>>
>> All went well up to this point, but I have a problem with the
>> engine VM: it is shown as down in the admin portal, and the ovirt-ha-agent
>> cannot retrieve the VM config from the shared storage.
>>
>> I think the problem is that the hosted-engine storage domain is
>> still pointing to the old disk of the old Manager and not the new one. I
>> don't know where this information is stored, in the DB or in the
>> Manager's config files, but when I open the hosted-engine domain in the
>> Manager, I can see the old LUN grayed out and the new one (which is used
>> by the restored Manager) is not.
>>
>> How can I fix this?
>>
>> Regards.
>>
>>
>>
>> 

Re: [ovirt-users] Ovirt 4.1 testing backup and restore Self-hosted Engine

2017-08-25 Thread wodel youchi
Hi again,

I redid the test and re-read the Self-Hosted Engine documentation;
there is a link to a Red Hat article,
https://access.redhat.com/solutions/1517683, which explains how to remove
the dead HostedEngine VM from the web admin portal.

But the article does not talk about how to remove the old hosted engine
storage, and this is what causes the problem.

This storage domain is still pointing to the old iSCSI disk used by the dead
Manager. It is down, but the new Manager cannot detach it, saying that
the storage domain doesn't exist, which is correct; but how do I force the
Manager to delete it? I have no idea. I tried to remove it with the REST API,
without luck.

I tried to import the new hosted_storage domain, but the system said the
storage name is already in use. So I am stuck.

Any idea? Do I have to delete it from the database? If yes, how?
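Purely as a hedged illustration of where this lives: the engine keeps its storage
domain records in PostgreSQL, and a read-only look is possible with the engine-psql.sh
helper (assuming the storage_domain_static table name; hand-editing rows is not
recommended, the REST API / the portal's Destroy action is the safer path):

/usr/share/ovirt-engine/dbscripts/engine-psql.sh \
  -c "select id, storage_name from storage_domain_static;"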

Regards.

2017-08-25 20:07 GMT+01:00 wodel youchi :

> Hi,
>
> I was able to remove the HostedEngine VM, but I didn't succeed in removing
> the old hosted-engine storage domain.
> I tried several times to remove it, but I couldn't; the engine VM goes into
> pause mode. All I could do was detach the hosted-engine domain from the
> datacenter. I then put all the other data domains in maintenance mode, then
> I reactivated my master data domain hoping that it would import the new
> hosted-engine domain, but without luck.
>
> It seems like there is something missing in this procedure.
>
> Regards
>
> 2017-08-25 9:28 GMT+01:00 Alan Griffiths :
>
>> As I recall (a few weeks ago now) it was after restore, once the host had
>> been registered in the Manager. However, I was testing on 4.0, so maybe the
>> behaviour is slightly different in 4.1.
>>
>> Can you see anything in the Engine or vdsm logs as to why it won't remove
>> the storage? Perhaps try removing the stale HostedEngine VM ?
>>
>> On 25 August 2017 at 09:14, wodel youchi  wrote:
>>
>>> Hi and thanks,
>>>
>>> But when should I remove the hosted_engine storage? During the restore
>>> procedure or after? Because afterwards I couldn't do it; the Manager refused to
>>> put that storage domain in maintenance mode.
>>>
>>> Regards
>>>
>>> On 25 August 2017 at 08:49, "Alan Griffiths"  wrote:
>>>
 As I recall from my testing. If you remove the old hosted_storage
 domain then the new one should get automatically imported.

 On 24 August 2017 at 23:03, wodel youchi 
 wrote:

> Hi,
>
> I am testing the backup and restore procedure of the Self-hosted
> Engine, and I have a problem.
>
> This is how I did the test.
>
> I have two hosted-engine hypervisors. I am using an iSCSI disk for the
> engine VM.
>
> I followed the procedure described in the Self-hosted Engine document
> to execute the backup: I put the first host in maintenance mode, then I
> created the backup and saved it elsewhere.
>
> Then I created a new iSCSI disk, reinstalled the first host with
> the same IP/hostname, and followed the restore procedure to get the
> Manager up and running again:
> - hosted-engine --deploy
> - do not execute engine-setup, restore backup first
> - execute engine-setup
> - remove the host from the manager
> - synchronize the restored manager with the host
> - finalize deployment.
>
> All went well up to this point, but I have a problem with the
> engine VM: it is shown as down in the admin portal, and the ovirt-ha-agent
> cannot retrieve the VM config from the shared storage.
>
> I think the problem is that the hosted-engine storage domain is still
> pointing to the old disk of the old Manager and not the new one. I don't
> know where this information is stored, in the DB or in the Manager's
> config files, but when I open the hosted-engine domain in the Manager, I can
> see the old LUN grayed out and the new one (which is used by the restored
> Manager) is not.
>
> How can I fix this?
>
> Regards.
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>

>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt 4.1 testing backup and restore Self-hosted Engine

2017-08-25 Thread wodel youchi
Hi,

I was able to remove the HostedEngine VM, but I didn't succeed in removing
the old hosted-engine storage domain.
I tried several times to remove it, but I couldn't; the engine VM goes into
pause mode. All I could do was detach the hosted-engine domain from the
datacenter. I then put all the other data domains in maintenance mode, then
I reactivated my master data domain hoping that it would import the new
hosted-engine domain, but without luck.

It seems like there is something missing in this procedure.

Regards

2017-08-25 9:28 GMT+01:00 Alan Griffiths :

> As I recall (a few weeks ago now) it was after restore, once the host had
> been registered in the Manager. However, I was testing on 4.0, so maybe the
> behaviour is slightly different in 4.1.
>
> Can you see anything in the Engine or vdsm logs as to why it won't remove
> the storage? Perhaps try removing the stale HostedEngine VM ?
>
> On 25 August 2017 at 09:14, wodel youchi  wrote:
>
>> Hi and thanks,
>>
>> But when should I remove the hosted_engine storage? During the restore
>> procedure or after? Because afterwards I couldn't do it; the Manager refused to
>> put that storage domain in maintenance mode.
>>
>> Regards
>>
>> On 25 August 2017 at 08:49, "Alan Griffiths"  wrote:
>>
>>> As I recall from my testing. If you remove the old hosted_storage domain
>>> then the new one should get automatically imported.
>>>
>>> On 24 August 2017 at 23:03, wodel youchi  wrote:
>>>
 Hi,

 I am testing the backup and restore procedure of the Self-hosted
 Engine, and I have a problem.

 This is how I did the test.

 I have two hosted-engine hypervisors. I am using an iSCSI disk for the
 engine VM.

 I followed the procedure described in the Self-hosted Engine document
 to execute the backup: I put the first host in maintenance mode, then I
 created the backup and saved it elsewhere.

 Then I created a new iSCSI disk, reinstalled the first host with
 the same IP/hostname, and followed the restore procedure to get the
 Manager up and running again:
 - hosted-engine --deploy
 - do not execute engine-setup, restore backup first
 - execute engine-setup
 - remove the host from the manager
 - synchronize the restored manager with the host
 - finalize deployment.

 All went well up to this point, but I have a problem with the engine VM:
 it is shown as down in the admin portal, and the ovirt-ha-agent cannot retrieve
 the VM config from the shared storage.

 I think the problem is that the hosted-engine storage domain is still
 pointing to the old disk of the old Manager and not the new one. I don't
 know where this information is stored, in the DB or in the Manager's
 config files, but when I open the hosted-engine domain in the Manager, I can
 see the old LUN grayed out and the new one (which is used by the restored
 Manager) is not.

 How can I fix this?

 Regards.


 

 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users


>>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt web interface events console sorting

2017-08-25 Thread Oved Ourfali
What version are you using?

On Aug 24, 2017 5:41 PM, "Misak Khachatryan"  wrote:

> Hello,
>
> my events started appear in reverse order lower part of web interface.
> Anybody have same issues?
>
>
> Best regards,
> Misak Khachatryan
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] host timeout for remote colo

2017-08-25 Thread Bill James
Thanks. We found that when we switched to a different internet provider,
the network issues went away.
I'll look into upgrading, but it's hard for us to keep up. We just got to
4.1.0 recently.  :-)



On 8/24/17 10:57 PM, Yaniv Kaul wrote:



On Thu, Aug 24, 2017 at 9:55 PM, Bill James wrote:


We have an ovirt master (engine) host in Los Angeles and some
remote servers in the UK.
Normally they work fine, but when there is a heavy load on the UK
servers the management engine has problems with heartbeat and ends
up trying to restart the nodes.


Perhaps the mgmt interface is used for traffic other than mgmt? On 
small scale it's OK. For bigger scale and workloads, it's best to 
separate traffic to dedicated NICs.



I saw in this thread that I can change vdsHeartbeatInSeconds
(https://www.mail-archive.com/users@ovirt.org/msg41695.html
)
but I don't really want to change it globally, just for the nodes
in UK.
Also not sure how to get the current setting of that value, only
how to change it. How do I tell the current value?  I heard the default is
30 seconds.


To change it:
/usr/share/ovirt-engine/dbscripts/engine-psql.sh -c "update vdc_options
set option_value = 90 where option_name = 'vdsHeartbeatInSeconds';"
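To read the current value before changing it, a similar read-only query through the
same helper script should work (sketch):

/usr/share/ovirt-engine/dbscripts/engine-psql.sh \
  -c "select option_name, option_value from vdc_options where option_name = 'vdsHeartbeatInSeconds';"

After updating vdc_options, an ovirt-engine restart is typically needed for the new
value to take effect.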



ovirt-engine-4.1.0.4-1.el7.centos.noarch


I recommend upgrade, though not specifically due to the above issue.


Or maybe it's not best practice to have a cluster that far from the
engine?


We have an Engine in Israel managing hosts in Europe and the US.
Y.



2017-08-24 11:27:51,921-07 WARN
[org.ovirt.engine.core.vdsbroker.VdsManager]
(DefaultQuartzScheduler3) [feefbf3f-d0e2-4a64-b008-80838d04f130]
Failed to refresh VDS, network error, continuing,
vds='ovirt1.evuk.j2noc.com'(d0482635-93fd-4cc3-9c78-523078845f11):
VDSGenericException: VDSNetworkException: Heartbeat exceeded

___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users






___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt install videos or blogs?

2017-08-25 Thread ovirt

Thanks for the link.

On 2017-08-25 04:40, Jakub Niedermertl wrote:

There is an official oVirt YouTube channel [1] full of "deep-dive" videos,
usually created by the authors of the described features themselves.

[1]: https://www.youtube.com/channel/UCYZ57Bi2QkmfRrJ0U5m72MQ

On Tue, Aug 22, 2017 at 9:03 PM,  wrote:


Topics to see:
1) Updates (if any) on the "oVirt + Gluster Storage" blog post
2) How to add more nodes, going from 3 nodes to 5 or 9
3) Intro concepts for newbies aka "oVirt for Dummies"

On 2017-08-22 11:32, Jason Brooks wrote:
On Tue, Aug 22, 2017 at 1:52 AM,   wrote:
Are there any other resources, blogs or install videos similar to
this? (see
link)


https://www.ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-gluster-storage/

[1]

What are some topics you'd like to see?

Jason

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users [2]

 ___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users [2]



Links:
--
[1]
https://www.ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-gluster-storage/
[2] http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Centos 7.3 ovirt 4.0.6 Can't add host to cluster collectd or collectd-disk not found

2017-08-25 Thread Sandro Bonazzola
e=1
> gpgcheck=0
>
> [centos-ovirt40-release]
> name=CentOS-7 - oVirt 4.0
> baseurl=http://mirror.centos.org/centos/7/virt/$basearch/ovirt-4.0/
> gpgcheck=0
> enabled=1
>
> excludepkgs=collectd*
>
> I also include the part of the host-deploy log on the manager where
> the error is logged
>
> [root@ovcmgr host-deploy]# more 
> ovirt-host-deploy-20170825092719-xx-53018ceb.log
> | grep collectd
> 2017-08-25 09:26:49 DEBUG otopi.context context.dumpSequence:744
> METHOD otopi.plugins.ovirt_host_deploy.collectd.packages.Plugin._packages
> (None)
> 2017-08-25 09:26:50 DEBUG otopi.context context.dumpSequence:744
> METHOD otopi.plugins.ovirt_host_deploy.collectd.packages.Plugin._packages
> (None)
> 2017-08-25 09:27:19 DEBUG otopi.context context._executeMethod:128 Stage
> packages METHOD otopi.plugins.ovirt_host_deploy.collectd.packages.
> Plugin._packages
> 2017-08-25 09:27:19 DEBUG otopi.plugins.otopi.packagers.yumpackager
> yumpackager.verbose:76 Yum queue package collectd for install/update
> 2017-08-25 09:27:19 ERROR otopi.plugins.otopi.packagers.yumpackager
> yumpackager.error:85 Yum Cannot queue package collectd: Package collectd
> cannot be found
>   File 
> "/tmp/ovirt-3rP0BGQm0o/otopi-plugins/ovirt-host-deploy/collectd/packages.py",
> line 53, in _packages
> 'collectd-write_http',
> RuntimeError: Package collectd cannot be found
> 2017-08-25 09:27:19 ERROR otopi.context context._executeMethod:151 Failed
> to execute stage 'Package installation': Package collectd cannot be found
> 2017-08-25 09:27:19 DEBUG otopi.context context.dumpEnvironment:770 ENV
> BASE/exceptionInfo=list:'[(<type 'exceptions.RuntimeError'>,
> RuntimeError('Package collectd cannot be found',), <traceback object at 0x3514ef0>)]'
> 2017-08-25 09:27:19 DEBUG otopi.context context.dumpEnvironment:770 ENV
> BASE/exceptionInfo=list:'[(<type 'exceptions.RuntimeError'>,
> RuntimeError('Package collectd cannot be found',), <traceback object at 0x3514ef0>)]'
>
> After this I see that there is an included package for epel-release, which
> will install the EPEL repository,
>
> so I installed the EPEL repository manually
>
> and added the excludepkgs line, but now the error is "Package collectd-disk
> cannot be found"
>
> this is the epel.repo modified
>
> [root@ovc2n05 yum.repos.d]# more /etc/yum.repos.d/epel.repo
>
> [epel]
> name=Extra Packages for Enterprise Linux 7 - $basearch
> #baseurl=http://download.fedoraproject.org/pub/epel/7/$basearch
> metalink=https://mirrors.fedoraproject.org/metalink?
> repo=epel-7&arch=$basearch
> failovermethod=priority
> enabled=1
>
> excludepkgs=collectd*
>
> gpgcheck=1
> gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
>
> [epel-debuginfo]
> name=Extra Packages for Enterprise Linux 7 - $basearch - Debug
> #baseurl=http://download.fedoraproject.org/pub/epel/7/$basearch/debug
> metalink=https://mirrors.fedoraproject.org/metalink?
> repo=epel-debug-7&arch=$basearch
> failovermethod=priority
> enabled=0
> gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
> gpgcheck=1
>
> [epel-source]
> name=Extra Packages for Enterprise Linux 7 - $basearch - Source
> #baseurl=http://download.fedoraproject.org/pub/epel/7/SRPMS
> metalink=https://mirrors.fedoraproject.org/metalink?
> repo=epel-source-7&arch=$basearch
> failovermethod=priority
> enabled=0
> gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
> gpgcheck=1
>
> the epel-testing.repo has all disabled
>
> This is the part of log on the manager
>
> [root@ovcmgr host-deploy]# more ovirt-host-deploy-20170825*  | grep
> collectd-disk
> 2017-08-25 10:36:23 DEBUG otopi.plugins.otopi.packagers.yumpackager
> yumpackager.verbose:76 Yum queue package collectd-disk for install/update
> 2017-08-25 10:36:23 ERROR otopi.plugins.otopi.packagers.yumpackager
> yumpackager.error:85 Yum Cannot queue package collectd-disk: Package
> collectd-disk cannot be found
> RuntimeError: Package collectd-disk cannot be found
> 2017-08-25 10:36:23 ERROR otopi.context context._executeMethod:151 Failed
> to execute stage 'Package installation': Package collectd-disk cannot be
> found
> 2017-08-25 10:36:23 DEBUG otopi.context context.dumpEnvironment:770 ENV
> BASE/exceptionInfo=list:'[(<type 'exceptions.RuntimeError'>,
> RuntimeError('Package collectd-disk cannot be found',), <traceback object at 0x592e290>)]'
> 2017-08-25 10:36:23 DEBUG otopi.context context.dumpEnvironment:770 ENV
> BASE/exceptionInfo=list:'[(<type 'exceptions.RuntimeError'>,
> RuntimeError('Package collectd-disk cannot be found',), <traceback object at 0x592e290>)]'
>
> I don't know what else to try.
>
> Any help would be accepted
>
> Claudio Soprano
>
> --
>

[ovirt-users] No SPM after network issue

2017-08-25 Thread Mahdi Adnan
Hi,

Our oVirt DC became unresponsive after a networking issue between the Engine, hosts,
and Gluster storage. After around 50 seconds the network issue was resolved, but I lost
the SPM.
sanlock log:

2017-08-24 16:00:05+0300 73290 [1127]: s14191 lockspace 
1b34ff4c-5d9d-44f5-a22e-6ca411865833:1:/rhev/data-center/mnt/glusterSD/192.168.209.195:_ovirt__imgs/1b34ff4c-5d9d-44f5-a22e-6ca411865833/dom_md/ids:0
2017-08-24 16:00:05+0300 73290 [1249]: 1b34ff4c aio collect RD 
0x7fa6f40008c0:0x7fa6f40008d0:0x7fa6f4101000 result -5:0 match res
2017-08-24 16:00:05+0300 73290 [1249]: read_sectors delta_leader offset 0 rv -5 
/rhev/data-center/mnt/glusterSD/192.168.209.195:_ovirt__imgs/1b34ff4c-5d9d-44f5-a22e-6ca411865833/dom_md/ids
2017-08-24 16:00:06+0300 73291 [1127]: s14191 add_lockspace fail result -5
2017-08-24 16:00:08+0300 73293 [12039]: s14192 lockspace 
1b34ff4c-5d9d-44f5-a22e-6ca411865833:1:/rhev/data-center/mnt/glusterSD/192.168.209.195:_ovirt__imgs/1b34ff4c-5d9d-44f5-a22e-6ca411865833/dom_md/ids:0
2017-08-24 16:00:08+0300 73293 [1367]: 1b34ff4c aio collect RD 
0x7fa6f40008c0:0x7fa6f40008d0:0x7fa6f4101000 result -5:0 match res
2017-08-24 16:00:08+0300 73293 [1367]: read_sectors delta_leader offset 0 rv -5 
/rhev/data-center/mnt/glusterSD/192.168.209.195:_ovirt__imgs/1b34ff4c-5d9d-44f5-a22e-6ca411865833/dom_md/ids
2017-08-24 16:00:09+0300 73294 [12039]: s14192 add_lockspace fail result -5


---

I can't read anything from the ids file; it gives me a read I/O error.
How can I recreate the ids file or reset sanlock without losing the whole DC?
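One approach that has been referenced for a corrupted ids file - an untested sketch,
assuming the domain can be deactivated first and nothing else is holding the lockspace -
is to re-initialise the delta-lease area with sanlock itself. The UUID below is taken
from the log lines above, <mount-point> is a placeholder for the gluster mount path,
literal ':' characters inside that path must be escaped as '\:' for sanlock, and this
should be checked against the oVirt/Red Hat knowledge base before touching production
storage:

sanlock direct init -s 1b34ff4c-5d9d-44f5-a22e-6ca411865833:0:<mount-point>/1b34ff4c-5d9d-44f5-a22e-6ca411865833/dom_md/ids:0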

Thanks.


--

Respectfully
Mahdi A. Mahdi

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt 4.1.5 Finally GlusterFS via lbgfapi

2017-08-25 Thread Ralf Schenk
Hello,

Progress: I finally tried to migrate the machine to other hosts in the
cluster. For one of them this worked!

See attached vdsm.log. The migration to host microcloud25 worked as
expected, and migrating back to the initial host microcloud22 also worked. The
other hosts (microcloud21, microcloud23, microcloud24) were not working at all
as migration targets.

Perhaps the working ones were the two that I rebooted after upgrading
all hosts to oVirt 4.1.5. I'll reboot another host and try again. Perhaps
some other daemon (libvirt/supervdsm or something else I'm not aware of)
has to be restarted.
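If a daemon restart rather than a full reboot turns out to be enough, a hedged sketch
would be to put the host into maintenance from the engine first and then restart the
virt stack on it:

systemctl restart libvirtd supervdsmd vdsmd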

Bye.

Am 25.08.2017 um 14:14 schrieb Ralf Schenk:
>
> Hello,
>
> setting DNS glusterfs.rxmgmt.databay.de to only one IP didn't change
> anything.
>
> [root@microcloud22 ~]# dig glusterfs.rxmgmt.databay.de
>
> ; <<>> DiG 9.9.4-RedHat-9.9.4-50.el7_3.1 <<>> glusterfs.rxmgmt.databay.de
> ;; global options: +cmd
> ;; Got answer:
> ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 35135
> ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 6
>
> ;; OPT PSEUDOSECTION:
> ; EDNS: version: 0, flags:; udp: 4096
> ;; QUESTION SECTION:
> ;glusterfs.rxmgmt.databay.de.   IN  A
>
> ;; ANSWER SECTION:
> glusterfs.rxmgmt.databay.de. 84600 IN   A   172.16.252.121
>
> ;; AUTHORITY SECTION:
> rxmgmt.databay.de.  84600   IN  NS  ns3.databay.de.
> rxmgmt.databay.de.  84600   IN  NS  ns.databay.de.
>
> vdsm.log still shows:
> 2017-08-25 14:02:38,476+0200 INFO  (periodic/0) [vdsm.api] FINISH
> repoStats return={u'7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96': {'code': 0,
> 'actual': True, 'version': 4, 'acquired': True, 'delay':
> '0.000295126', 'lastCheck': '0.8', 'valid': True},
> u'2b2a44fc-f2bd-47cd-b7af-00be59e30a35': {'code': 0, 'actual': True,
> 'version': 0, 'acquired': True, 'delay': '0.000611748', 'lastCheck':
> '3.6', 'valid': True}, u'5d99af76-33b5-47d8-99da-1f32413c7bb0':
> {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay':
> '0.000324379', 'lastCheck': '3.6', 'valid': True},
> u'a7fbaaad-7043-4391-9523-3bedcdc4fb0d': {'code': 0, 'actual': True,
> 'version': 0, 'acquired': True, 'delay': '0.000718626', 'lastCheck':
> '4.1', 'valid': True}} from=internal,
> task_id=ec205bf0-ff00-4fac-97f0-e6a7f3f99492 (api:52)
> 2017-08-25 14:02:38,584+0200 ERROR (migsrc/ffb71f79) [virt.vm]
> (vmId='ffb71f79-54cd-4f0e-b6b5-3670236cb497') failed to initialize
> gluster connection (src=0x7fd82001fc30 priv=0x7fd820003ac0): Success
> (migration:287)
> 2017-08-25 14:02:38,619+0200 ERROR (migsrc/ffb71f79) [virt.vm]
> (vmId='ffb71f79-54cd-4f0e-b6b5-3670236cb497') Failed to migrate
> (migration:429)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line
> 411, in run
>     self._startUnderlyingMigration(time.time())
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line
> 487, in _startUnderlyingMigration
>     self._perform_with_conv_schedule(duri, muri)
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line
> 563, in _perform_with_conv_schedule
>     self._perform_migration(duri, muri)
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line
> 529, in _perform_migration
>     self._vm._dom.migrateToURI3(duri, params, flags)
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line
> 69, in f
>     ret = attr(*args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py",
> line 123, in wrapper
>     ret = f(*args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 944, in
> wrapper
>     return func(inst, *args, **kwargs)
>   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1939, in
> migrateToURI3
>     if ret == -1: raise libvirtError ('virDomainMigrateToURI3()
> failed', dom=self)
> libvirtError: failed to initialize gluster connection
> (src=0x7fd82001fc30 priv=0x7fd820003ac0): Success
>
>
> One thing I noticed in destination vdsm.log:
> 2017-08-25 10:38:03,413+0200 ERROR (jsonrpc/7) [virt.vm]
> (vmId='ffb71f79-54cd-4f0e-b6b5-3670236cb497') *Alias not found for
> device type disk during migration at destination host (vm:4587)*
> 2017-08-25 10:38:03,478+0200 INFO  (jsonrpc/7) [root]  (hooks:108)
> 2017-08-25 10:38:03,492+0200 INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer]
> RPC call VM.migrationCreate succeeded in 0.51 seconds (__init__:539)
> 2017-08-25 10:38:03,669+0200 INFO  (jsonrpc/2) [vdsm.api] START
> destroy(gracefulAttempts=1) from=:::172.16.252.122,45736 (api:46)
> 2017-08-25 10:38:03,669+0200 INFO  (jsonrpc/2) [virt.vm]
> (vmId='ffb71f79-54cd-4f0e-b6b5-3670236cb497') Release VM resources
> (vm:4254)
> 2017-08-25 10:38:03,670+0200 INFO  (jsonrpc/2) [virt.vm]
> (vmId='ffb71f79-54cd-4f0e-b6b5-3670236cb497') Stopping connection
> (guestagent:430)
> 2017-08-25 10:38:03,671+0200 INFO  (jsonrpc/2) [vdsm.api] START
> teardownImage(sdUUID=u'5d99af76-33b5-47d8-99da-1f32413c7bb0',
> spUUID=u'0001-0001-0001-0001-00b9',
> img

Re: [ovirt-users] oVirt 4.1.5 Finally GlusterFS via lbgfapi

2017-08-25 Thread Ralf Schenk
Hello,

setting DNS glusterfs.rxmgmt.databay.de to only one IP didn't change
anything.

[root@microcloud22 ~]# dig glusterfs.rxmgmt.databay.de

; <<>> DiG 9.9.4-RedHat-9.9.4-50.el7_3.1 <<>> glusterfs.rxmgmt.databay.de
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 35135
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 6

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;glusterfs.rxmgmt.databay.de.   IN  A

;; ANSWER SECTION:
glusterfs.rxmgmt.databay.de. 84600 IN   A   172.16.252.121

;; AUTHORITY SECTION:
rxmgmt.databay.de.  84600   IN  NS  ns3.databay.de.
rxmgmt.databay.de.  84600   IN  NS  ns.databay.de.

vdsm.log still shows:
2017-08-25 14:02:38,476+0200 INFO  (periodic/0) [vdsm.api] FINISH
repoStats return={u'7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96': {'code': 0,
'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000295126',
'lastCheck': '0.8', 'valid': True},
u'2b2a44fc-f2bd-47cd-b7af-00be59e30a35': {'code': 0, 'actual': True,
'version': 0, 'acquired': True, 'delay': '0.000611748', 'lastCheck':
'3.6', 'valid': True}, u'5d99af76-33b5-47d8-99da-1f32413c7bb0': {'code':
0, 'actual': True, 'version': 4, 'acquired': True, 'delay':
'0.000324379', 'lastCheck': '3.6', 'valid': True},
u'a7fbaaad-7043-4391-9523-3bedcdc4fb0d': {'code': 0, 'actual': True,
'version': 0, 'acquired': True, 'delay': '0.000718626', 'lastCheck':
'4.1', 'valid': True}} from=internal,
task_id=ec205bf0-ff00-4fac-97f0-e6a7f3f99492 (api:52)
2017-08-25 14:02:38,584+0200 ERROR (migsrc/ffb71f79) [virt.vm]
(vmId='ffb71f79-54cd-4f0e-b6b5-3670236cb497') failed to initialize
gluster connection (src=0x7fd82001fc30 priv=0x7fd820003ac0): Success
(migration:287)
2017-08-25 14:02:38,619+0200 ERROR (migsrc/ffb71f79) [virt.vm]
(vmId='ffb71f79-54cd-4f0e-b6b5-3670236cb497') Failed to migrate
(migration:429)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line
411, in run
    self._startUnderlyingMigration(time.time())
  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line
487, in _startUnderlyingMigration
    self._perform_with_conv_schedule(duri, muri)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line
563, in _perform_with_conv_schedule
    self._perform_migration(duri, muri)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line
529, in _perform_migration
    self._vm._dom.migrateToURI3(duri, params, flags)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line
69, in f
    ret = attr(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py",
line 123, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 944, in
wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1939, in
migrateToURI3
    if ret == -1: raise libvirtError ('virDomainMigrateToURI3() failed',
dom=self)
libvirtError: failed to initialize gluster connection
(src=0x7fd82001fc30 priv=0x7fd820003ac0): Success


One thing I noticed in destination vdsm.log:
2017-08-25 10:38:03,413+0200 ERROR (jsonrpc/7) [virt.vm]
(vmId='ffb71f79-54cd-4f0e-b6b5-3670236cb497') *Alias not found for
device type disk during migration at destination host (vm:4587)*
2017-08-25 10:38:03,478+0200 INFO  (jsonrpc/7) [root]  (hooks:108)
2017-08-25 10:38:03,492+0200 INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer]
RPC call VM.migrationCreate succeeded in 0.51 seconds (__init__:539)
2017-08-25 10:38:03,669+0200 INFO  (jsonrpc/2) [vdsm.api] START
destroy(gracefulAttempts=1) from=:::172.16.252.122,45736 (api:46)
2017-08-25 10:38:03,669+0200 INFO  (jsonrpc/2) [virt.vm]
(vmId='ffb71f79-54cd-4f0e-b6b5-3670236cb497') Release VM resources (vm:4254)
2017-08-25 10:38:03,670+0200 INFO  (jsonrpc/2) [virt.vm]
(vmId='ffb71f79-54cd-4f0e-b6b5-3670236cb497') Stopping connection
(guestagent:430)
2017-08-25 10:38:03,671+0200 INFO  (jsonrpc/2) [vdsm.api] START
teardownImage(sdUUID=u'5d99af76-33b5-47d8-99da-1f32413c7bb0',
spUUID=u'0001-0001-0001-0001-00b9',
imgUUID=u'9c007b27-0ab7-4474-9317-a294fd04c65f', volUUID=None)
from=:::172.16.252.122,45736,
task_id=4878dd0c-54e9-4bef-9ec7-446b110c9d8b (api:46)
2017-08-25 10:38:03,671+0200 INFO  (jsonrpc/2) [vdsm.api] FINISH
teardownImage return=None from=:::172.16.252.122,45736,
task_id=4878dd0c-54e9-4bef-9ec7-446b110c9d8b (api:52)
2017-08-25 10:38:03,672+0200 INFO  (jsonrpc/2) [virt.vm]
(vmId='ffb71f79-54cd-4f0e-b6b5-3670236cb497') Stopping connection
(guestagent:430)




Am 25.08.2017 um 14:03 schrieb Denis Chaplygin:
> Hello!
>
> On Fri, Aug 25, 2017 at 1:40 PM, Ralf Schenk  > wrote:
>
> Hello,
>
> I'm using the DNS Balancing gluster hostname for years now, not
> only with ovirt. No software so far had a problem. And setting the
> hostname to only one Host of course

Re: [ovirt-users] Bring back "Migrate to other Cluster" feature in GUI

2017-08-25 Thread Matt .
Hi Michal,

Thanks for your reply.

I know it was there because of the migration between EL6 and EL7, but
you can also simply "park" VMs on different clusters, which can be very
handy when you need to move between clusters or really migrate a cluster.

I never had issues with differences between clusters; the GUI always warned
me when I accidentally selected the wrong cluster. As
that is a human mistake, I think you cannot blame such a great feature
for admins that don't have their administration in the right shape and
rely on everything that's in "their portal".

Maybe we need to start a wide vote to see who thinks it's usable?

Cheers,

Matt

2017-08-25 13:29 GMT+02:00 Michal Skrivanek :
>
>> On 25 Aug 2017, at 13:20, Matt .  wrote:
>>
>> Hi Guys,
>>
>> As known the feature to Migrate to another cluster is moved from the
>> GUI but available in the API.
>>
>> Is there a possibility to bring it back in the GUI or make it an
>> option we can enable ? When provisioning servers it's nice to migrate
>> to another cluster when you per accident provisioned to the wrong
>> cluster.
>>
>> As this was a real feature, why remove improvements ?
>
> Hi Matt,
> unfortunately it was a very frequent cause of mistakes, which led us to the
> decision to hide it more. Such a change for a running VM cannot be supported
> reliably unless you manually make sure that all the settings match exactly on
> both clusters. These checks are feasible when the VM is Down, which is still 
> allowed.
> The problems originating from incorrect usage are simply not worth it as they 
> are difficult to diagnose, and sometimes quite obscure. This feature was 
> really only meant for EL 6 to EL 7 transition - which is over for 3 releases 
> now - as a workaround to the then-nonexistent InClusterMigration policy
>
> We will still keep it in API, though I’d discourage you from using it. If you 
> need to correct the provisioned cluster you better do it the supported way of 
> Edit VM and power cycling the guest, that’s the only way we can effectively 
> test it and guarantee the functionality.
>
> Thanks,
> michal
>>
>> I think many of us will appreciate to have it back.
>>
>> Thanks!
>>
>> Matt
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt 4.1.5 Finally GlusterFS via lbgfapi

2017-08-25 Thread Denis Chaplygin
Hello!

On Fri, Aug 25, 2017 at 1:40 PM, Ralf Schenk  wrote:

> Hello,
>
> I'm using the DNS Balancing gluster hostname for years now, not only with
> ovirt. No software so far had a problem. And setting the hostname to only
> one Host of course breaks one advantage of a distributed/replicated Cluster
> File-System like loadbalancing the connections to the storage and/or
> failover if one host is missing. In earlier ovirt it wasn't possible to
> specify something like "backupvolfile-server" for a High-Available
> hosted-engine rollout (which I use).
>

As far as I know, backup-volfile-servers is the recommended way to keep your
filesystem mountable in case of server failure. While the fs is mounted,
gluster will automatically provide failover. And you definitely can
specify backup-volfile-servers in the storage domain configuration.
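As an illustration, the extra servers are usually passed through the gluster storage
domain's "Mount Options" field; the IPs here are just borrowed from this thread, and the
exact syntax follows the gluster FUSE mount option:

backup-volfile-servers=172.16.252.122:172.16.252.123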


> I already used live migration in such a setup. This was done with pure
> libvirt setup/virsh and later using OpenNebula.
>
>
Yes, but it was based on accessing the gluster volume as a mounted filesystem,
not directly... And I would like to exclude that from the list of possible
causes.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt install videos or blogs?

2017-08-25 Thread Jakub Niedermertl
There is official oVirt youtube channel [1] full of "deep-dive" videos
usually created by authors of described features themselves.

[1]: https://www.youtube.com/channel/UCYZ57Bi2QkmfRrJ0U5m72MQ

On Tue, Aug 22, 2017 at 9:03 PM,  wrote:

> Topics to see:
> 1) Updates (if any) on the "oVirt + Gluster Storage" blog post
> 2) How to add more nodes, going from 3 nodes to 5 or 9
> 3) Intro concepts for newbies aka "oVirt for Dummies"
>
>
> On 2017-08-22 11:32, Jason Brooks wrote:
>
>> On Tue, Aug 22, 2017 at 1:52 AM,   wrote:
>>
>>> Are there any other resources, blogs or install videos similar to this?
>>> (see
>>> link)
>>> https://www.ovirt.org/blog/2017/04/up-and-running-with-ovirt
>>> -4.1-and-gluster-storage/
>>>
>>
>> What are some topics you'd like to see?
>>
>> Jason
>>
>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt 4.1.5 Finally GlusterFS via lbgfapi

2017-08-25 Thread Ralf Schenk
Hello,

I've been using the DNS-balanced gluster hostname for years now, not only
with oVirt. No software so far has had a problem with it. And setting the
hostname to only one host of course breaks one advantage of a
distributed/replicated cluster file system, namely load-balancing the
connections to the storage and/or failover if one host is missing. In
earlier oVirt it wasn't possible to specify something like
"backupvolfile-server" for a highly available hosted-engine rollout (which
I use).

I already used live migration in such a setup. This was done with pure
libvirt setup/virsh and later using OpenNebula.

Bye



Am 25.08.2017 um 13:11 schrieb Denis Chaplygin:
> Hello!
>
> On Fri, Aug 25, 2017 at 11:05 AM, Ralf Schenk  > wrote:
>
>
> I replayed migration (10:38:02 local time) and recorded vdsm.log
> of source and destination as attached. I can't find anything in
> the gluster logs that shows an error. One information: my FQDN
> glusterfs.rxmgmt.databay.de 
> points to all the gluster hosts:
>
> glusterfs.rxmgmt.databay.de. 84600 IN   A   172.16.252.121
> glusterfs.rxmgmt.databay.de. 84600 IN   A   172.16.252.125
> glusterfs.rxmgmt.databay.de. 84600 IN   A   172.16.252.127
> glusterfs.rxmgmt.databay.de. 84600 IN   A   172.16.252.122
> glusterfs.rxmgmt.databay.de. 84600 IN   A   172.16.252.124
> glusterfs.rxmgmt.databay.de. 84600 IN   A   172.16.252.123
> glusterfs.rxmgmt.databay.de. 84600 IN   A   172.16.252.126
> glusterfs.rxmgmt.databay.de. 84600 IN   A   172.16.252.128
>
> I double checked all gluster hosts. They all are configured the
> same regarding "option rpc-auth-allow-insecure on" No iptables
> rules on the host.
>
>
> Do you use 'glusterfs.rxmgmt.databay.de' as a storage domain host name?
> I'm not a gluster guru, but i'm afraid that some internal gluster
> client code may go crazy, when it receives different address or
> several ip addresses every time. Is it possible to try with separate
> names? You can create a storage domain using 172.16.252.121 for
> example and it should work bypassing your DNS. If it is possible to
> make that, could you please do that and retry live migration?

-- 


*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *r...@databay.de* 
    
*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* 

Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Wilhelm Dohmen


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Bring back "Migrate to other Cluster" feature in GUI

2017-08-25 Thread Michal Skrivanek

> On 25 Aug 2017, at 13:20, Matt .  wrote:
> 
> Hi Guys,
> 
> As known the feature to Migrate to another cluster is moved from the
> GUI but available in the API.
> 
> Is there a possibility to bring it back in the GUI or make it an
> option we can enable ? When provisioning servers it's nice to migrate
> to another cluster when you per accident provisioned to the wrong
> cluster.
> 
> As this was a real feature, why remove improvements ?

Hi Matt,
unfortunately it was a very frequent cause of mistakes, which led us to the
decision to hide it more. Such a change for a running VM cannot be supported
reliably unless you manually make sure that all the settings match exactly on 
both clusters. These checks are feasible when the VM is Down, which is still 
allowed.
The problems originating from incorrect usage are simply not worth it as they 
are difficult to diagnose, and sometimes quite obscure. This feature was really 
only meant for EL 6 to EL 7 transition - which is over for 3 releases now - as 
a workaround to the then-nonexistent InClusterMigration policy

We will still keep it in API, though I’d discourage you from using it. If you 
need to correct the provisioned cluster you better do it the supported way of 
Edit VM and power cycling the guest, that’s the only way we can effectively 
test it and guarantee the functionality.
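For completeness, a rough sketch of that supported path through the REST API - update
the cluster while the VM is Down, then start it again; the engine FQDN, credentials,
cluster name and VM UUID are placeholders:

curl -ks -u admin@internal:PASSWORD -X PUT \
  -H "Content-Type: application/xml" \
  -d '<vm><cluster><name>TargetCluster</name></cluster></vm>' \
  "https://engine.example.com/ovirt-engine/api/vms/VM_UUID"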

Thanks,
michal
> 
> I think many of us will appreciate to have it back.
> 
> Thanks!
> 
> Matt
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 
> 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Bring back "Migrate to other Cluster" feature in GUI

2017-08-25 Thread Matt .
Hi Guys,

As known the feature to Migrate to another cluster is moved from the
GUI but available in the API.

Is there a possibility to bring it back in the GUI or make it an
option we can enable ? When provisioning servers it's nice to migrate
to another cluster when you per accident provisioned to the wrong
cluster.

As this was a real feature, why remove improvements ?

I think many of us will appreciate to have it back.

Thanks!

Matt
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt 4.1.5 Finally GlusterFS via lbgfapi

2017-08-25 Thread Denis Chaplygin
Hello!

On Fri, Aug 25, 2017 at 11:05 AM, Ralf Schenk  wrote:

>
> I replayed migration (10:38:02 local time) and recorded vdsm.log of source
> and destination as attached. I can't find anything in the gluster logs that
> shows an error. One information: my FQDN glusterfs.rxmgmt.databay.de
> points to all the gluster hosts:
>
> glusterfs.rxmgmt.databay.de. 84600 IN   A   172.16.252.121
> glusterfs.rxmgmt.databay.de. 84600 IN   A   172.16.252.125
> glusterfs.rxmgmt.databay.de. 84600 IN   A   172.16.252.127
> glusterfs.rxmgmt.databay.de. 84600 IN   A   172.16.252.122
> glusterfs.rxmgmt.databay.de. 84600 IN   A   172.16.252.124
> glusterfs.rxmgmt.databay.de. 84600 IN   A   172.16.252.123
> glusterfs.rxmgmt.databay.de. 84600 IN   A   172.16.252.126
> glusterfs.rxmgmt.databay.de. 84600 IN   A   172.16.252.128
>
> I double checked all gluster hosts. They all are configured the same
> regarding "option rpc-auth-allow-insecure on" No iptables rules on the
> host.
>

Do you use 'glusterfs.rxmgmt.databay.de' as a storage domain host name? I'm
not a gluster guru, but I'm afraid that some internal gluster client code
may go crazy when it receives a different address or several IP addresses
every time. Is it possible to try with separate names? You can create a
storage domain using 172.16.252.121 for example and it should work
bypassing your DNS. If it is possible to make that, could you please do
that and retry live migration?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirtmgmt, webinterfaces and VLANs

2017-08-25 Thread Barak Korren
Barak Korren
bkor...@redhat.com
RHCE, RHCi, RHV-DevOps Team
https://ifireball.wordpress.com/

On 25 Aug 2017 at 01:05 PM, "Alexis HAUSER" <
alexis.hau...@imt-atlantique.fr> wrote:

Using self-hosted engine.
I thought about using several interfaces on the engine VM.

The reason why I want to do that: I would like the users accessing the web
interface not to be on the same network that oVirt is using to communicate
between hosts and engine.
But it would mean that two different FQDNs are necessary, right? I heard HA
requires access to the engine FQDN...

Do you have any idea how to solve this situation ?


AFAIK the main issue would be with the SSL certificate for the UI/API. But
you can add more FQDNs to it during the installation.

I'm not an HA expert but I think it would probably only need access to the
engine port that is connected to the ovirtmgmt network.
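One mechanism commonly used for the extra name - hedged, please check the docs for your
version - is the SSO alternate-FQDN setting, with the certificate SAN handled separately
at install time:

# hypothetical file name; any *.conf under engine.conf.d should be picked up
# /etc/ovirt-engine/engine.conf.d/99-alternate-fqdns.conf
SSO_ALTERNATE_ENGINE_FQDNS="ovirt-users.example.org"
# then: systemctl restart ovirt-engine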
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirtmgmt, webinterfaces and VLANs

2017-08-25 Thread Alexis HAUSER
Using self-hosted engine. 
I thought about using several interfaces on the engine VM. 

The reason why I want to do that: I would like the users accessing the web
interface not to be on the same network that oVirt is using to communicate
between hosts and engine.
But it would mean that two different FQDNs are necessary, right? I heard HA
requires access to the engine FQDN...

Do you have any idea how to solve this situation ? 


Alexis 




On 24 August 2017 at 15:39, Alexis HAUSER 
 wrote: 
> 
> In the way Ovirt is currently designed, is there a way to separate the 
> following elements in different VLANs : 
> 
> 1) Communication betweem nodes (hypervisors) and engine (manager) 
> 2) Access to webadmin interface 
> 3) access to user web interface 
> 
> It seems that the following elements all rely on ovirtmgmt, right ? 

Only #1. #2 and #3 could be changed AFAIK, depending on where and how 
you run the engine (e.g. if you run it on a separate host, you 
could attach other interfaces with other VLANs to it). 


-- 
Barak Korren 
RHV DevOps team , RHCE, RHCi 
Red Hat EMEA 
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Centos 7.3 ovirt 4.0.6 Can't add host to cluster collectd or collectd-disk not found

2017-08-25 Thread Claudio Soprano
er.error:85 Yum Cannot queue package collectd: Package collectd 
cannot be found
  File 
"/tmp/ovirt-3rP0BGQm0o/otopi-plugins/ovirt-host-deploy/collectd/packages.py", 
line 53, in _packages

'collectd-write_http',
RuntimeError: Package collectd cannot be found
2017-08-25 09:27:19 ERROR otopi.context context._executeMethod:151 
Failed to execute stage 'Package installation': Package collectd cannot 
be found
2017-08-25 09:27:19 DEBUG otopi.context context.dumpEnvironment:770 ENV 
BASE/exceptionInfo=list:'[(<type 'exceptions.RuntimeError'>,
RuntimeError('Package collectd cannot be found',), <traceback object at 0x3514ef0>)]'
2017-08-25 09:27:19 DEBUG otopi.context context.dumpEnvironment:770 ENV
BASE/exceptionInfo=list:'[(<type 'exceptions.RuntimeError'>,
RuntimeError('Package collectd cannot be found',), <traceback object at 0x3514ef0>)]'


After this I see that there is an included package for epel-release,
which will install the EPEL repository,


so I installed the EPEL repository manually

and added the excludepkgs line, but now the error is "Package collectd-disk
cannot be found"


This is the modified epel.repo:

[root@ovc2n05 yum.repos.d]# more /etc/yum.repos.d/epel.repo

[epel]
name=Extra Packages for Enterprise Linux 7 - $basearch
#baseurl=http://download.fedoraproject.org/pub/epel/7/$basearch
metalink=https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=$basearch
failovermethod=priority
enabled=1

excludepkgs=collectd*

gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7

[epel-debuginfo]
name=Extra Packages for Enterprise Linux 7 - $basearch - Debug
#baseurl=http://download.fedoraproject.org/pub/epel/7/$basearch/debug
metalink=https://mirrors.fedoraproject.org/metalink?repo=epel-debug-7&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=1

[epel-source]
name=Extra Packages for Enterprise Linux 7 - $basearch - Source
#baseurl=http://download.fedoraproject.org/pub/epel/7/SRPMS
metalink=https://mirrors.fedoraproject.org/metalink?repo=epel-source-7&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=1

the epel-testing.repo has all disabled

This is the part of log on the manager

[root@ovcmgr host-deploy]# more ovirt-host-deploy-20170825*  | grep 
collectd-disk
2017-08-25 10:36:23 DEBUG otopi.plugins.otopi.packagers.yumpackager 
yumpackager.verbose:76 Yum queue package collectd-disk for install/update
2017-08-25 10:36:23 ERROR otopi.plugins.otopi.packagers.yumpackager 
yumpackager.error:85 Yum Cannot queue package collectd-disk: Package 
collectd-disk cannot be found

RuntimeError: Package collectd-disk cannot be found
2017-08-25 10:36:23 ERROR otopi.context context._executeMethod:151 
Failed to execute stage 'Package installation': Package collectd-disk 
cannot be found
2017-08-25 10:36:23 DEBUG otopi.context context.dumpEnvironment:770 ENV 
BASE/exceptionInfo=list:'[(<type 'exceptions.RuntimeError'>,
RuntimeError('Package collectd-disk cannot be found',), <traceback object at 0x592e290>)]'
2017-08-25 10:36:23 DEBUG otopi.context context.dumpEnvironment:770 ENV
BASE/exceptionInfo=list:'[(<type 'exceptions.RuntimeError'>,
RuntimeError('Package collectd-disk cannot be found',), <traceback object at 0x592e290>)]'


I don't know what else to try.
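A couple of generic, read-only yum queries - nothing oVirt-specific - can show whether
any enabled repository still carries the packages once the excludepkgs lines are
ignored, and which repository they would come from:

yum --disableexcludes=all --showduplicates list collectd collectd-disk collectd-write_http
yum --disableexcludes=all provides collectd-disk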

Any help would be accepted

Claudio Soprano

--

   /|/   _/   /|/   _/|/
  /   / |   /   //   / |   /   // |   /
 /   /  |  /   ___/   _//   /  |  /   ___/ /  |  /
/   /   | /   //   /   | /   //   | /
  __/ _/   __/  _/   _/  _/   __/  _/   _/   __/

Claudio Soprano          phone:  (+39)-06-9403.2349/2355
Computing Service  fax:(+39)-06-9403.2649
LNF-INFN   e-mail: claudio.sopr...@lnf.infn.it
Via Enrico Fermi, 40   www:http://www.lnf.infn.it/
I-00044 Frascati, Italy

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt 4.1 testing backup and restore Self-hosted Engine

2017-08-25 Thread Alan Griffiths
As I recall (a few weeks ago now) it was after restore, once the host had
been registered in the Manager. However, I was testing on 4.0, so maybe the
behaviour is slightly different in 4.1.

Can you see anything in the Engine or vdsm logs as to why it won't remove
the storage? Perhaps try removing the stale HostedEngine VM ?

On 25 August 2017 at 09:14, wodel youchi  wrote:

> Hi and thanks,
>
> But when should I remove the hosted_engine storage? During the restore
> procedure or after? Because afterwards I couldn't do it; the Manager refused to
> put that storage domain in maintenance mode.
>
> Regards
>
> On 25 August 2017 at 08:49, "Alan Griffiths"  wrote:
>
>> As I recall from my testing. If you remove the old hosted_storage domain
>> then the new one should get automatically imported.
>>
>> On 24 August 2017 at 23:03, wodel youchi  wrote:
>>
>>> Hi,
>>>
>>> I am testing the backup and restore procedure of the Self-hosted Engine,
>>> and I have a problem.
>>>
>>> This is how I did the test.
>>>
>>> I have two hosted-engine hypervisors. I am using an iSCSI disk for the
>>> engine VM.
>>>
>>> I followed the procedure described in the Self-hosted Engine document to
>>> execute the backup: I put the first host in maintenance mode, then I created
>>> the backup and saved it elsewhere.
>>>
>>> Then I created a new iSCSI disk, reinstalled the first host with the
>>> same IP/hostname, and followed the restore procedure to get the Manager
>>> up and running again:
>>> - hosted-engine --deploy
>>> - do not execute engine-setup, restore backup first
>>> - execute engine-setup
>>> - remove the host from the manager
>>> - synchronize the restored manager with the host
>>> - finalize deployment.
>>>
>>> All went well up to this point, but I have a problem with the engine VM:
>>> it is shown as down in the admin portal, and the ovirt-ha-agent cannot retrieve
>>> the VM config from the shared storage.
>>>
>>> I think the problem is that the hosted-engine storage domain is still
>>> pointing to the old disk of the old Manager and not the new one. I don't
>>> know where this information is stored, in the DB or in the Manager's
>>> config files, but when I open the hosted-engine domain in the Manager, I can
>>> see the old LUN grayed out and the new one (which is used by the restored
>>> Manager) is not.
>>>
>>> How can I fix this?
>>>
>>> Regards.
>>>
>>>
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users