Re: [ovirt-users] Ovirt Hosted-Engine not installing ERROR: 'OVEHOSTED_NETWORK/host_name'

2016-07-27 Thread Florian Nolden
Hi Andy,
I have two interfaces bonded as bond0 for the public network and two interfaces
bonded as bond1 for Gluster. Bond0 is used for the ovirtmgmt bridge. I
currently have no VLANs set up.



On 27 July 2016 at 21:35, Andy <farkey_2...@yahoo.com> wrote:
>Florian,
>I am having the exact same problem that you are having. Are the server
>NICs bonded with a management VLAN?
>thanks  
>
>   
>
>On Wednesday, July 27, 2016 1:42 PM, Florian Nolden
><f.nol...@xilloc.com> wrote:
> 
>
> Hello, 
>I am trying to install oVirt 4.0.1-1 on a freshly installed CentOS 7.2 using a
>replica 3 GlusterFS, but I am having trouble deploying the hosted engine.
>hosted-engine --deploy 
>/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py:15:
>DeprecationWarning: vdscli uses xmlrpc. since ovirt 3.6 xmlrpc is
>deprecated, please use vdsm.jsonrpcvdscli  import vdsm.vdscli
>[ ERROR ] Failed to execute stage 'Environment customization':
>'OVEHOSTED_NETWORK/host_name'
>
>
>VDSM also did not create the ovirtmgmt bridge or the routing tables.
>I used the CentOS 7 minimal install and selected "Infrastructure Server". I
>added the Puppet 4 repo and the oVirt 4.0 repo, no EPEL. I can
>reproduce it on three similarly installed servers.
>Any Ideas?
>
>


Re: [ovirt-users] Ovirt Hosted-Engine not installing ERROR: 'OVEHOSTED_NETWORK/host_name'

2016-07-28 Thread Florian Nolden
I'm using the oVirt 4.0 release repo:

http://resources.ovirt.org/pub/yum-repo/ovirt-release40.rpm

But the ovirt-4.0-dependencies.repo contains:

[centos-ovirt40-candidate]
name=CentOS-7 - oVirt 4.0
baseurl=http://cbs.centos.org/repos/virt7-ovirt-40-candidate/x86_64/os/
gpgcheck=0
enabled=1

I believe that shouldn't be there, should it?
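If that candidate repo indeed shouldn't be enabled, it can be switched off without editing the file by hand; a quick sketch (the repo id is the one from the snippet above, and yum-config-manager comes from yum-utils):

  # disable the candidate repo
  yum-config-manager --disable centos-ovirt40-candidate
  # alternatively, set enabled=0 in /etc/yum.repos.d/ovirt-4.0-dependencies.repo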

2016-07-28 10:17 GMT+02:00 Simone Tiraboschi <stira...@redhat.com>:

> On Thu, Jul 28, 2016 at 9:22 AM, Simone Tiraboschi <stira...@redhat.com>
> wrote:
> > On Thu, Jul 28, 2016 at 7:50 AM, Yedidyah Bar David <d...@redhat.com>
> wrote:
> >> On Wed, Jul 27, 2016 at 8:42 PM, Florian Nolden <f.nol...@xilloc.com>
> wrote:
> >>> Hello,
> >>>
> >>> I try to install Ovirt 4.0.1-1 on a fresh installed CentOS 7.2 using a
>
> Another thing: both the buggy version (2.0.1.2) and the fixed one
> (2.0.1.3) are available only in the 4.0.2 Second Release Candidate
> repo, which has not yet reached GA status.
> The latest release is oVirt 4.0.1, so you may also be using the
> wrong repo if you want that.
>
> >>> replica 3 glusterfs. But I have trouble deploying the hosted engine.
> >>>
> >>> hosted-engine --deploy
> >>>
> >>>
> /usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py:15:
> >>> DeprecationWarning: vdscli uses xmlrpc. since ovirt 3.6 xmlrpc is
> >>> deprecated, please use vdsm.jsonrpcvdscli
> >>>   import vdsm.vdscli
> >>>
> >>> [ ERROR ] Failed to execute stage 'Environment customization':
> >>> 'OVEHOSTED_NETWORK/host_name'
> >
> > The issue was caused by this patch:
> >  https://gerrit.ovirt.org/#/c/61078/
> > Yesterday we reverted it and built a new version (2.0.1.3) of
> > hosted-engine-setup without it.
> > It's already available:
> >
> http://resources.ovirt.org/pub/ovirt-4.0-pre/rpm/el7/noarch/ovirt-hosted-engine-setup-2.0.1.3-1.el7.centos.noarch.rpm
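As a rough sketch (not part of the original mail), one could check which build is installed and, if it is still the buggy 2.0.1.2, install the fixed package straight from the URL above:

  rpm -q ovirt-hosted-engine-setup
  yum install http://resources.ovirt.org/pub/ovirt-4.0-pre/rpm/el7/noarch/ovirt-hosted-engine-setup-2.0.1.3-1.el7.centos.noarch.rpm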
> >
> >>> VDSM also did not create the ovirtmgmt bridge or the routing tables.
> >>>
> >>> I used the CentOS 7 minimal, and selected Infrastructure Server. I
> added the
> >>> Puppet 4 repo and the Ovirt 4.0 Repo, no EPEL.
> >>> I can reproduce it on 3 similar installed servers.
> >>>
> >>> Any Ideas?
> >>
> >> Please share the setup log. Thanks.
> >>
> >> Best,
> >> --
> >> Didi
>


[ovirt-users] hosted_engine on gluster storage for all the same fqdn?

2016-08-04 Thread Florian Nolden
Hi,

I am trying to install the hosted engine on 3 servers which also host the Gluster
data storage.
Which storage path should I use when I install the hosted engine?

Setup 1:
server1: server1.san:/hosted_engine
server2: server1.san:/hosted_engine
server3: server1.san:/hosted_engine

Setup 2:
server1: localhost:/hosted_engine
server2: localhost:/hosted_engine
server3: localhost:/hosted_engine

Setup 3:
/etc/hosts: 127.0.0.1 gluster
server1: gluster:/hosted_engine
server2: gluster:/hosted_engine
server3: gluster:/hosted_engine

Or another setup? What is the best practice here?


Re: [ovirt-users] hosted_engine on gluster storage for all the same fqdn?

2016-08-04 Thread Florian Nolden
Thanks a lot, Sahina,

I wouldn't have come up with that solution. It would be nice to have that info
in the wiki as well:

https://www.ovirt.org/develop/release-management/features/engine/self-hosted-engine/

2016-08-04 17:36 GMT+02:00 Sahina Bose <sab...@redhat.com>:

>
>
> On Thu, Aug 4, 2016 at 8:13 PM, Florian Nolden <f.nol...@xilloc.com>
> wrote:
>
>> Hi,
>>
>> I try to install the hosted_engine on 3 Servers which host also the
>> gluster data storage.
>> When I install now the hosted_engine, which storage path should I use?
>>
>> Setup 1:
>> server1: server1.san:/hosted_engine
>> server2: server1.san:/hosted_engine
>> server3: server1.san:/hosted_engine
>>
>
>
> Use this along with an answers.conf file with the following content:
> [environment:default]
> OVEHOSTED_STORAGE/mntOptions=str:backup-volfile-servers=server2.san:server3.san
>
> and deploy the hosted engine using "hosted-engine --deploy --config-append=answers.conf"
>
> This will ensure that the hosted-engine domain can be mounted on the other servers
> even when server1 is not reachable.
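As a rough sanity check of that fallback behaviour (a sketch only; the hostnames and volume are the ones from this thread, the mount point is arbitrary), the same mount option can be exercised manually:

  # fetch the volfile from server1, falling back to server2/server3 if it is down
  mount -t glusterfs -o backup-volfile-servers=server2.san:server3.san \
        server1.san:/hosted_engine /mnt/he-test
  # with glusterd stopped on server1, the mount should still come up via the backup servers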
>
>
>
>>
>> Setup 2:
>> server1: localhost:/hosted_engine
>> server2: localhost:/hosted_engine
>> server3: localhost:/hosted_engine
>>
>> Setup 3:
>> /etc/hosts: 127.0.0.1 gluster
>> server1: gluster:/hosted_engine
>> server2: gluster:/hosted_engine
>> server3: gluster:/hosted_engine
>>
>> Other Setup? What is the best practice there?
>>
>>
>>
>>
>>
>


Re: [ovirt-users] hosted-engine cant access engine appliance

2016-11-04 Thread Florian Nolden
Hello Steffen,

can your nodes resolve the FQDN "ovirtengine.com" to the hosted engine IP
(nslookup ovirtengine.com)?
If that works, have you tried disabling the firewall temporarily?
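For reference, those two checks boil down to something like this on each node (a sketch; the FQDN is the one from this thread):

  nslookup ovirtengine.com      # should return the engine VM's IP
  systemctl stop firewalld      # temporarily, for testing only
  systemctl start firewalld     # re-enable once the test is done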

2016-11-04 14:11 GMT+01:00 Steffen Nolden <
steffen.nol...@alumni.fh-aachen.de>:

> Hello,
>
> I tried to deploy the hosted engine in a testing environment. First I
> tried to deploy with the option
>
> "Automatically execute engine-setup on the engine appliance on first
> boot (Yes, No)[Yes]? Yes"
>
> but it got stuck.
>
> [ INFO  ] Running engine-setup on the appliance
> [ ERROR ] Engine setup got stuck on the appliance
> [ ERROR ] Failed to execute stage 'Closing up': Engine setup is stalled on
> the appliance since 600 seconds ago. Please check its log on the appliance.
> [ INFO  ] Stage: Clean up
> [ INFO  ] Generating answer file '/var/lib/ovirt-hosted-engine-
> setup/answers/answers-20161104112913.conf'
> [ INFO  ] Stage: Pre-termination
> [ INFO  ] Stage: Termination
> [ ERROR ] Hosted Engine deployment failed: this system is not reliable,
> please check the issue,fix and redeploy
>   Log file is located at /var/log/ovirt-hosted-engine-s
> etup/ovirt-hosted-engine-setup-20161104110104-kyhq1e.log
>
> On the next try I answered 'No' and tried to execute it myself, but I can't access
> the engine appliance.
>
> [nolden@oVirtNode01 ~]$ sudo hosted-engine --console
> /usr/share/vdsm/vdsClient.py:33: DeprecationWarning: vdscli uses
> xmlrpc. since ovirt 3.6 xmlrpc is deprecated, please use vdsm.jsonrpcvdscli
>from vdsm import utils, vdscli, constants
> The engine VM is running on this host
> Connected to domain: HostedEngine
> Escape character is ^]
> error: internal error: character device console0 is not using a PTY
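One possible workaround sketch, not something tried in this thread: skip the broken serial console and use VNC instead. The port here is an assumption (the engine VM usually takes the first free VNC display on the host):

  hosted-engine --add-console-password   # set a temporary console password for the engine VM
  remote-viewer vnc://192.168.122.101:5900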
>
> Additionally, I can't ping or access the engine appliance via SSH.
>
> Did I forget to install a package or configure something?
>
> Below my infrastructure and configs. Attached the log file.
>
>
> Thanks for help.
>
>
>
> My infrastructure is a nested-VM system:
> All systems have nested virtualization activated.
>
> HW with LinuxMint x86_64;
> - VM with CentOS-7-x86_64; 24 GB RAM; 7 cores
> - Nested VM with CentOS-7-x86_64; 12288 MB RAM; 4 cores; hostname
> oVirtNode01.com (192.168.122.101): here I deploy the hosted engine, engine hostname
> oVirtEngine.com (192.168.122.201)
> - Nested VM with CentOS-7-x86_64; 4096 MB RAM; 1 core; hostname
> oVirtNode02.com (192.168.122.102)
> - Nested VM with CentOS-7-x86_64; 4096 MB RAM; 1 core; hostname
> oVirtNode03.com (192.168.122.103)
>
> All three nested VMs are updated and have ovirt-release40.rpm installed.
> Additionally installed:
> screen (4.1.0), ovirt-hosted-engine-setup (2.0.2.2), vdsm-gluster (4.18.13),
> bridge-utils (1.5), vdsm (4.18.13), vdsm-cli (4.18.13),
> glusterfs-server (3.7.16), samba.
>
> The first nested VM additionally has ovirt-engine-appliance
> 4.0-20160928.1.el7.centos installed.
>
> The three VMs form a GlusterFS volume "engine", replica 3, with the
> following options:
>
> Volume Name: engine
> Type: Replicate
> Volume ID: e92849b7-af3b-4ccd-bd0d-69a5ab3b6214
> Status: Started
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: oVirtNode01.com:/gluster/engine/brick1
> Brick2: oVirtNode02.com:/gluster/engine/brick1
> Brick3: oVirtNode03.com:/gluster/engine/brick1
> Options Reconfigured:
> auth.allow: 192.168.122.*
> storage.owner-gid: 36
> storage.owner-uid: 36
> performance.readdir-ahead: on
>
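For completeness, a volume like the one above could have been created roughly as follows (a sketch reconstructed from the volume info, not the commands actually used):

  gluster volume create engine replica 3 \
      oVirtNode01.com:/gluster/engine/brick1 \
      oVirtNode02.com:/gluster/engine/brick1 \
      oVirtNode03.com:/gluster/engine/brick1
  gluster volume set engine auth.allow '192.168.122.*'
  gluster volume set engine storage.owner-uid 36
  gluster volume set engine storage.owner-gid 36
  gluster volume set engine performance.readdir-ahead on
  gluster volume start engine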
> ## Session 1 (hosted-engine deploy without automatically execute
> engine-setup)
> [nolden@oVirtNode01 ~]$ sudo hosted-engine --deploy
> /usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py:15:
> DeprecationWarning: vdscli uses xmlrpc. since ovirt 3.6 xmlrpc is
> deprecated, please use vdsm.jsonrpcvdscli
>   import vdsm.vdscli
> [ INFO  ] Stage: Initializing
> [ INFO  ] Generating a temporary VNC password.
> [ INFO  ] Stage: Environment setup
>   During customization use CTRL-D to abort.
>   Continuing will configure this host for serving as hypervisor
> and create a VM where you have to install the engine afterwards.
>   Are you sure you want to continue? (Yes, No)[Yes]:
> [ INFO  ] Hardware supports virtualization
>   Configuration files: []
>   Log file: /var/log/ovirt-hosted-engine-s
> etup/ovirt-hosted-engine-setup-20161104123834-chwikf.log
>   Version: otopi-1.5.2 (otopi-1.5.2-1.el7.centos)
> [ INFO  ] Stage: Environment packages setup
> [ INFO  ] Stage: Programs detection
> [ INFO  ] Stage: Environment setup
> [ INFO  ] Generating libvirt-spice certificates
> [ INFO  ] Stage: Environment customization
>
>   --== STORAGE CONFIGURATION ==--
>
>   Please specify the storage you would like to use (glusterfs,
> iscsi, fc, nfs3, nfs4)[nfs3]: glusterfs
> [ INFO  ] Please note that Replica 3 support is required for the shared
> storage.
>   Please specify the full shared storage 

[ovirt-users] Backups take sooo long, how can I improve the situation, where is the bottleneck?

2017-01-05 Thread Florian Nolden
Hello everyone,

I would like to update my oVirt installation, but before that I would like
to create proper VM backups. Currently I am trying to automate the backups using
the implementation from the following project:
https://github.com/wefixit-AT/oVirtBackup

The main idea is to use the oVirt API to trigger a snapshot, create a new
VM from the snapshot, delete the snapshot, export the VM to the export
domain, and then delete the cloned VM.
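For illustration, the first step of that flow (triggering a snapshot) looks roughly like this against the REST API; the engine URL, credentials and VM id are placeholders, not values from this setup:

  curl -k -u 'admin@internal:PASSWORD' \
       -H 'Content-Type: application/xml' \
       -d '<snapshot><description>backup</description></snapshot>' \
       'https://engine.example.com/ovirt-engine/api/vms/VM_ID/snapshots'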

Even with only a 20 GB VM disk, this process takes 55 minutes. I now have 21
VMs with 30 GB of disk space on average, which would mean the backup job runs
for about 22 hours.

Is there something wrong with my setup or config?
Is there a better way to automate backups?
Is anyone else doing VM exports as backups, or are you just using snapshots?


Specs:
HA cluster, 3 nodes, each with:
2x 6 cores @ 2.96 GHz, 96 GB RAM, 120 GB HDD for CentOS 7, 1 TB SSD VM storage
(GlusterFS), 4x 1 Gbit LAN bond, 4x 1 Gbit SAN bond (Gluster network,
connection to backup NAS / export domain)
oVirt 4.0.3

The script output:
Jan 05 14:12:16: Start backup for: wiki (20 GB)
Jan 05 14:12:17: Snapshot creation started ...
Jan 05 14:12:30: Snapshot created
Jan 05 14:12:40: Clone into VM started ...
Jan 05 14:36:37: Cloning finished
Jan 05 14:36:38: Snapshot deletion started ...
Jan 05 14:59:27: Snapshots deleted
Jan 05 14:59:28: Export started ...
Jan 05 15:07:09: Exporting finished
Jan 05 15:07:09: Delete cloned VM started ...
Jan 05 15:07:15: Cloned VM deleted
Jan 05 15:07:15: Duration: 55:0 minutes
Jan 05 15:07:15: VM exported as wiki_BACKUP_010516
Jan 05 15:07:15: Backup done for: wiki
Jan 05 15:07:15: All backups done


Thanks,
Florian


Re: [ovirt-users] Huge Gluster Issues - oVirt 4.1.7

2017-11-24 Thread Florian Nolden
Hello Kasturi,

I only have the issues when a backup job is running.
I have not tested it with FUSE or gfapi directly.

What would be a good test for FUSE? Just dd to the mounted Gluster storage?

How can I test it with gfapi?
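As a sketch of what such a FUSE test could look like (the mount path is the one that shows up in the vdsm.log below; block size and count are arbitrary):

  # write 1 GiB through the FUSE mount, bypassing the page cache
  dd if=/dev/zero of=/rhev/data-center/mnt/glusterSD/x-c01-n03:_fastIO/ddtest.img \
     bs=1M count=1024 oflag=direct conv=fsync
  rm /rhev/data-center/mnt/glusterSD/x-c01-n03:_fastIO/ddtest.img
  # gfapi could presumably be exercised with qemu-img and a gluster:// URL,
  # if qemu on the host was built with gfapi support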

Thanks for your support

Kind regards,

Florian Nolden

Head of IT at Xilloc Medical B.V.


2017-11-24 9:40 GMT+01:00 Kasturi Narra <kna...@redhat.com>:

> Hi Florian,
>
>Are you seeing these issues with gfapi or fuse access as well ?
>
> Thanks
> kasturi
>
> On Fri, Nov 24, 2017 at 3:06 AM, Florian Nolden <f.nol...@xilloc.com>
> wrote:
>
>> I have the same issue when I run backup tasks during the night.
>>
>> I have a Gluster setup with a 1 TB SSD on each of the three nodes. Maybe
>> it's related to this bug: https://bugzilla.redhat.com/show_bug.cgi?id=1430847
>>
>> sanlock.log:
>> 2017-11-23 00:46:42 3410597 [1114]: s15 check_our_lease warning 60
>> last_success 3410537
>> 2017-11-23 00:46:43 3410598 [1114]: s15 check_our_lease warning 61
>> last_success 3410537
>> 2017-11-23 00:46:44 3410599 [1114]: s15 check_our_lease warning 62
>> last_success 3410537
>> 2017-11-23 00:46:45 3410600 [1114]: s15 check_our_lease warning 63
>> last_success 3410537
>> 2017-11-23 00:46:46 3410601 [1114]: s15 check_our_lease warning 64
>> last_success 3410537
>> 2017-11-23 00:46:47 3410602 [1114]: s15 check_our_lease warning 65
>> last_success 3410537
>> 2017-11-23 00:46:48 3410603 [1114]: s15 check_our_lease warning 66
>> last_success 3410537
>> 2017-11-23 00:46:49 3410603 [28384]: s15 delta_renew long write time 46
>> sec
>> 2017-11-23 00:46:49 3410603 [28384]: s15 renewed 3410557 delta_length 46
>> too long
>> 2017-11-23 02:48:04 3417878 [28384]: s15 delta_renew long write time 10
>> sec
>> 2017-11-23 02:57:23 3418438 [28384]: s15 delta_renew long write time 34
>> sec
>> 2017-11-23 02:57:23 3418438 [28384]: s15 renewed 3418404 delta_length 34
>> too long
>>
>>
>> vdsm.log | grep "WARN"
>> 017-11-23 00:20:05,544+0100 WARN  (jsonrpc/0) [virt.vm]
>> (vmId='0a83954f-56d1-42d0-88b9-825435055fd0') monitor became
>> unresponsive (command timeout, age=63.719997) (vm:5109)
>> 2017-11-23 00:20:06,840+0100 WARN  (check/loop) [storage.check] Checker
>> u'/rhev/data-center/mnt/glusterSD/x-c01-n03:_fastIO/f0e21aae
>> -1237-4dd3-88ec-81254d29c372/dom_md/metadata' is blocked for 10.00
>> seconds (check:279)
>> 2017-11-23 00:20:13,853+0100 WARN  (periodic/170)
>> [virt.periodic.VmDispatcher] could not run <class 'vdsm.virt.periodic.UpdateVolumes'> on
>> [u'e1f26ea9-9294-4d9c-8f70-d59f96dec5f7']
>> (periodic:308)
>> 2017-11-23 00:20:15,031+0100 WARN  (jsonrpc/2) [virt.vm]
>> (vmId='0a83954f-56d1-42d0-88b9-825435055fd0') monitor became
>> unresponsive (command timeout, age=73.21) (vm:5109)
>> 2017-11-23 00:20:20,586+0100 WARN  (jsonrpc/4) [virt.vm]
>> (vmId='0a83954f-56d1-42d0-88b9-825435055fd0') monitor became
>> unresponsive (command timeout, age=78.759998) (vm:5109)
>> 2017-11-23 00:21:06,849+0100 WARN  (check/loop) [storage.check] Checker
>> u'/rhev/data-center/mnt/glusterSD/x-c01-n03:_fastIO/f0e21aae
>> -1237-4dd3-88ec-81254d29c372/dom_md/metadata' is blocked for 10.01
>> seconds (check:279)
>> 2017-11-23 00:21:13,847+0100 WARN  (periodic/167)
>> [virt.periodic.VmDispatcher] could not run <class 'vdsm.virt.periodic.UpdateVolumes'> on
>> [u'd8f22423-9fe3-4c06-97dc-5c9e9f5b33c8']
>> (periodic:308)
>> 2017-11-23 00:22:13,854+0100 WARN  (periodic/172)
>> [virt.periodic.VmDispatcher] could not run <class 'vdsm.virt.periodic.UpdateVolumes'> on
>> [u'd8f22423-9fe3-4c06-97dc-5c9e9f5b33c8']
>> (periodic:308)
>> 2017-11-23 00:22:16,846+0100 WARN  (check/loop) [storage.check] Checker
>> u'/rhev/data-center/mnt/glusterSD/x-c01-n03:_fastIO/f0e21aae
>> -1237-4dd3-88ec-81254d29c372/dom_md/metadata' is blocked for 9.99
>> seconds (check:279)
>> 2017-11-23 00:23:06,040+0100 WARN  (jsonrpc/6) [virt.vm]
>> (vmId='0a83954f-56d1-42d0-88b9-825435055fd0') monitor became
>> unresponsive (command timeout, age=64.219997) (vm:5109)
>> 2017-11-23 00:23:06,850+0100 WARN  (check/loop) [storage.check] Checker
>> 

Re: [ovirt-users] Huge Gluster Issues - oVirt 4.1.7

2017-11-23 Thread Florian Nolden
2017-11-23 00:46:55,488+0100 WARN  (jsonrpc/0) [virt.vm]
(vmId='0a83954f-56d1-42d0-88b9-825435055fd0') monitor became unresponsive
(command timeout, age=83.569994) (vm:5109)
2017-11-23 00:46:55,489+0100 WARN  (jsonrpc/0) [virt.vm]
(vmId='245e104f-2bd5-4f77-81de-d75a593d77c5') monitor became unresponsive
(command timeout, age=83.569994) (vm:5109)
2017-11-23 00:46:55,491+0100 WARN  (jsonrpc/0) [virt.vm]
(vmId='5ef506de-44b9-4ced-9b7f-b90ee098f4f7') monitor became unresponsive
(command timeout, age=83.569994) (vm:5109)
2017-11-23 00:47:01,742+0100 WARN  (jsonrpc/1) [virt.vm]
(vmId='e1f26ea9-9294-4d9c-8f70-d59f96dec5f7') monitor became unresponsive
(command timeout, age=89.819994) (vm:5109)
2017-11-23 00:47:01,743+0100 WARN  (jsonrpc/1) [virt.vm]
(vmId='0a83954f-56d1-42d0-88b9-825435055fd0') monitor became unresponsive
(command timeout, age=89.819994) (vm:5109)
2017-11-23 00:47:01,744+0100 WARN  (jsonrpc/1) [virt.vm]
(vmId='245e104f-2bd5-4f77-81de-d75a593d77c5') monitor became unresponsive
(command timeout, age=89.819994) (vm:5109)
2017-11-23 00:47:01,746+0100 WARN  (jsonrpc/1) [virt.vm]
(vmId='5ef506de-44b9-4ced-9b7f-b90ee098f4f7') monitor became unresponsive
(command timeout, age=89.819994) (vm:5109)
2017-11-23 00:47:10,531+0100 WARN  (jsonrpc/6) [virt.vm]
(vmId='0a83954f-56d1-42d0-88b9-825435055fd0') monitor became unresponsive
(command timeout, age=98.609994) (vm:5109)
2017-11-23 00:47:10,532+0100 WARN  (jsonrpc/6) [virt.vm]
(vmId='245e104f-2bd5-4f77-81de-d75a593d77c5') monitor became unresponsive
(command timeout, age=98.609994) (vm:5109)
2017-11-23 00:47:10,534+0100 WARN  (jsonrpc/6) [virt.vm]
(vmId='5ef506de-44b9-4ced-9b7f-b90ee098f4f7') monitor became unresponsive
(command timeout, age=98.609994) (vm:5109)
2017-11-23 00:47:16,950+0100 WARN  (jsonrpc/7) [virt.vm]
(vmId='0a83954f-56d1-42d0-88b9-825435055fd0') monitor became unresponsive
(command timeout, age=105.02999) (vm:5109)
2017-11-23 00:47:16,951+0100 WARN  (jsonrpc/7) [virt.vm]
(vmId='245e104f-2bd5-4f77-81de-d75a593d77c5') monitor became unresponsive
(command timeout, age=105.02999) (vm:5109)
2017-11-23 00:47:16,953+0100 WARN  (jsonrpc/7) [virt.vm]
(vmId='5ef506de-44b9-4ced-9b7f-b90ee098f4f7') monitor became unresponsive
(command timeout, age=105.02999) (vm:5109)
2017-11-23 00:47:25,578+0100 WARN  (jsonrpc/4) [virt.vm]
(vmId='245e104f-2bd5-4f77-81de-d75a593d77c5') monitor became unresponsive
(command timeout, age=113.65999) (vm:5109)
2017-11-23 00:47:25,581+0100 WARN  (jsonrpc/4) [virt.vm]
(vmId='5ef506de-44b9-4ced-9b7f-b90ee098f4f7') monitor became unresponsive
(command timeout, age=113.65999) (vm:5109)

Kind regards,


Florian Nolden

Head of IT at Xilloc Medical B.V.


2017-11-23 11:25 GMT+01:00 Sven Achtelik <sven.achte...@eps.aero>:

> Hi All,
>
>
>
> I’m experiencing huge issues when working with big VMs on Gluster volumes.
> Taking a snapshot or removing a big disk leads to the SPM
> node becoming non-responsive. Fencing then kicks in and takes the
> node down with a hard reset/reboot.
>
>
>
> My setup has three nodes with 10 Gbit/s NICs for the Gluster network. The
> bricks are on RAID-6 with a 1 GB cache on the RAID controller, and the
> volumes are set up as follows:
>
>
>
> Volume Name: data
>
> Type: Replicate
>
> Volume ID: c734d678-91e3-449c-8a24-d26b73bef965
>
> Status: Started
>
> Snapshot Count: 0
>
> Number of Bricks: 1 x 3 = 3
>
> Transport-type: tcp
>
> Bricks:
>
> Brick1: ovirt-node01-gfs.storage.lan:/gluster/brick2/data
>
> Brick2: ovirt-node02-gfs.storage.lan:/gluster/brick2/data
>
> Brick3: ovirt-node03-gfs.storage.lan:/gluster/brick2/data
>
> Options Reconfigured:
>
> features.barrier: disable
>
> cluster.granular-entry-heal: enable
>
> performance.readdir-ahead: on
>
> performance.quick-read: off
>
> performance.read-ahead: off
>
> performance.io-cache: off
>
> performance.stat-prefetch: on
>
> cluster.eager-lock: enable
>
> network.remote-dio: off
>
> cluster.quorum-type: auto
>
> cluster.server-quorum-type: server
>
> storage.owner-uid: 36
>
> storage.owner-gid: 36
>
> features.shard: on
>
> features.shard-block-size: 512MB
>
> performance.low-prio-threads: 32
>
> cluster.data-self-heal-algorithm: full
>
> cluster.locking-scheme: granular
>
> cluster.shd-wait-qlength: 1

[ovirt-users] Re: hosted-engine --deploy fails after "Wait for the host to be up" task

2020-02-14 Thread Florian Nolden
On Fri, 14 Feb 2020 at 12:21, Fredy Sanchez <fredy.sanc...@modmed.com> wrote:

> Hi Florian,
>

> In my case, Didi's suggestions got me thinking, and I ultimately traced
> this to the ssh banners; they must be disabled. You can do this in
> sshd_config. I do think that logging could be better for this issue, and
> that the host up check should incorporate things other than ssh, even if
> just a ping. Good luck.
>
Hi Fredy,

thanks for the reply.

So I just have to set "Banner none" in /etc/ssh/sshd_config on all
3 nodes and rerun the deployment in Cockpit?
Or did you also reinstall the nodes and the Gluster storage?

> --
> Fredy
>
> On Fri, Feb 14, 2020, 4:55 AM Florian Nolden  wrote:
>
>> I'm also stuck with that issue.
>>
>> I have
>> 3x  HP ProLiant DL360 G7
>>
>> 1x 1gbit => as control network
>> 3x 1gbit => bond0 as Lan
>> 2x 10gbit => bond1 as gluster network
>>
>> I installed on all 3 servers Ovirt Node 4.3.8
>> configured the networks using cockpit.
>> followed this guide for the gluster setup with cockpit:
>> https://www.ovirt.org/documentation/gluster-hyperconverged/chap-Deploying_Hyperconverged.html
>>
>> then installed the hosted engine with Cockpit:
>>
>> [ INFO ] TASK [ovirt.hosted_engine_setup : Wait for the host to be up]
>> [ ERROR ] fatal: [localhost]: FAILED! => {"ansible_facts": {"ovirt_hosts": 
>> [{"address": "x-c01-n01.lan.xilloc.com", "affinity_labels": [], 
>> "auto_numa_status": "unknown", "certificate": {"organization": 
>> "lan.xilloc.com", "subject": 
>> "O=lan.xilloc.com,CN=x-c01-n01.lan.xilloc.com"}, "cluster": {"href": 
>> "/ovirt-engine/api/clusters/3dff6890-4e7b-11ea-90cb-00163e6a7afe", "id": 
>> "3dff6890-4e7b-11ea-90cb-00163e6a7afe"}, "comment": "", "cpu": {"speed": 
>> 0.0, "topology": {}}, "device_passthrough": {"enabled": false}, "devices": 
>> [], "external_network_provider_configurations": [], "external_status": "ok", 
>> "hardware_information": {"supported_rng_sources": []}, "hooks": [], "href": 
>> "/ovirt-engine/api/hosts/ded7aa60-4a5e-456e-b899-dd7fc25cc7b3", "id": 
>> "ded7aa60-4a5e-456e-b899-dd7fc25cc7b3", "katello_errata": [], 
>> "kdump_status": "unknown", "ksm": {"enabled": false}, 
>> "max_scheduling_memory": 0, "memory": 0, "name": "x-c01-n01.lan.xilloc.com", 
>> "network_attachments": [], "nics": [], "numa_nodes": [], "numa_supported": 
>> false, "os": {"custom_kernel_cmdline": ""}, "permissions": [], "port": 
>> 54321, "power_management": {"automatic_pm_enabled": true, "enabled": false, 
>> "kdump_detection": true, "pm_proxies": []}, "protocol": "stomp", "se_linux": 
>> {}, "spm": {"priority": 5, "status": "none"}, "ssh": {"fingerprint": 
>> "SHA256:lWc/BuE5WukHd95WwfmFW2ee8VPJ2VugvJeI0puMlh4", "port": 22}, 
>> "statistics": [], "status": "non_responsive", 
>> "storage_connection_extensions": [], "summary": {"total": 0}, "tags": [], 
>> "transparent_huge_pages": {"enabled": false}, "type": "ovirt_node", 
>> "unmanaged_networks": [], "update_available": false, "vgpu_placement": 
>> "consolidated"}]}, "attempts": 120, "changed": false, "deprecations": 
>> [{"msg": "The 'ovirt_host_facts' module has been renamed to 
>> 'ovirt_host_info', and the renamed one no longer returns ansible_facts", 
>> "version": "2.13"}]}
>>
>>
>>
>> What is the best approach now to install an oVirt hosted engine?
>>
>>
>> Kind regards,
>>
>> *Florian Nolden*
>>
>> *Head of IT at Xilloc Medical B.V.*
>>
>> www.xilloc.com "Get aHead with patient specific implants"
>>
>> Xilloc Medical B.V., Urmonderbaan 22, Gate 2, Building 110, 61

[ovirt-users] Re: hosted-engine --deploy fails after "Wait for the host to be up" task

2020-02-14 Thread Florian Nolden
Thanks, Fredy, for your great help. Setting the Banner and PrintMotd options
on all 3 nodes allowed the installation to succeed.
On Fri, 14 Feb 2020 at 16:23, Fredy Sanchez <fredy.sanc...@modmed.com> wrote:

> Banner none
> PrintMotd no
>
> # systemctl restart sshd
>

That should be fixed in the ovirt-node images.
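A quick way to confirm the effective settings after the change (a sketch; the option names are the ones Fredy listed):

  sshd -T | grep -Ei '^(banner|printmotd)'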


> If gluster installed successfully, you don't have to reinstall it.
> Just run the hyperconverged install again from cockpit, and it will detect
> the existing gluster install, and ask you if you want to re-use it;
> re-using worked for me. Only thing I'd point out here is that gluster
> didn't enable in my servers automagically; I had to enable it and start it
> by hand before cockpit picked it up.
> # systemctl enable glusterd --now
> # systemctl status glusterd
>
Gluster was running fine for me, so that was not needed.

> Also,
> # tail -f /var/log/secure
> while the install is going will help you see if there's a problem with
> ssh, other than the banners.
>
> --
> Fredy
>
> On Fri, Feb 14, 2020 at 9:32 AM Florian Nolden 
> wrote:
>
>>
>> On Fri, 14 Feb 2020 at 12:21, Fredy Sanchez <fredy.sanc...@modmed.com> wrote:
>>
>>> Hi Florian,
>>>
>>
>>> In my case, Didi's suggestions got me thinking, and I ultimately traced
>>> this to the ssh banners; they must be disabled. You can do this in
>>> sshd_config. I do think that logging could be better for this issue, and
>>> that the host up check should incorporate things other than ssh, even if
>>> just a ping. Good luck.
>>>
>>> Hi Fredy,
>>
>> thanks for the reply.
>>
>> So I just have to set "Banner none" in /etc/ssh/sshd_config on
>> all 3 nodes and rerun the deployment in Cockpit?
>> Or did you also reinstall the nodes and the Gluster storage?
>>
>>> --
>>> Fredy
>>>
>>> On Fri, Feb 14, 2020, 4:55 AM Florian Nolden 
>>> wrote:
>>>
>>>> I'm also stuck with that issue.
>>>>
>>>> I have
>>>> 3x  HP ProLiant DL360 G7
>>>>
>>>> 1x 1gbit => as control network
>>>> 3x 1gbit => bond0 as Lan
>>>> 2x 10gbit => bond1 as gluster network
>>>>
>>>> I installed on all 3 servers Ovirt Node 4.3.8
>>>> configured the networks using cockpit.
>>>> followed this guide for the gluster setup with cockpit:
>>>> https://www.ovirt.org/documentation/gluster-hyperconverged/chap-Deploying_Hyperconverged.html
>>>>
>>>> then installed the hosted engine with Cockpit:
>>>>
>>>> [ INFO ] TASK [ovirt.hosted_engine_setup : Wait for the host to be up]
>>>> [ ERROR ] fatal: [localhost]: FAILED! => {"ansible_facts": {"ovirt_hosts": 
>>>> [{"address": "x-c01-n01.lan.xilloc.com", "affinity_labels": [], 
>>>> "auto_numa_status": "unknown", "certificate": {"organization": 
>>>> "lan.xilloc.com", "subject": 
>>>> "O=lan.xilloc.com,CN=x-c01-n01.lan.xilloc.com"}, "cluster": {"href": 
>>>> "/ovirt-engine/api/clusters/3dff6890-4e7b-11ea-90cb-00163e6a7afe", "id": 
>>>> "3dff6890-4e7b-11ea-90cb-00163e6a7afe"}, "comment": "", "cpu": {"speed": 
>>>> 0.0, "topology": {}}, "device_passthrough": {"enabled": false}, "devices": 
>>>> [], "external_network_provider_configurations": [], "external_status": 
>>>> "ok", "hardware_information": {"supported_rng_sources": []}, "hooks": [], 
>>>> "href": "/ovirt-engine/api/hosts/ded7aa60-4a5e-456e-b899-dd7fc25cc7b3", 
>>>> "id": "ded7aa60-4a5e-456e-b899-dd7fc25cc7b3", "katello_errata": [], 
>>>> "kdump_status": "unknown", "ksm": {"enabled": false}, 
>>>> "max_scheduling_memory": 0, "memory": 0, "name": 
>>>> "x-c01-n01.lan.xilloc.com", "network_attachments": [], "nics": [], 
>>>> "numa_nodes": [], "numa_supported": false, "os": {"custom_kernel_cmdline": 
>>>> ""}, "permissions": [], "port": 54321, "power_management": 
>>>> {"automatic_pm_enabled": true,

[ovirt-users] Re: hosted-engine --deploy fails after "Wait for the host to be up" task

2020-02-14 Thread Florian Nolden
I'm also stuck with that issue.

I have
3x  HP ProLiant DL360 G7

1x 1gbit => as control network
3x 1gbit => bond0 as Lan
2x 10gbit => bond1 as gluster network

I installed oVirt Node 4.3.8 on all 3 servers,
configured the networks using Cockpit,
and followed this guide for the Gluster setup with Cockpit:
https://www.ovirt.org/documentation/gluster-hyperconverged/chap-Deploying_Hyperconverged.html

Then I installed the hosted engine with Cockpit:

[ INFO ] TASK [ovirt.hosted_engine_setup : Wait for the host to be up]
[ ERROR ] fatal: [localhost]: FAILED! => {"ansible_facts":
{"ovirt_hosts": [{"address": "x-c01-n01.lan.xilloc.com",
"affinity_labels": [], "auto_numa_status": "unknown", "certificate":
{"organization": "lan.xilloc.com", "subject":
"O=lan.xilloc.com,CN=x-c01-n01.lan.xilloc.com"}, "cluster": {"href":
"/ovirt-engine/api/clusters/3dff6890-4e7b-11ea-90cb-00163e6a7afe",
"id": "3dff6890-4e7b-11ea-90cb-00163e6a7afe"}, "comment": "", "cpu":
{"speed": 0.0, "topology": {}}, "device_passthrough": {"enabled":
false}, "devices": [], "external_network_provider_configurations": [],
"external_status": "ok", "hardware_information":
{"supported_rng_sources": []}, "hooks": [], "href":
"/ovirt-engine/api/hosts/ded7aa60-4a5e-456e-b899-dd7fc25cc7b3", "id":
"ded7aa60-4a5e-456e-b899-dd7fc25cc7b3", "katello_errata": [],
"kdump_status": "unknown", "ksm": {"enabled": false},
"max_scheduling_memory": 0, "memory": 0, "name":
"x-c01-n01.lan.xilloc.com", "network_attachments": [], "nics": [],
"numa_nodes": [], "numa_supported": false, "os":
{"custom_kernel_cmdline": ""}, "permissions": [], "port": 54321,
"power_management": {"automatic_pm_enabled": true, "enabled": false,
"kdump_detection": true, "pm_proxies": []}, "protocol": "stomp",
"se_linux": {}, "spm": {"priority": 5, "status": "none"}, "ssh":
{"fingerprint": "SHA256:lWc/BuE5WukHd95WwfmFW2ee8VPJ2VugvJeI0puMlh4",
"port": 22}, "statistics": [], "status": "non_responsive",
"storage_connection_extensions": [], "summary": {"total": 0}, "tags":
[], "transparent_huge_pages": {"enabled": false}, "type":
"ovirt_node", "unmanaged_networks": [], "update_available": false,
"vgpu_placement": "consolidated"}]}, "attempts": 120, "changed":
false, "deprecations": [{"msg": "The 'ovirt_host_facts' module has
been renamed to 'ovirt_host_info', and the renamed one no longer
returns ansible_facts", "version": "2.13"}]}



What is the best approach now to install an oVirt hosted engine?


Kind regards,

*Florian Nolden*

*Head of IT at Xilloc Medical B.V.*

www.xilloc.com "Get aHead with patient specific implants"

Xilloc Medical B.V., Urmonderbaan 22, Gate 2, Building 110, 6167 RD Sittard-Geleen



On Mon, 27 Jan 2020 at 07:56, Yedidyah Bar David <d...@redhat.com> wrote:

> On Sun, Jan 26, 2020 at 8:45 PM Fredy Sanchez 
> wrote:
>
>> Hi all,
>>
>> [root@bric-ovirt-1 ~]# cat /etc/*release*
>> CentOS Linux release 7.7.1908 (Core)
>> [root@bric-ovirt-1 ~]# yum info ovirt-engine-appliance
>> Installed Packages
>> Name: ovirt-engine-appliance
>> Arch: x86_64
>> Version : 4.3
>> Release : 20191121.1.el7
>> Size: 1.0 G
>> Repo: installed
>> From repo   : ovirt-4.3
>>
>> Same situation as https://bugzilla.redhat.com/show_bug.cgi?id=1787267.
>> The error message almost everywhere is some red herring message about ansible.
>>
>