[ovirt-users] Gluster FS & hosted engine fails to set up

2019-12-08 Thread rob . downer
I have set up 3 new servers and, as you can see, Gluster is working well; however,
the hosted engine deployment fails.

Can anyone suggest a reason?

I have wiped and set up all three servers again and set up Gluster first.
This is the Gluster config I have used for the setup.

Please review the configuration. Once you click the 'Finish Deployment' button, 
the management VM will be transferred to the configured storage and the 
configuration of your hosted engine cluster will be finalized. You will be able 
to use your hosted engine once this step finishes.
* Storage
  Storage Type: glusterfs
  Storage Domain Connection: gfs3.gluster.private:/engine
  Mount Options: backup-volfile-servers=gfs2.gluster.private:gfs1.gluster.private
  Disk Size (GiB): 58


[ INFO ] TASK [ovirt.hosted_engine_setup : Add glusterfs storage domain]
[ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is 
"[Unexpected exception]". HTTP response code is 400.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault 
reason is \"Operation Failed\". Fault detail is \"[Unexpected exception]\". 
HTTP response code is 400."}
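
A quick sanity check, independent of the wizard, is to mount the engine volume by hand
from the deployment host with the same connection and options the wizard will use; a
minimal sketch (the mount point is just an example):

  mkdir -p /mnt/enginetest
  mount -t glusterfs -o backup-volfile-servers=gfs2.gluster.private:gfs1.gluster.private \
        gfs3.gluster.private:/engine /mnt/enginetest
  touch /mnt/enginetest/probe && rm /mnt/enginetest/probe
  umount /mnt/enginetest

If that works, the failure is more likely on the engine side; /var/log/vdsm/vdsm.log on
the host and the hosted-engine setup logs usually carry the underlying exception behind
the generic HTTP 400.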


[root@ovirt3 ~]# gluster volume status 
Status of volume: data
Gluster process TCP Port  RDMA Port  Online  Pid
--
Brick gfs3.gluster.private:/gluster_bricks/
data/data   49152 0  Y   3756 
Brick gfs2.gluster.private:/gluster_bricks/
data/data   49153 0  Y   3181 
Brick gfs1.gluster.private:/gluster_bricks/
data/data   49152 0  Y   15548
Self-heal Daemon on localhost   N/A   N/AY   17602
Self-heal Daemon on gfs1.gluster.privateN/A   N/AY   15706
Self-heal Daemon on gfs2.gluster.privateN/A   N/AY   3348 
 
Task Status of Volume data
--
There are no active volume tasks
 
Status of volume: engine
Gluster process TCP Port  RDMA Port  Online  Pid
--
Brick gfs3.gluster.private:/gluster_bricks/
engine/engine   49153 0  Y   3769 
Brick gfs2.gluster.private:/gluster_bricks/
engine/engine   49154 0  Y   3194 
Brick gfs1.gluster.private:/gluster_bricks/
engine/engine   49153 0  Y   15559
Self-heal Daemon on localhost   N/A   N/AY   17602
Self-heal Daemon on gfs1.gluster.privateN/A   N/AY   15706
Self-heal Daemon on gfs2.gluster.privateN/A   N/AY   3348 
 
Task Status of Volume engine
--
There are no active volume tasks
 
Status of volume: vmstore
Gluster process TCP Port  RDMA Port  Online  Pid
--
Brick gfs3.gluster.private:/gluster_bricks/
vmstore/vmstore 49154 0  Y   3786 
Brick gfs2.gluster.private:/gluster_bricks/
vmstore/vmstore 49152 0  Y   2901 
Brick gfs1.gluster.private:/gluster_bricks/
vmstore/vmstore 49154 0  Y   15568
Self-heal Daemon on localhost   N/A   N/AY   17602
Self-heal Daemon on gfs1.gluster.privateN/A   N/AY   15706
Self-heal Daemon on gfs2.gluster.privateN/A   N/AY   3348 
 
Task Status of Volume vmstore
--
There are no active volume tasks
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/T4TP2PHEHFU7QFLE7RXWGCGNJLSFTQ2N/


[ovirt-users] Gluster mount still fails on Engine deployment - any suggestions...

2019-12-08 Thread rob . downer
Hi, the engine deployment fails here...

[ INFO ] TASK [ovirt.hosted_engine_setup : Add glusterfs storage domain]
[ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is 
"[Unexpected exception]". HTTP response code is 400.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault 
reason is \"Operation Failed\". Fault detail is \"[Unexpected exception]\". 
HTTP response code is 400."}

However, Gluster looks good...

I have reinstalled all nodes from scratch.

[root@ovirt3 ~]# gluster volume status 
Status of volume: data
Gluster process TCP Port  RDMA Port  Online  Pid
--
Brick gfs3.gluster.private:/gluster_bricks/
data/data   49152 0  Y   3756 
Brick gfs2.gluster.private:/gluster_bricks/
data/data   49153 0  Y   3181 
Brick gfs1.gluster.private:/gluster_bricks/
data/data   49152 0  Y   15548
Self-heal Daemon on localhost   N/A   N/AY   17602
Self-heal Daemon on gfs1.gluster.privateN/A   N/AY   15706
Self-heal Daemon on gfs2.gluster.privateN/A   N/AY   3348 
 
Task Status of Volume data
--
There are no active volume tasks
 
Status of volume: engine
Gluster process TCP Port  RDMA Port  Online  Pid
--
Brick gfs3.gluster.private:/gluster_bricks/
engine/engine   49153 0  Y   3769 
Brick gfs2.gluster.private:/gluster_bricks/
engine/engine   49154 0  Y   3194 
Brick gfs1.gluster.private:/gluster_bricks/
engine/engine   49153 0  Y   15559
Self-heal Daemon on localhost   N/A   N/AY   17602
Self-heal Daemon on gfs1.gluster.privateN/A   N/AY   15706
Self-heal Daemon on gfs2.gluster.privateN/A   N/AY   3348 
 
Task Status of Volume engine
--
There are no active volume tasks
 
Status of volume: vmstore
Gluster process TCP Port  RDMA Port  Online  Pid
--
Brick gfs3.gluster.private:/gluster_bricks/
vmstore/vmstore 49154 0  Y   3786 
Brick gfs2.gluster.private:/gluster_bricks/
vmstore/vmstore 49152 0  Y   2901 
Brick gfs1.gluster.private:/gluster_bricks/
vmstore/vmstore 49154 0  Y   15568
Self-heal Daemon on localhost   N/A   N/AY   17602
Self-heal Daemon on gfs1.gluster.privateN/A   N/AY   15706
Self-heal Daemon on gfs2.gluster.privateN/A   N/AY   3348 
 
Task Status of Volume vmstore
--
There are no active volume tasks
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ACVH5XGIYYTEXNT4RLU47JAE3FASTKYM/


[ovirt-users] VDSM Errors see below

2019-11-28 Thread rob . downer
I have removed the whole hosted engine deployment by running the following
commands:
ovirt-hosted-engine-cleanup
vdsm-tool configure --force
systemctl restart libvirtd
systemctl restart vdsm

On my hosts I have the following; ovirt1 is the host I ran the hosted engine
setup on.

I have set the Gluster network to use the same subnet and set up forward and
reverse DNS for the Gluster network NICs.

I had this working using a separate subnet but thought to try it on the same 
subnet to avoid any issues that may have occurred while using a separate 
network subnet.

The main host IP address is still showing in Unmanaged Connections on ovirt1.
Is this anything to be concerned about after running the commands above?

I have restarted all machines.

All come back with these VDSM errors...

Node 1


[root@ovirt1 ~]# systemctl status vdsmd
● vdsmd.service - Virtual Desktop Server Manager
   Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor 
preset: enabled)
   Active: inactive (dead) since Thu 2019-11-28 13:29:40 UTC; 37min ago
  Process: 31178 ExecStopPost=/usr/libexec/vdsm/vdsmd_init_common.sh 
--post-stop (code=exited, status=0/SUCCESS)
  Process: 30721 ExecStart=/usr/share/vdsm/daemonAdapter -0 /dev/null -1 
/dev/null -2 /dev/null /usr/share/vdsm/vdsmd (code=exited, status=0/SUCCESS)
 Main PID: 30721 (code=exited, status=0/SUCCESS)

Nov 26 22:42:49 ovirt1.kvm.private vdsm[30721]: WARN MOM not available, KSM 
stats will be missing.
Nov 26 22:42:49 ovirt1.kvm.private vdsm[30721]: WARN Not ready yet, ignoring 
event '|virt|VM_status|871ce9d5-417a-4278-8446-28b681760c1b' 
args={'871ce9d5-417a-4278-8446-28b681760c1b': {'status': 'Poweri...
Nov 28 13:28:43 ovirt1.kvm.private vdsm[30721]: WARN File: 
/var/run/vdsm/trackedInterfaces/eno2 already removed
Nov 28 13:29:26 ovirt1.kvm.private vdsm[30721]: WARN File: 
/var/lib/libvirt/qemu/channels/871ce9d5-417a-4278-8446-28b681760c1b.com.redhat.rhevm.vdsm
 already removed
Nov 28 13:29:26 ovirt1.kvm.private vdsm[30721]: WARN File: 
/var/lib/libvirt/qemu/channel/target/domain-1-HostedEngineLocal/org.qemu.guest_agent.0
 already removed
Nov 28 13:29:39 ovirt1.kvm.private vdsm[30721]: WARN MOM not available.
Nov 28 13:29:39 ovirt1.kvm.private vdsm[30721]: WARN MOM not available, KSM 
stats will be missing.
Nov 28 13:29:39 ovirt1.kvm.private systemd[1]: Stopping Virtual Desktop Server 
Manager...
Nov 28 13:29:39 ovirt1.kvm.private vdsmd_init_common.sh[31178]: vdsm: Running 
run_final_hooks
Nov 28 13:29:40 ovirt1.kvm.private systemd[1]: Stopped Virtual Desktop Server 
Manager.
Hint: Some lines were ellipsized, use -l to show in full.
[root@ovirt1 ~]# nodectl check
Status: WARN
Bootloader ... OK
  Layer boot entries ... OK
  Valid boot entries ... OK
Mount points ... OK
  Separate /var ... OK
  Discard is used ... OK
Basic storage ... OK
  Initialized VG ... OK

  Initialized Thin Pool ... OK
  Initialized LVs ... OK
Thin storage ... OK
  Checking available space in thinpool ... OK
  Checking thinpool auto-extend ... OK
vdsmd ... BAD
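
On node 1 vdsmd has simply stopped (the unit exited cleanly at 13:29:40), which is why
nodectl flags it as BAD. One way to bring it back and watch for errors, sketched on the
assumption that nothing else is deliberately holding it down:

  systemctl restart supervdsmd vdsmd
  systemctl status vdsmd --no-pager
  journalctl -u vdsmd -b --no-pager | tail -n 50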

NODE 2
[root@ovirt2 ~]# systemctl status vdsmd
● vdsmd.service - Virtual Desktop Server Manager
   Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor 
preset: enabled)
   Active: active (running) since Thu 2019-11-28 13:57:30 UTC; 1min 13s ago
  Process: 3626 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh --pre-start 
(code=exited, status=0/SUCCESS)
 Main PID: 5418 (vdsmd)
Tasks: 38
   CGroup: /system.slice/vdsmd.service
   └─5418 /usr/bin/python2 /usr/share/vdsm/vdsmd

Nov 28 13:57:30 ovirt2.kvm.private vdsmd_init_common.sh[3626]: libvirt: Network 
Filter Driver error : Network filter not found: no nwfilter with matching name 
'vdsm-no-mac-spoofing'
Nov 28 13:57:30 ovirt2.kvm.private vdsmd_init_common.sh[3626]: vdsm: Running 
dummybr
Nov 28 13:57:30 ovirt2.kvm.private vdsmd_init_common.sh[3626]: vdsm: Running 
tune_system
Nov 28 13:57:30 ovirt2.kvm.private vdsmd_init_common.sh[3626]: vdsm: Running 
test_space
Nov 28 13:57:30 ovirt2.kvm.private vdsmd_init_common.sh[3626]: vdsm: Running 
test_lo
Nov 28 13:57:30 ovirt2.kvm.private systemd[1]: Started Virtual Desktop Server 
Manager.
Nov 28 13:57:32 ovirt2.kvm.private vdsm[5418]: WARN File: 
/var/run/vdsm/trackedInterfaces/eno1 already removed
Nov 28 13:57:32 ovirt2.kvm.private vdsm[5418]: WARN File: 
/var/run/vdsm/trackedInterfaces/eno2 already removed
Nov 28 13:57:32 ovirt2.kvm.private vdsm[5418]: WARN MOM not available.
Nov 28 13:57:32 ovirt2.kvm.private vdsm[5418]: WARN MOM not available, KSM 
stats will be missing.
[root@ovirt2 ~]# nodectl check
Status: OK
Bootloader ... OK
  Layer boot entries ... OK
  Valid boot entries ... OK
Mount points ... OK
  Separate /var ... OK
  Discard is used ... OK
Basic storage ... OK
  Initialized VG ... OK
  Initialized Thin Pool ... OK
  Initialized LVs ... OK
Thin storage ... OK
  Checking available space in thinpool ... OK
  Checking thinpool auto-extend ... OK
vdsmd ... OK
[root@ovirt2 ~]# 


[ovirt-users] Re: Gluster setup 3 Node - Now only showing single node setup in setup Wizard

2019-11-25 Thread rob . downer
Sorry, not sure what happened there; I opened a new thread for the last issue I
have above...
I am using 4.3.6.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6Q2JCTKJBN3725P625OFGIZMHOSOBP6I/


[ovirt-users] Re: Gluster setup 3 Node - Now only showing single node setup in setup Wizard

2019-11-25 Thread rob . downer
I am using 4.3.6.
I got the deployment working, but it failed at the last step.

I detailed that in this thread:

https://lists.ovirt.org/archives/list/users@ovirt.org/thread/NL4HS6MIKWQAGI36NMSXGESBMB433SPL/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/B4HOFIGR5SPX7KQUSTQ3G2Y6VXETB2RO/


[ovirt-users] Engine deployment last step.... Can anyone help ?

2019-11-25 Thread rob . downer
So...

I have got to the last step: 3 machines with Gluster storage configured. However, at
the last screen, deploying the engine to Gluster, the wizard does not auto-fill the
two fields in Hosted Engine Deployment:

Storage Connection
and
Mount Options

I also had to expand /tmp as it was not big enough to fit the engine before
moving...

What can I do to get the auto-complete sorted out?

I have tried entering ovirt1.kvm.private:/gluster_lv_engine (the volume name)
and
ovirt1.kvm.private:/gluster_bricks/engine

ovirt1 being the actual machine I'm running this on.
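
For reference, the storage connection normally points at the Gluster volume name rather
than an LV or brick path; a sketch, assuming the engine volume is simply called "engine"
and the backend FQDNs are the gfs*.gluster.private ones used elsewhere in these threads:

  Storage Connection: gfs1.gluster.private:/engine
  Mount Options:      backup-volfile-servers=gfs2.gluster.private:gfs3.gluster.private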

Thanks
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NL4HS6MIKWQAGI36NMSXGESBMB433SPL/


[ovirt-users] Re: Gluster set up fails - Nearly there I think...

2019-11-24 Thread rob . downer
I got GlusterFS deployed by first creating, by hand on each host, the VDO volume that
the Gluster wizard had tried and failed to create:

vdo create --name=vdo_sdb --device=/dev/sdb --force

Re-running the Gluster deployment wizard then completed without error.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VV7PSIA6KJP6F4LG5XLTWHOW7W7XLSY5/


[ovirt-users] Re: Gluster set up fails - Nearly there I think...

2019-11-24 Thread rob . downer
OK, so I found that the disk was set to MBR...

There are no filters in lvm.conf.

However, VDO disk creation still fails with the same error...

I have converted the disk format to GPT:

Do you want to proceed? (Y/N): y
OK; writing new GUID partition table (GPT) to /dev/sdb.
The operation has completed successfully.
[root@ovirt2 ~]# 
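
If the filter error persists after the GPT conversion, it may be worth clearing every
stale signature and partition table on the device and re-reading it before the next
wizard run; a sketch, assuming /dev/sdb carries nothing you want to keep:

  sgdisk --zap-all /dev/sdb
  wipefs -a /dev/sdb
  partprobe /dev/sdb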

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OKUWKHUKINIOQPZEQXYI4PVIKFYPKXTF/


[ovirt-users] Re: Gluster set up fails - Nearly there I think...

2019-11-24 Thread rob . downer
So, no filters in there...
Also, the Gluster / Engine wizard only shows the single-node setup.
I have 3 nodes in the Dashboard and have shared passwordless SSH keys between the
host and the other two hosts via the backend Gluster network.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JDPXXZ2EIJT74F7N4SS5BBCPUOFOMS34/


[ovirt-users] Re: Gluster set up fails - Nearly there I think...

2019-11-23 Thread rob . downer
Nope, no filters there.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DBKRKQGPDN4CCVPYCG64CZN7KSWDCU6C/


[ovirt-users] Re: Gluster set up fails - Nearly there I think...

2019-11-23 Thread rob . downer
Full log at the point of failure:
TASK [gluster.infra/roles/backend_setup : Create VDO with specified size] **
task path: 
/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vdo_create.yml:9
failed: [gfs1.gluster.private] (item={u'writepolicy': u'auto', u'name': 
u'vdo_sdb', u'readcachesize': u'20M', u'readcache': u'enabled', u'emulate512': 
u'on', u'logicalsize': u'11000G', u'device': u'/dev/sdb', u'slabsize': u'32G', 
u'blockmapcachesize': u'128M'}) => {"ansible_loop_var": "item", "changed": 
false, "err": "vdo: ERROR - Device /dev/sdb excluded by a filter.\n", "item": 
{"blockmapcachesize": "128M", "device": "/dev/sdb", "emulate512": "on", 
"logicalsize": "11000G", "name": "vdo_sdb", "readcache": "enabled", 
"readcachesize": "20M", "slabsize": "32G", "writepolicy": "auto"}, "msg": 
"Creating VDO vdo_sdb failed.", "rc": 1}
failed: [gfs2.gluster.private] (item={u'writepolicy': u'auto', u'name': 
u'vdo_sdb', u'readcachesize': u'20M', u'readcache': u'enabled', u'emulate512': 
u'on', u'logicalsize': u'11000G', u'device': u'/dev/sdb', u'slabsize': u'32G', 
u'blockmapcachesize': u'128M'}) => {"ansible_loop_var": "item", "changed": 
false, "err": "vdo: ERROR - Device /dev/sdb excluded by a filter.\n", "item": 
{"blockmapcachesize": "128M", "device": "/dev/sdb", "emulate512": "on", 
"logicalsize": "11000G", "name": "vdo_sdb", "readcache": "enabled", 
"readcachesize": "20M", "slabsize": "32G", "writepolicy": "auto"}, "msg": 
"Creating VDO vdo_sdb failed.", "rc": 1}
failed: [gfs3.gluster.private] (item={u'writepolicy': u'auto', u'name': 
u'vdo_sdb', u'readcachesize': u'20M', u'readcache': u'enabled', u'emulate512': 
u'on', u'logicalsize': u'11000G', u'device': u'/dev/sdb', u'slabsize': u'32G', 
u'blockmapcachesize': u'128M'}) => {"ansible_loop_var": "item", "changed": 
false, "err": "vdo: ERROR - Device /dev/sdb excluded by a filter.\n", "item": 
{"blockmapcachesize": "128M", "device": "/dev/sdb", "emulate512": "on", 
"logicalsize": "11000G", "name": "vdo_sdb", "readcache": "enabled", 
"readcachesize": "20M", "slabsize": "32G", "writepolicy": "auto"}, "msg": 
"Creating VDO vdo_sdb failed.", "rc": 1}

NO MORE HOSTS LEFT *

NO MORE HOSTS LEFT *

PLAY RECAP *
gfs1.gluster.private   : ok=12   changed=1unreachable=0failed=1
skipped=9rescued=0ignored=0   
gfs2.gluster.private   : ok=13   changed=2unreachable=0failed=1
skipped=9rescued=0ignored=0   
gfs3.gluster.private   : ok=12   changed=1unreachable=0failed=1
skipped=9rescued=0ignored=0   

Please check /var/log/cockpit/ovirt-dashboard/gluster-deployment.log for more 
informations.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YNJJOXCPQ54U6ZPEUXOKP345RAKF3OWT/


[ovirt-users] Gluster set up fails - Nearly there I think...

2019-11-23 Thread rob . downer
Gluster fails with:

vdo: ERROR - Device /dev/sdb excluded by a filter.

However, I have run:

[root@ovirt1 ~]# vdo create --name=vdo1 --device=/dev/sdb --force
Creating VDO vdo1
Starting VDO vdo1
Starting compression on VDO vdo1
VDO instance 1 volume is ready at /dev/mapper/vdo1
[root@ovirt1 ~]# 

There are no filters in lvm.conf.

I have run

wipefs -a /dev/sdb --force

on all hosts before starting.
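
"Excluded by a filter" usually means something else already claims the device (an old
LVM or VDO signature, a partition table, or multipath) rather than an explicit lvm.conf
filter; a few checks worth running on each host, sketched on the assumption that the
standard LVM and multipath tools are installed:

  lsblk -f /dev/sdb
  blkid /dev/sdb
  lvmconfig devices/filter devices/global_filter
  multipath -ll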
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RPVWGWIP35QWFNCAGABMF4GC24IEZWX5/


[ovirt-users] Gluster setup 3 Node - Now only showing single node setup in setup Wizard

2019-11-23 Thread rob . downer
I have set up 3 nodes with a separate volume for Gluster. I have set up the two
networks and DNS works fine; SSH has been set up for Gluster and you can log in
via ssh to the other two hosts from the host used for setup.

When going to Virtualisation > Setup Gluster and Hosted Engine, only a single node
shows up.

I have restarted all 3 machines.

All node machines show up in the Dashboard, etc.

I have set this up before and it all worked; I erased everything and set it up
again with the separate volume to be used for Gluster storage.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4BVLMM6L6I7LEVKXVKQCQIT4DLSYRADX/


[ovirt-users] Dm-VDO in Gluster

2019-11-23 Thread rob . downer
I have 3 nodes set up with a 2 TB partition on each for the Gluster setup.

The drives are all SSD; should I enable DM-VDO?

Has anyone got a simple pro/con list?

I understand response is slower if enabled and there is some CPU hit; both are
acceptable.

Thanks
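
One data point that helps with the pro/con call once a VDO volume exists is its actual
space saving; a sketch, assuming a volume named vdo_sdb as used later in these threads:

  vdostats --human-readable
  vdo status --name=vdo_sdb | grep -iE 'compression|deduplication'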
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DTQHHB2OYIRTSR2AMENCBDBTEOV6PEY5/


[ovirt-users] Re: Gluster & Hyper Converged setup

2019-11-19 Thread rob . downer
Hi,

I believe I need to create a storage block device, which I was unaware of, as I thought
one would be able to use part of the free space on the disks automatically provisioned
by the node installer. I believe this requires either a reinstall and creation of a new
volume, or reducing the current size of the volume in use and creating a new volume on
the live system. On another note, how do I remove my email address from showing on
posts? It is not great to have it showing.

The file you wanted is below...
[root@ovirt1 ~]# cat 
/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vg_create.yml
---
# We have to set the dataalignment for physical volumes, and physicalextentsize
# for volume groups. For JBODs we use a constant alignment value of 256K
# however, for RAID we calculate it by multiplying the RAID stripe unit size
# with the number of data disks. Hence in case of RAID stripe_unit_size and data
# disks are mandatory parameters.

- name: Check if valid disktype is provided
  fail:
    msg: "Unknown disktype. Allowed disktypes: JBOD, RAID6, RAID10, RAID5."
  when: gluster_infra_disktype not in [ 'JBOD', 'RAID6', 'RAID10', 'RAID5' ]

# Set data alignment for JBODs, by default it is 256K. This set_fact is not
# needed if we can always assume 256K for JBOD, however we provide this extra
# variable to override it.
- name: Set PV data alignment for JBOD
  set_fact:
    pv_dataalign: "{{ gluster_infra_dalign | default('256K') }}"
  when: gluster_infra_disktype == 'JBOD'

# Set data alignment for RAID
# We need KiB: ensure to keep the trailing `K' in the pv_dataalign calculation.
- name: Set PV data alignment for RAID
  set_fact:
    pv_dataalign: >
      {{ gluster_infra_diskcount|int *
         gluster_infra_stripe_unit_size|int }}K
  when: >
    gluster_infra_disktype == 'RAID6' or
    gluster_infra_disktype == 'RAID10' or
    gluster_infra_disktype == 'RAID5'

- name: Set VG physical extent size for RAID
  set_fact:
    vg_pesize: >
      {{ gluster_infra_diskcount|int *
         gluster_infra_stripe_unit_size|int }}K
  when: >
    gluster_infra_disktype == 'RAID6' or
    gluster_infra_disktype == 'RAID10' or
    gluster_infra_disktype == 'RAID5'

# Tasks to create a volume group
# The devices in `pvs' can be a regular device or a VDO device
- name: Create volume groups
  lvg:
    state: present
    vg: "{{ item.vgname }}"
    pvs: "{{ item.pvname }}"
    pv_options: "--dataalignment {{ pv_dataalign }}"
    # pesize is 4m by default for JBODs
    pesize: "{{ vg_pesize | default(4) }}"
  with_items: "{{ gluster_infra_volume_groups }}"
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/T6YPHITYF3TRE2FLKILBOADSFNZ5HMOS/


[ovirt-users] Re: Gluster & Hyper Converged setup

2019-11-18 Thread rob . downer

[root@ovirt1 ~]#  lsblk
NAME                                                      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                                                         8:0    0  1.5T  0 disk
├─sda1                                                      8:1    0    1G  0 part /boot
└─sda2                                                      8:2    0  1.5T  0 part
  ├─onn_ovirt1-swap                                       253:0    0    4G  0 lvm  [SWAP]
  ├─onn_ovirt1-pool00_tmeta                               253:1    0    1G  0 lvm
  │ └─onn_ovirt1-pool00-tpool                             253:3    0  1.4T  0 lvm
  │   ├─onn_ovirt1-ovirt--node--ng--4.3.6--0.20190926.0+1 253:4    0  1.3T  0 lvm  /
  │   ├─onn_ovirt1-pool00                                 253:5    0  1.4T  0 lvm
  │   ├─onn_ovirt1-var_log_audit                          253:6    0    2G  0 lvm  /var/log/audit
  │   ├─onn_ovirt1-var_log                                253:7    0    8G  0 lvm  /var/log
  │   ├─onn_ovirt1-var                                    253:8    0   15G  0 lvm  /var
  │   ├─onn_ovirt1-tmp                                    253:9    0    1G  0 lvm  /tmp
  │   ├─onn_ovirt1-home                                   253:10   0    1G  0 lvm  /home
  │   └─onn_ovirt1-var_crash                              253:11   0   10G  0 lvm  /var/crash
  └─onn_ovirt1-pool00_tdata                               253:2    0  1.4T  0 lvm
    └─onn_ovirt1-pool00-tpool                             253:3    0  1.4T  0 lvm
      ├─onn_ovirt1-ovirt--node--ng--4.3.6--0.20190926.0+1 253:4    0  1.3T  0 lvm  /
      ├─onn_ovirt1-pool00                                 253:5    0  1.4T  0 lvm
      ├─onn_ovirt1-var_log_audit                          253:6    0    2G  0 lvm  /var/log/audit
      ├─onn_ovirt1-var_log                                253:7    0    8G  0 lvm  /var/log
      ├─onn_ovirt1-var                                    253:8    0   15G  0 lvm  /var
      ├─onn_ovirt1-tmp                                    253:9    0    1G  0 lvm  /tmp
      ├─onn_ovirt1-home                                   253:10   0    1G  0 lvm  /home
      └─onn_ovirt1-var_crash                              253:11   0   10G  0 lvm  /var/crash
[root@ovirt1 ~]# 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HGGFGNRMHNLN3GTFR4SJX7AWUAQYN7TD/


[ovirt-users] Re: Gluster & Hyper Converged setup

2019-11-18 Thread rob . downer
Can i create that from the current SSD storage I have or do I need to reinstall 
?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FJLESDSBMFEACIAVSTYDSE2HFZ7CESL5/


[ovirt-users] Re: Gluster & Hyper Converged setup

2019-11-18 Thread rob . downer
Logical Volumes (Create new Logical Volume)
1.35 TiB  Pool for Thin Volumes  pool00
1 GiB     ext4 File System       /dev/onn_ovirt1/home
1.32 TiB  Inactive volume        ovirt-node-ng-4.3.6-0.20190926.0
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5ZP4B7IPBDUYCBSLNUWDA5VGLK67UDZ5/


[ovirt-users] Re: Gluster & Hyper Converged setup

2019-11-18 Thread rob . downer
No,

Is this the issue?
1.32 TiB  Inactive volume  ovirt-node-ng-4.3.6-0.20190926.0
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XDTOJXJ4HL4ZLVM74QHKVLFDGH6D3G5R/


[ovirt-users] Re: Gluster setup

2019-11-18 Thread rob . downer
Fixed that, thanks; it still fails.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QTYQU5OHZSSAKVTS7S73F5BXDRX5MUZL/


[ovirt-users] Gluster & Hyper Converged setup

2019-11-18 Thread rob . downer
Hi,

Gluster will not set up and fails... Can anyone see why?

/etc/hosts is set up for both the backend Gluster network and the front end, and LAN
DNS is set up on the subnet for the front end.


TASK [gluster.infra/roles/backend_setup : Set PV data alignment for JBOD] **
task path: 
/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vg_create.yml:17
skipping: [gfs2.gluster.private] => {"changed": false,
"skip_reason": "Conditional result was False"}
skipping: [gfs1.gluster.private] => {"changed": false,
"skip_reason": "Conditional result was False"}
skipping: [gfs3.gluster.private] => {"changed": false,
"skip_reason": "Conditional result was False"}

TASK [gluster.infra/roles/backend_setup : Set PV data alignment for RAID] **
task path: 
/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vg_create.yml:24
ok: [gfs2.gluster.private] => {"ansible_facts": {"pv_dataalign":
"3072K\n"}, "changed": false}
ok: [gfs1.gluster.private] => {"ansible_facts": {"pv_dataalign":
"3072K\n"}, "changed": false}
ok: [gfs3.gluster.private] => {"ansible_facts": {"pv_dataalign":
"3072K\n"}, "changed": false}

TASK [gluster.infra/roles/backend_setup : Set VG physical extent size for RAID] 
***
task path: 
/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vg_create.yml:34
ok: [gfs2.gluster.private] => {"ansible_facts": {"vg_pesize":
"3072K\n"}, "changed": false}
ok: [gfs1.gluster.private] => {"ansible_facts": {"vg_pesize":
"3072K\n"}, "changed": false}
ok: [gfs3.gluster.private] => {"ansible_facts": {"vg_pesize":
"3072K\n"}, "changed": false}

TASK [gluster.infra/roles/backend_setup : Create volume groups] 
task path: 
/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vg_create.yml:46
failed: [gfs1.gluster.private] (item={u'vgname': u'gluster_vg_sdb',
u'pvname': u'/dev/sdb'}) => {"ansible_loop_var":
"item", "changed": false, "item": {"pvname":
"/dev/sdb", "vgname": "gluster_vg_sdb"}, "msg":
"Device /dev/sdb not found."}
failed: [gfs3.gluster.private] (item={u'vgname': u'gluster_vg_sdb',
u'pvname': u'/dev/sdb'}) => {"ansible_loop_var":
"item", "changed": false, "item": {"pvname":
"/dev/sdb", "vgname": "gluster_vg_sdb"}, "msg":
"Device /dev/sdb not found."}
failed: [gfs2.gluster.private] (item={u'vgname': u'gluster_vg_sdb',
u'pvname': u'/dev/sdb'}) => {"ansible_loop_var":
"item", "changed": false, "item": {"pvname":
"/dev/sdb", "vgname": "gluster_vg_sdb"}, "msg":
"Device /dev/sdb not found."}

NO MORE HOSTS LEFT *

NO MORE HOSTS LEFT *

PLAY RECAP *
gfs1.gluster.private : ok=10 changed=0 unreachable=0 failed=1 skipped=16
rescued=0 ignored=0
gfs2.gluster.private : ok=11 changed=1 unreachable=0 failed=1 skipped=16
rescued=0 ignored=0
gfs3.gluster.private : ok=10 changed=0 unreachable=0 failed=1 skipped=16
rescued=0 ignored=0
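
The "Device /dev/sdb not found." failure is reported per backend host, so before
re-running the wizard it may be worth confirming the device actually exists under that
name everywhere; a sketch, assuming passwordless root SSH to the backend names is
already in place:

  for h in gfs1.gluster.private gfs2.gluster.private gfs3.gluster.private; do
      echo "== $h =="; ssh root@$h lsblk -d -o NAME,SIZE,TYPE,MOUNTPOINT /dev/sdb
  done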
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WN7U626EMZCWOTXOOMHGPD3S2X5IA6SJ/


[ovirt-users] Re: Gluster setup

2019-11-16 Thread rob . downer
OK, so I found that even though DNS was set correctly, having put IP addresses in the
additional hosts fields and added them to /etc/hosts, deployment does not immediately
fail...

However, it does fail...
TASK [gluster.infra/roles/backend_setup : Set PV data alignment for JBOD] **
task path: 
/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vg_create.yml:17
skipping: [gfs2.gluster.private] => {"changed": false, "skip_reason": 
"Conditional result was False"}
skipping: [gfs1.gluster.private] => {"changed": false, "skip_reason": 
"Conditional result was False"}
skipping: [gfs3.gluster.private] => {"changed": false, "skip_reason": 
"Conditional result was False"}

TASK [gluster.infra/roles/backend_setup : Set PV data alignment for RAID] **
task path: 
/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vg_create.yml:24
ok: [gfs2.gluster.private] => {"ansible_facts": {"pv_dataalign": "3072K\n"}, 
"changed": false}
ok: [gfs1.gluster.private] => {"ansible_facts": {"pv_dataalign": "3072K\n"}, 
"changed": false}
ok: [gfs3.gluster.private] => {"ansible_facts": {"pv_dataalign": "3072K\n"}, 
"changed": false}

TASK [gluster.infra/roles/backend_setup : Set VG physical extent size for RAID] 
***
task path: 
/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vg_create.yml:34
ok: [gfs2.gluster.private] => {"ansible_facts": {"vg_pesize": "3072K\n"}, 
"changed": false}
ok: [gfs1.gluster.private] => {"ansible_facts": {"vg_pesize": "3072K\n"}, 
"changed": false}
ok: [gfs3.gluster.private] => {"ansible_facts": {"vg_pesize": "3072K\n"}, 
"changed": false}

TASK [gluster.infra/roles/backend_setup : Create volume groups] 
task path: 
/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vg_create.yml:46
failed: [gfs1.gluster.private] (item={u'vgname': u'gluster_vg_sdb', u'pvname': 
u'/dev/sdb'}) => {"ansible_loop_var": "item", "changed": false, "item": 
{"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"}, "msg": "Device /dev/sdb not 
found."}
failed: [gfs3.gluster.private] (item={u'vgname': u'gluster_vg_sdb', u'pvname': 
u'/dev/sdb'}) => {"ansible_loop_var": "item", "changed": false, "item": 
{"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"}, "msg": "Device /dev/sdb not 
found."}
failed: [gfs2.gluster.private] (item={u'vgname': u'gluster_vg_sdb', u'pvname': 
u'/dev/sdb'}) => {"ansible_loop_var": "item", "changed": false, "item": 
{"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"}, "msg": "Device /dev/sdb not 
found."}

NO MORE HOSTS LEFT *

NO MORE HOSTS LEFT *

PLAY RECAP *
gfs1.gluster.private   : ok=10   changed=0unreachable=0failed=1
skipped=16   rescued=0ignored=0   
gfs2.gluster.private   : ok=11   changed=1unreachable=0failed=1
skipped=16   rescued=0ignored=0   
gfs3.gluster.private   : ok=10   changed=0unreachable=0failed=1
skipped=16   rescued=0ignored=0   
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SWE6XO34AL7NY65IXTFDT2NOFPH6M6FC/


[ovirt-users] Re: Gluster setup

2019-11-15 Thread rob . downer
yes

see below...

still getting "FQDN is not added in known_hosts" on the Additional Hosts screen...

Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter 
out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are 
prompted now it is to install the new keys
root@gfs3.gluster.private's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'gfs3.gluster.private'"
and check to make sure that only the key(s) you wanted were added.
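
The known_hosts warning is separate from the key copy above; the wizard appears to
check root's known_hosts on the deployment host. A sketch that pre-populates it for
the three backend names (ssh-keyscan assumed available):

  ssh-keyscan -H gfs1.gluster.private gfs2.gluster.private gfs3.gluster.private >> /root/.ssh/known_hosts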
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/B2UCARQ4TQI3OJ52AGXQH5FRRV2QYBZ4/


[ovirt-users] Re: Gluster setup

2019-11-15 Thread rob . downer
[root@ovirt3 ~]# nmcli general hostname
ovirt3.kvm.private
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IGMFTFC44JBZLWGZLU74UTNRZA6UIT2E/


[ovirt-users] Re: Gluster setup

2019-11-15 Thread rob . downer
So using the FQDN or IP I get this:
task path: /usr/share/cockpit/ovirt-dashboard/ansible/hc_wizard.yml:4
fatal: [10.10.45.13]: UNREACHABLE! => {"changed": false, "msg": "Failed to 
connect to the host via ssh: Permission denied 
(publickey,gssapi-keyex,gssapi-with-mic,password).", "unreachable": true}
fatal: [10.10.45.11]: UNREACHABLE! => {"changed": false, "msg": "Failed to 
connect to the host via ssh: Permission denied 
(publickey,gssapi-keyex,gssapi-with-mic,password).", "unreachable": true}
fatal: [10.10.45.12]: UNREACHABLE! => {"changed": false, "msg": "Failed to 
connect to the host via ssh: Permission denied 
(publickey,gssapi-keyex,gssapi-with-mic,password).", "unreachable": true}

but on the CLI it's reachable; looks like a password issue...?
[root@ovirt3 ~]# ping 10.10.45.12
PING 10.10.45.12 (10.10.45.12) 56(84) bytes of data.
64 bytes from 10.10.45.12: icmp_seq=1 ttl=64 time=0.159 ms
64 bytes from 10.10.45.12: icmp_seq=2 ttl=64 time=1.42 ms
64 bytes from 10.10.45.12: icmp_seq=3 ttl=64 time=0.157 ms
64 bytes from 10.10.45.12: icmp_seq=4 ttl=64 time=0.141 ms
64 bytes from 10.10.45.12: icmp_seq=5 ttl=64 time=0.140 ms
64 bytes from 10.10.45.12: icmp_seq=6 ttl=64 time=0.172 ms
^C
--- 10.10.45.12 ping statistics ---
6 packets transmitted, 6 received, 0% packet loss, time 5001ms
rtt min/avg/max/mdev = 0.140/0.366/1.429/0.475 ms
[root@ovirt3 ~]# 
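
"Permission denied (publickey,...)" means the hosts answer but the deployment host's
root SSH key is not authorised on them; ping only proves reachability. A sketch of
seeding passwordless root SSH from the deployment host, assuming the 10.10.45.x
addresses above are the ones given to the wizard:

  ssh-keygen -t rsa                    # only if root has no key yet
  for h in 10.10.45.11 10.10.45.12 10.10.45.13; do
      ssh-copy-id root@$h
  done
  ssh root@10.10.45.11 true            # should return with no password prompt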
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UBPCM6YVCW5LISQRM72RPPEGQ3AZR4YD/


[ovirt-users] Gluster setup

2019-11-15 Thread rob . downer
I have set up a 3 node system.

Gluster has its own backend network and I have tried entering the FQDN hosts 
via ssh as follows...
gfs1.gluster.private    10.10.45.11
gfs2.gluster.private    10.10.45.12
gfs3.gluster.private    10.10.45.13

I entered at /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
gfs1.gluster.private    10.10.45.11
gfs2.gluster.private    10.10.45.12
gfs3.gluster.private    10.10.45.13

but on the CLI 

host gfs1.gluster.private

returns 

[root@ovirt1 etc]# host gfs1.gluster.private
Host gfs1.gluster.private not found: 3(NXDOMAIN)
[root@ovirt1 etc]# 

I guess this is the wrong hosts file; nsswitch.conf lists files first for
lookup...
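
Two notes, offered as a sketch rather than a definitive diagnosis: hosts(5) expects the
address in the first column, so the entries above would normally be written the other
way round, and host(1) queries DNS directly, so it never consults /etc/hosts; getent
does.

  10.10.45.11   gfs1.gluster.private
  10.10.45.12   gfs2.gluster.private
  10.10.45.13   gfs3.gluster.private

  getent hosts gfs1.gluster.private   # honours the files/dns order in nsswitch.conf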
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ILABGNZFOH5BP6JW7VZIEA4GIZE7DXUT/


[ovirt-users] Gluster Network Issues....

2019-11-14 Thread rob . downer
I have set up 3 SuperMicros with oVirt Node and it is all pretty sweet.

FQDNs are set up for the LAN, and after setup I have also enabled a second NIC with an
FQDN for a Gluster network.

The issue is that the second ports seem to be unavailable for network access by ping or
login. If you log in as root, the banner says the ports are available for login, and
node check comes back fine.

I have IPMI set up on the systems as well for access.

Am I missing something?

I realise Gluster should be on a separate LAN and will put it on a 10GbE network, but
I'm just testing.

I have the latest stable build.

Any help would be appreciated.
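
A quick way to confirm whether the second NIC actually has an address, a carrier and a
route, sketched with eno2 standing in for whatever the Gluster-side interface is called
and 10.10.45.12 for a peer's Gluster-side address:

  nmcli device status
  ip addr show eno2
  ip route
  ping -I eno2 10.10.45.12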
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IWBYBLLUXOM7XMDSKZ7EDZRJVKVUQ47V/