[ovirt-users] Re: Single Node Hyperconverged - Failing Engine Deployment - Network setup?

2021-02-14 Thread jhamiltonactually
Hi

This is as far as I could get today - frustrating!  I can't connect to 
ovirt-gluster.whichelo.com after install. I can now ping the IP successfully, 
and everything is correct in DNS.



[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Execute just a specific set of steps]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Force facts gathering]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Wait for the storage interface to be up]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Check local VM dir stat]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Enforce local VM dir existence]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : include_tasks]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Obtain SSO token using username/password credentials]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch host facts]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch cluster ID]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch cluster facts]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch Datacenter facts]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch Datacenter ID]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch Datacenter name]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Add NFS storage domain]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Add glusterfs storage domain]
[ ERROR ] ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail is "[Failed to fetch Gluster Volume List]". HTTP response code is 400.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Failed to fetch Gluster Volume List]\". HTTP response code is 400."}
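When "Add glusterfs storage domain" fails with "Failed to fetch Gluster Volume List", the engine asked the host for the Gluster volume list and got nothing back, which fits the symptom of ovirt-gluster.whichelo.com being unreachable after install. A small sketch of what the wizard assembles at this step, using names from this thread (the volume name "engine" is an assumption; the commented commands are standard Gluster CLI to run on the host):

```shell
# Names taken from this thread; adjust for your own setup.
GLUSTER_FQDN="ovirt-gluster.whichelo.com"
VOLUME="engine"    # hypothetical: the hosted-engine volume name

# The storage domain connection is registered as "<fqdn>:/<volume>"
STORAGE_CONNECTION="${GLUSTER_FQDN}:/${VOLUME}"
echo "$STORAGE_CONNECTION"

# On the host itself, these confirm glusterd can actually serve that list:
#   systemctl status glusterd
#   gluster volume list
#   gluster volume status
# If "gluster volume list" hangs or errors on the host, the engine's
# HTTP 400 above is the expected downstream symptom.
```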
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7ZCXBA7QAKCNKJYYM3LZUPGDTCDA3MAC/


[ovirt-users] Re: Single Node Hyperconverged - Failing Engine Deployment - Network setup?

2021-02-13 Thread jhamiltonactually
Hi
Sorry about the delay! Here is what I did - which for some reason got further 
than before, but still failed at the last hurdle! Maybe just the exercise of 
commenting on my workflow helped me fix errors I'd not seen before - I don't 
know! Have any packages been updated that I don't know about?

https://docs.google.com/document/d/1SslZJYokA-Ho8SN8Vr5IzTnItuIDA6ACjilvLgCLcQ0/edit?usp=sharing

Here are the logs from the failed gluster deploy, before I commented out the 
filter in lvm.conf 
https://docs.google.com/document/d/1uvICx4NBMjC68hhPN98f-LAykyHUPvX5ipr-dUKk32s/edit?usp=sharing

I will update this thread as it may be of use to some. Is there anywhere I 
should post it that others might find easier? Any advice on this final hurdle 
would be very gratefully received!

I have to go do some family stuff, but will update later if I find a solution!
___
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/T5PG64I23UALXBCE2HAW7OKO4TWYTFH5/


[ovirt-users] Re: Single Node Hyperconverged - Failing Engine Deployment - Network setup?

2021-02-11 Thread jhamiltonactually
Many thanks Didi - I will do this later; I think it may be of benefit to others.

I'll start a fresh install on my server and document a detailed flow (I'm 
basically following the Ovirt flow). I'll try to get the logs into my gdrive so 
as not to leave messy posts.

I've got my desktop plugged into the same switch as the server. The switch is 
running dnsmasq and serves DNS on 192.168.0.2. The network gateway and DHCP 
server is 192.168.0.1, which is where my internet comes in. DHCP runs from 
192.168.0.3-30.

I assign static IPs for the server services from 192.168.0.40 onwards.  I have 
2 x 2-port 1Gb NICs (4 ports total) on the server; 2 are connected to the 
router.

I'll post everything else later today. Most grateful!
___
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/POTQ4DBRFMDX6WTAKAZHO72GAFPEHAAC/


[ovirt-users] Single Node Hyperconverged - Failing Engine Deployment - Network setup?

2021-02-10 Thread jhamiltonactually
Ovirt newbie here - installing v 4.4.4 (well, trying to!)

Following my post about the failing Gluster setup, I can't get the self-hosted 
engine to deploy.  I'm installing on my HP DL380p G6. I have a 2-disk 170GB 
RAID 0 for the OS and a 6 x 330GB disk RAID 5 for Gluster. DNS is all set up, 
but I'm realizing I'm missing something here.  I have a feeling my problems with 
Gluster/Engine were caused by incorrect network setup.  Seems I'm not the 1st 
to fall at this hurdle - some people in the community are saying RH have made 
it deliberately difficult to get these 'free' set-ups working.  It does feel 
a little like that!

Most instructions just say that a 'self-hosted, hyperconverged, single node 
setup requires 2 NICs' - and that's about it! That is the entirety of the 
'Networking Pre-requisites'!

I've had some help on here and Reddit which eventually led me to a solution 
some others had had to use - editing lvm.conf to make sure my drive (sdb) was 
being blacklisted correctly.
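For reference, the kind of lvm.conf change being described is a device filter in the devices section. This is only a sketch - the device paths here are taken from this thread and the right filter depends entirely on your disk layout (commenting out an existing restrictive filter line, as done elsewhere in this thread, achieves the same end of letting LVM see the disk directly):

```
# /etc/lvm/lvm.conf - sketch only; adjust devices to your system
devices {
    # Accept the OS disk and the Gluster brick disk explicitly,
    # reject everything else (so sdb is not claimed via multipath)
    filter = [ "a|^/dev/sda.*|", "a|^/dev/sdb$|", "r|.*|" ]
}
```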

So, with that, Gluster installs, with the following DNS settings:
The KVM host (ovirt-kvm.whichelo.com) is fixed IP 192.168.0.40 on my 1st NIC 
(Enp0s7);
ovirt-engine.whichelo.com - 192.168.0.50;
and ovirt-gluster.whichelo.com on 192.168.0.60 - I created a VLAN linked to the 
NIC I want to use for Gluster (Enp0s8) and the Gluster install worked.

So now the hosted engine won't install, and I'm pretty sure it's because I 
don't know how to set the network up properly. I'm seeing virbr0 coming up with 
different IPs - sometimes 192.168.1.1, sometimes 192.168.222.1.  From what I've 
read, this is something to do with the Engine's network, but I really don't know!

Am I still missing something? I can't find any decent instructions on how to do 
this - how (exactly) to configure the 2 'minimum required' NICs?

I came to Ovirt after realizing Oracle were dumping their own virtualization 
platform in favour of KVM.  Couldn't set it up from Oracle so moved to Red Hat 
- that was a no, and I was at the point where it made most sense to just run an 
Ovirt node for my KVM.  I've got much further with Ovirt, but instructions do 
not work as easily as they look! (For example, 'just click on single node 
hyperconverged', which didn't work straight off the bat until removing that LVM 
filter - which took days to find out!)

The single node HC server is a great match for a home lab/server like mine. I 
don't want another server, let alone another 2!  I'm doing this partly as 
hobby, but also to update my skills during lockdown. Surely if we can get 
people using things like this at home, they're more likely to end up using it 
for work one day?

Any help, or just pointing in the right direction, gratefully received!  
Hopefully I'm a bit clearer here. Happy to provide any logs or anything 
else.

Thank you!
___
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UDPW5EVTNWSBRLHAS6SOBPQXJWM5OZ3J/


[ovirt-users] Re: Single Node Hyperconverged - Failing Gluster Setup

2021-02-09 Thread jhamiltonactually
Thanks - for some reason this part of it now makes sense!
___
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JU53JWFMY5EZC24BCRVWJUZTMSTFYMQP/


[ovirt-users] Re: Single Node Hyperconverged - Failing Gluster Setup

2021-02-09 Thread jhamiltonactually
Hi
I've learned a lot today from reading, but it's still not smooth sailing at all!

I'm just doing a fresh install, as I did find one way out of my above problem - 
commenting out the filters in /etc/lvm/lvm.conf - but I couldn't see why this 
worked!?

I'm going to clean sdb once the install finishes, then look at the lsblk output 
to check all is well. I had found that sdb was being tagged as type mpath, 
which I managed to fix with wipefs.

So hopefully that will be ok this time round.

What I have found is that I'm still having problems working out the logic of 
the network.

The KVM host (ovirt-kvm.whichelo.com) is fixed IP 192.168.0.40 on my 1st NIC.
Also in my DNS are ovirt-engine.whichelo.com - 192.168.0.50,
and ovirt-gluster.whichelo.com on 192.168.0.60.

I discovered that adding Gluster's IP to that adapter allowed me to move on 
with provisioning, but if I add the engine's (.50) I get the 'he_FQDN resolves 
to this host' message.
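That "he_fqdn resolves to this host" message is expected behaviour: the engine VM needs its own address, so deployment refuses to continue if the engine FQDN resolves to an IP already configured on the host. A minimal sketch of that logic, using the addresses from this thread as assumptions:

```shell
# Addresses taken from this thread (assumptions; adjust to your network)
HOST_IPS="192.168.0.40 192.168.0.60"   # IPs configured on the KVM host's NICs
ENGINE_IP="192.168.0.50"               # what ovirt-engine.whichelo.com resolves to

# The deployment-time check, roughly: the engine IP must not already
# be assigned to the host itself.
conflict=no
for ip in $HOST_IPS; do
    [ "$ip" = "$ENGINE_IP" ] && conflict=yes
done
echo "conflict=$conflict"   # prints "conflict=no"
```

With .50 left off the host's adapters, the check passes; manually adding it to an adapter reproduces the error, so the engine IP should only exist in DNS, not on the host.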

I'm just working out how to use the NICs (the machine has 4), bonds and VLANs.

Installation always stalls at "waiting for host to be up" - I'm guessing my 
dodgy networking is causing problems.

Any help for a newbie would be very welcome! Is there any way to post 
screenshots?

Thank you so much for trying to help me out! I've found people in the ovirt 
community very helpful.
___
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TUCMED2CEZBLATO7JZIBG6TWUBSBCLZ5/


[ovirt-users] Single Node Hyperconverged - Failing Gluster Setup

2021-02-09 Thread jhamiltonactually
Ovirt newbie here - using v 4.4.4

Have been trying for days to get this installed on my HP DL380p G6. I have a 
2-disk 170GB RAID 0 for the OS and a 6 x 330GB disk RAID 5 for Gluster. DNS is 
all set up (that took some working out), but I just can't fathom what's (not) 
happening here. Block size is returned as 512.

I've had some help on Reddit where I've been told that Ovirt is seeing my 
single local disk as a multipath device, which it is not!?  I think I removed 
the flag, but it still fails here.

So, the Gluster install fails quite early on, though it carries on creating 
all the volumes (with default settings) but then gives me the 'Deployment 
Failed' message :( Here is where it fails.

Any help gratefully received!

TASK [fail]
task path: /usr/share/cockpit/ovirt-dashboard/ansible/hc_wizard.yml:62

skipping: [ovirt-gluster.whichelo.com] => (item=[{'cmd': 'blockdev --getss /dev/sdb | grep -Po -q "512" && echo true || echo false\n', 'stdout': 'true', 'stderr': '', 'rc': 0, 'start': '2021-02-07 13:21:10.237701', 'end': '2021-02-07 13:21:10.243111', 'delta': '0:00:00.005410', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'blockdev --getss /dev/sdb | grep -Po -q "512" && echo true || echo false\n', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['true'], 'stderr_lines': [], 'failed': False, 'item': {'vgname': 'gluster_vg_sdb', 'pvname': '/dev/sdb'}, 'ansible_loop_var': 'item'}, {'cmd': 'blockdev --getss /dev/sdb | grep -Po -q "4096" && echo true || echo false\n', 'stdout': 'false', 'stderr': '', 'rc': 0, 'start': '2021-02-07 13:21:14.760897', 'end': '2021-02-07 13:21:14.766395', 'delta': '0:00:00.005498', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'blockdev --getss /dev/sdb | grep -Po -q "4096" && echo true || echo false\n', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['false'], 'stderr_lines': [], 'failed': False, 'item': {'vgname': 'gluster_vg_sdb', 'pvname': '/dev/sdb'}, 'ansible_loop_var': 'item'}]) => {"ansible_loop_var": "item", "changed": false, "item": [{"ansible_loop_var": "item", "changed": true, "cmd": "blockdev --getss /dev/sdb | grep -Po -q \"512\" && echo true || echo false\n", "delta": "0:00:00.005410", "end": "2021-02-07 13:21:10.243111", "failed": false, "invocation": {"module_args": {"_raw_params": "blockdev --getss /dev/sdb | grep -Po -q \"512\" && echo true || echo false\n", "_uses_shell": true, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "stdin_add_newline": true, "strip_empty_ends": true, "warn": true}}, "item": {"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"}, "rc": 0, "start": "2021-02-07 13:21:10.237701", "stderr": "", "stderr_lines": [], "stdout": "true", "stdout_lines": ["true"]}, {"ansible_loop_var": "item", "changed": true, "cmd": "blockdev --getss /dev/sdb | grep -Po -q \"4096\" && echo true || echo false\n", "delta": "0:00:00.005498", "end": "2021-02-07 13:21:14.766395", "failed": false, "invocation": {"module_args": {"_raw_params": "blockdev --getss /dev/sdb | grep -Po -q \"4096\" && echo true || echo false\n", "_uses_shell": true, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "stdin_add_newline": true, "strip_empty_ends": true, "warn": true}}, "item": {"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"}, "rc": 0, "start": "2021-02-07 13:21:14.760897", "stderr": "", "stderr_lines": [], "stdout": "false", "stdout_lines": ["false"]}], "skip_reason": "Conditional result was False"}


hc_wizard.yml excerpt:

- name: Check if block device is 4KN
  shell: >
    blockdev --getss {{ item.pvname }} | grep -Po -q "4096" && echo true || echo false
  register: is4KN
  with_items: "{{ gluster_infra_volume_groups }}"

- fail:                                   # <-- this is line 62
    msg: "Mix of 4K and 512 Block devices are not allowed"
  with_nested:
    - "{{ is512.results }}"
    - "{{ is4KN.results }}"
  when: item[0].stdout|bool and item[1].stdout|bool

# logical block size of 512 bytes. To disable the check set
# gluster_features_512B_check to false. DELETE the below task once
# OVirt limitation is fixed
- name: Check if disks have logical block size of 512B
  command: blockdev --getss {{ item.pvname }}
  register: logical_blk_size
  when: gluster_infra_volume_groups is defined and
        item.pvname is not search("/dev/mapper") and
        gluster_features_512B_check|default(false)
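The checks above boil down to reading each device's logical sector size and comparing it against 512 and 4096. A rough shell equivalent, with the blockdev call stubbed to the value this server reports (running it for real needs a block device and root):

```shell
# On a real host this would be: SECTOR_SIZE=$(blockdev --getss /dev/sdb)
SECTOR_SIZE=512   # the value this server reports, per the log above

is512=false
is4KN=false
[ "$SECTOR_SIZE" = "512" ]  && is512=true
[ "$SECTOR_SIZE" = "4096" ] && is4KN=true

# The wizard's fail task fires only when both a 512-byte and a 4K-native
# device appear across the device list, i.e. a mix of sector sizes.
if [ "$is512" = "true" ] && [ "$is4KN" = "true" ]; then
    echo "Mix of 4K and 512 Block devices are not allowed"
else
    echo "sector-size check passed (is512=$is512 is4KN=$is4KN)"
fi
```

Since this host reports 512 everywhere, that fail task is skipped ("Conditional result was False"), which suggests the deployment failure lies elsewhere, not in this check.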

Can anyone help?