[ovirt-users] Re: Ovirt Hyperconverged Storage

2024-04-29 Thread Thomas Hoberg
And I might have misread where your problems actually are...

Because oVirt was born on SAN but tries to be storage agnostic, it creates its 
own overlay abstraction, a block layer that is then managed within oVirt even 
when you use NFS or GlusterFS underneath.

"The ISO domain" has actually been deprecated and ISO images can be put into 
any domain type (e.g. also data).

But they still have to be uploaded to that domain via the management engine 
GUI (or the API); you can't just copy ISO images into the files and 
directories oVirt creates and expect them to be visible to the GUI or to the 
VMs.
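
For reference, a rough sketch of such an upload from the command line, going 
through roughly the same image-transfer API the GUI uses (engine hostname, 
credentials, disk id and size below are placeholders; waiting for the disk to 
unlock and error handling are omitted):

# 1. create a raw disk with content type "iso" in an existing data domain
curl -sk -u admin@internal:PASSWORD -H 'Content-Type: application/xml' \
  -d '<disk><name>centos7.iso</name><format>raw</format><content_type>iso</content_type><provisioned_size>1073741824</provisioned_size><storage_domains><storage_domain><name>data</name></storage_domain></storage_domains></disk>' \
  https://engine.example.com/ovirt-engine/api/disks

# 2. start an upload transfer for the disk id returned above
curl -sk -u admin@internal:PASSWORD -H 'Content-Type: application/xml' \
  -d '<image_transfer><disk id="DISK_ID"/><direction>upload</direction></image_transfer>' \
  https://engine.example.com/ovirt-engine/api/imagetransfers

# 3. send the bytes to the transfer_url/proxy_url from the response, then finalize
curl -sk -X PUT --upload-file centos7.iso 'TRANSFER_URL'
curl -sk -u admin@internal:PASSWORD -H 'Content-Type: application/xml' \
  -d '<action/>' \
  https://engine.example.com/ovirt-engine/api/imagetransfers/TRANSFER_ID/finalize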


[ovirt-users] Re: Ovirt Hyperconverged Storage

2024-04-29 Thread Thomas Hoberg
Hi Tim,

HA, HCI and failover either require or at least benefit from consistent storage.

The original NFS reduces the risk of inconsistency to single files; Gluster puts 
the onus of consistency mostly on the clients, and I guess Ceph is similar.

iSCSI has been described as a bit of the worst of everything in storage, and I can 
appreciate that view in an HA scenario because it doesn't help with consistency.

Of course, its block layer abstraction isn't really that different from SAN or 
NFS 4.x object storage.

I last experimented with iSCSI 20 years ago, mostly because it seemed so great 
for booting even less cooperative diskless hosts than Sun workstations over the 
network.

But if I had a reliable TrueNAS and wanted to run oVirt, I'd just go with NFS.
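
If you do go with NFS, remember the export has to be owned by vdsm:kvm (uid/gid 
36) or attaching the domain will fail. A minimal sketch for a plain Linux NFS 
server (paths and the client network are placeholders; on TrueNAS you set the 
equivalent through its UI):

mkdir -p /exports/ovirt-data
chown 36:36 /exports/ovirt-data
chmod 0755 /exports/ovirt-data
echo '/exports/ovirt-data 192.168.1.0/24(rw,sync,no_subtree_check)' >> /etc/exports
exportfs -ra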

AFAIK oVirt was born on SAN, but with the SAN outside of oVirt's purview. So if your 
iSCSI setup behaves like a SAN, oVirt should be easy to get going, but I've 
never tried it myself.

And the lack of tried and tested tutorials or videos from 20 different sources 
might be one reason oVirt didn't quite push out everybody else.


[ovirt-users] Re: Ovirt Hyperconverged Storage

2024-04-25 Thread Tim Walsh
I'm trying to set up an HA / hyperconverged failover cluster using a couple of 
4.3.10 nodes and a TrueNAS/iSCSI box (modeled in VMs on a Server 2019 Hyper-V 
system to work out the process before building it all out on bare metal).

Are there any good articles on how to get that set up?  I seem to have added 
the storage in a "DATA" storage domain but can't figure out how to add the 
"ISO" domain.  It also creates a ton of folders (like IDS, inbox, leases, 
master, etc.) under:

── mnt
├── blockSD
│   ├── 5727577b-1647-4664-9a70-84a03873d009
│   │   ├── dom_md

I can only imagine VMs would then be created in this "images" folder..??

│   │   └── images
│   │   ├── 0e8cc064-969d-4760-a9da-8e0016f35477


I have built a complete setup running local storage (complete with Data, ISO, 
and Export storage domains), but now I want to figure out the hyperconverged 
failover cluster model.

Thanks in advance,

Tim Walsh






[ovirt-users] Re: Ovirt Hyperconverged

2024-04-17 Thread Thomas Hoberg
I've tried to re-deploy oVirt 4.3 on CentOS 7 servers because I had managed to 
utterly destroy an HCI farm, from which most VMs had already migrated to Oracle's 
variant of RHV 4.4 on Oracle Linux. I guess I grew a bit careless towards its end.

Mostly it was just an academic exercise to see if it could be resurrected... I 
was much happier with the Oracle variant anyway.

And I've hit similar issues all over the place: the Ansible scripts and/or the 
Python packages they interact with are utterly broken, with years of 
completely disjoint bug fixing now behind them.

The underlying CentOS 7 packages continue in maintenance (some more weeks to 
go..), but the oVirt 4.3 on top has been unmaintained for years.

Since these are just sanity checks, I deleted all of them, one after the other 
(and there are lots of them!), and I eventually got it to work again.
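
For anyone attempting the same, a hedged sketch of how those check tasks can be 
located (the paths are guesses for a 4.3-era host and may differ on yours):

# hypothetical locations -- adjust to wherever the setup's Ansible roles live
grep -rln 'failed_when\|fail:' /usr/share/ovirt-hosted-engine-setup/ /usr/share/ansible/roles/ 2>/dev/null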

I don't have a single VM on it, though, because you can't trust it, the hardware 
is ancient, and it really was just a finger exercise at that stage. With CentOS 
7 going out of support now, it's really like messing with a corpse.

I'm currently operating Oracle's 4.4 variant running on their Linux, too, which 
still has Gluster-based HCI built in, even if they don't mention it at all.

Just make sure you switch their Unbreakable Enterprise Kernel for the Red Hat 
Compatible Kernel everywhere, otherwise you'll risk all kinds of nasties.

It's been way more stable than oVirt 4.3 ever was, but that doesn't mean it's 
"enterprise": that was always one big fat exaggeration, wishful thinking, 
whatever.

And don't fall for their 4.5 variant, which came out at the end of last year: that 
one doesn't support HCI any more and actually seems to fail withOUT their 
Enterprise Linux kernels.

And no, it doesn't run on EL9 either; that might take another year or so, as 
Oracle's oVirt implementation is almost a year behind upstream oVirt at the moment.


[ovirt-users] Re: Ovirt Hyperconverged

2024-04-15 Thread Ricardo OT
Hi,
I have recently tried to install the hosted engine on a node freshly installed 
with Rocky Linux 8.9, with GlusterFS and with the nightly oVirt master snapshot 
repository, and I have also tried to do the same with the Oracle distribution, 
but I have not been able to get the hosted-engine installation to work. 
This worked before; now I have only managed to install the engine as 
standalone. Does anyone know what the reason could be?

:(

greep -i error 
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20240415201836-r0gt54.log
-bash: greep: no se encontró la orden
[root@host-01 ~]# grep -i error 
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20240415201836-r0gt54.log
2024-04-15 20:18:36,751+0200 DEBUG otopi.context context.dumpEnvironment:775 
ENV BASE/error=bool:'False'
2024-04-15 20:18:37,219+0200 DEBUG otopi.context context.dumpEnvironment:775 
ENV BASE/error=bool:'False'
errorlevel = 3
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 54291
socket.gaierror: [Errno -2] Nombre o servicio desconocido
2024-04-15 20:21:25,754+0200 DEBUG otopi.context context.dumpEnvironment:775 
ENV BASE/error=bool:'False'
2024-04-15 20:24:45,966+0200 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 
{'changed': False, 'stdout': '', 'stderr': "error: Falló al obtener la red 
'default'\nerror: No se ha encontrado la red: No existe una red que coincida 
con el nombre 'default'", 'rc': 1, 'cmd': ['virsh', 'net-undefine', 'default'], 
'start': '2024-04-15 20:24:45.653112', 'end': '2024-04-15 20:24:45.737169', 
'delta': '0:00:00.084057', 'msg': 'non-zero return code', 'invocation': 
{'module_args': {'_raw_params': 'virsh net-undefine default', '_uses_shell': 
False, 'warn': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 
'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': 
None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["error: Falló al 
obtener la red 'default'", "error: No se ha encontrado la red: No existe una 
red que coincida con el nombre 'default'"], '_ansible_no_log': None}
2024-04-15 20:24:46,067+0200 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 
ignored: [localhost]: FAILED! => {"changed": false, "cmd": ["virsh", 
"net-undefine", "default"], "delta": "0:00:00.084057", "end": "2024-04-15 
20:24:45.737169", "msg": "non-zero return code", "rc": 1, "start": "2024-04-15 
20:24:45.653112", "stderr": "error: Falló al obtener la red 'default'\nerror: 
No se ha encontrado la red: No existe una red que coincida con el nombre 
'default'", "stderr_lines": ["error: Falló al obtener la red 'default'", 
"error: No se ha encontrado la red: No existe una red que coincida con el 
nombre 'default'"], "stdout": "", "stdout_lines": []}
2024-04-15 20:24:50,986+0200 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 
{'changed': False, 'stdout': '', 'stderr': "error: Falló al identificarse la 
red default como autoiniciable\nerror: Falló al crear enlace simbólico 
'/etc/libvirt/qemu/networks/autostart/default.xml' en 
'/etc/libvirt/qemu/networks/default.xml': El fichero ya existe", 'rc': 1, 
'cmd': ['virsh', 'net-autostart', 'default'], 'start': '2024-04-15 
20:24:49.654500', 'end': '2024-04-15 20:24:50.741081', 'delta': 
'0:00:01.086581', 'msg': 'non-zero return code', 'invocation': {'module_args': 
{'_raw_params': 'virsh net-autostart default', '_uses_shell': False, 'warn': 
False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 
'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': 
None}}, 'stdout_lines': [], 'stderr_lines': ['error: Falló al identificarse la 
red default como autoiniciable', "error: Falló al crear enlace simbólico 
'/etc/libvirt/qemu/networks/autostart/default.xml' en 
'/etc/libvirt/qemu/networks/default.xml': El fichero ya existe"], 
'_ansible_no_log': None}
2024-04-15 20:24:51,087+0200 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 
ignored: [localhost]: FAILED! => {"changed": false, "cmd": ["virsh", 
"net-autostart", "default"], "delta": "0:00:01.086581", "end": "2024-04-15 
20:24:50.741081", "msg": "non-zero return code", "rc": 1, "start": "2024-04-15 
20:24:49.654500", "stderr": "error: Falló al identificarse la red default como 
autoiniciable\nerror: Falló al crear enlace simbólico 
'/etc/libvirt/qemu/networks/autostart/default.xml' en 
'/etc/libvirt/qemu/networks/default.xml': El fichero ya existe", 
"stderr_lines": ["error: Falló al identificarse la red default como 
autoiniciable", "error: Falló al crear enlace simbólico 
'/etc/libvirt/qemu/networks/autostart/default.xml' en 
'/etc/libvirt/qemu/networks/default.xml': El fichero ya existe"], "stdout": "", 
"stdout_lines": []}
2024-04-15 20:53:01,873+0200 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 
{'changed': True, 'stdout': "[ INFO  
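
The two ignored failures above concern libvirt's 'default' network: the Spanish 
messages translate to "Failed to get network 'default' / no network matching the 
name 'default'" and "Failed to mark network default as autostarted: failed to 
create symlink ... file already exists". The earlier socket.gaierror line means 
"Name or service not known", i.e. a name-resolution failure. A hedged sketch of 
inspecting and recreating the default network by hand, assuming libvirt's stock 
default.xml sits at the usual path:

virsh net-list --all                                       # is "default" defined at all?
rm -f /etc/libvirt/qemu/networks/autostart/default.xml     # the stale symlink reported above
virsh net-define /usr/share/libvirt/networks/default.xml   # assumed path on EL8-based hosts
virsh net-autostart default
virsh net-start default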

[ovirt-users] Re: oVirt/Hyperconverged issue

2021-09-30 Thread Peje Anna
Thank you for the clarification.

On Tue, Sep 28, 2021 at 9:05 PM Jayme  wrote:

> With 4 servers only three would be used for hyperconverged storage, the
> 4th would be added as a compute node which would not participate in
> GlusterFS storage.
>
> To expand hyper-converged to more than 3 servers you have to add hosts in
> multiples of 3
>
> On Tue, Sep 28, 2021 at 9:49 AM  wrote:
>
>> Kindly also share: for the latest oVirt 4.4, is it possible to scale
>> hyperconverged up to 4 nodes, or can it only use 3 nodes?
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/SG725S4G57UAVBTBV5QLBO7V2AOF2MCO/
>>
>


[ovirt-users] Re: oVirt/Hyperconverged issue

2021-09-28 Thread Jayme
With 4 servers, only three would be used for hyperconverged storage; the 4th
would be added as a compute node which would not participate in GlusterFS
storage.

To expand hyperconverged storage to more than 3 servers you have to add hosts in
multiples of 3.
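
At the Gluster level that expansion looks roughly like the sketch below, assuming
a replica-3 volume named vmstore and hypothetical host/brick names (the oVirt UI
wraps essentially the same steps):

gluster peer probe host4.example.com
gluster peer probe host5.example.com
gluster peer probe host6.example.com
gluster volume add-brick vmstore replica 3 \
    host4.example.com:/gluster_bricks/vmstore/vmstore \
    host5.example.com:/gluster_bricks/vmstore/vmstore \
    host6.example.com:/gluster_bricks/vmstore/vmstore
gluster volume rebalance vmstore start    # optional: spread existing data onto the new bricks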

On Tue, Sep 28, 2021 at 9:49 AM  wrote:

> Kindly also share: for the latest oVirt 4.4, is it possible to scale
> hyperconverged up to 4 nodes, or can it only use 3 nodes?
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/SG725S4G57UAVBTBV5QLBO7V2AOF2MCO/
>


[ovirt-users] Re: oVirt/Hyperconverged issue

2021-09-28 Thread topoigerm
Kindly also share: for the latest oVirt 4.4, is it possible to scale hyperconverged 
up to 4 nodes, or can it only use 3 nodes?


[ovirt-users] Re: oVirt / Hyperconverged

2021-09-28 Thread Strahil Nikolov via Users
Yes, you can use it with 4 nodes. 
You have to check what has caused the crash before starting over or losing the 
logs.
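
A hedged sketch of what to collect before wiping the nodes (standard
oVirt/Gluster log locations):

mkdir -p /root/crash-logs
cp -a /var/log/glusterfs /root/crash-logs/               # brick, self-heal and mount logs
cp -a /var/log/vdsm /root/crash-logs/                    # host-side agent logs
cp -a /var/log/ovirt-hosted-engine-ha /root/crash-logs/  # if the node ran the hosted engine
journalctl -b -1 > /root/crash-logs/previous-boot.log    # previous boot (needs a persistent journal)
# or simply run sosreport, which bundles most of the above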

Best Regards,
Strahil Nikolov






On Tuesday, 28 September 2021, 09:56:30 GMT+3,  
wrote: 





I have 4 servers of identical hardware. The documentation says "you need 3", 
not "you need 3 or more"; is it possible to run hyperconverged with 4 servers? 
Currently all 4 node servers have crashed after the 4th node tried 
joining the hyperconverged 3-node cluster. Kindly advise.
FYI, I'm currently trying to reinstall the OS on all of them due to the mentioned 
incident.

/BR
Faizal


[ovirt-users] Re: oVirt Hyperconverged question

2020-08-10 Thread Benedetto Vassallo

 Thank you very much.
I am planning to run a 3-node GlusterFS cluster + 3 compute-only nodes 
that will be permanently running.

Best regards.

Def. Quota tho...@hoberg.net:

I have done that, even added five nodes that contribute a separate  
Gluster file system using dispersed (erasure codes, more efficient)  
mode.


But in another cluster with such a 3-node-HCI base, I had a lot (3  
or 4) of compute nodes, that were actually dual-boot or just shut  
off when not used: Even used the GUI to do that properly.


This caused strange issues as I shut down all three compute-only  
nodes: Gluster reported loss of quorum, and essentially the entire  
HCI lost storage, even if these compute nodes didn't add bricks to  
the Gluster at all. In fact the compute nodes probably shouldn't  
have even participated in the Gluster, since they were only clients,  
but the Cockpit wizard added them anyway.


I believe this is because HCI is designed to support adding extra  
nodes in sets of three e.g. for a 9-node setup, which should be  
really nice with 7+2 disperse encoding.


I didn't dare reproduce the situation intentionally, but if you  
should come across this, perhaps you can document and report it. If  
the (most of) extra nodes are permanently running, you don't need to  
worry.


In terms of regaining control, you mostly have to make sure you turn  
the missing nodes back on, oVirt can be astonishingly resilient. If  
you then remove the nodes prior to shutdown, the quorum issue goes  
away.


 --
Benedetto Vassallo
Responsabile U.O. Sviluppo e manutenzione dei sistemi
Sistema Informativo di Ateneo
Università degli studi di Palermo

Phone: +3909123860056
Fax: +3909123860880


[ovirt-users] Re: oVirt Hyperconverged question

2020-08-06 Thread thomas
I have done that, even added five nodes that contribute a separate Gluster file 
system using dispersed (erasure codes, more efficient) mode.

But in another cluster with such a 3-node HCI base, I had a few (3 or 4) 
compute nodes that were actually dual-boot or just shut off when not used; I 
even used the GUI to do that properly.

This caused strange issues as I shut down all three compute-only nodes: Gluster 
reported loss of quorum, and essentially the entire HCI lost storage, even though 
these compute nodes didn't add bricks to the Gluster at all. In fact the 
compute nodes probably shouldn't have even participated in the Gluster, since 
they were only clients, but the Cockpit wizard added them anyway.

I believe this is because HCI is designed to support adding extra nodes in sets 
of three e.g. for a 9-node setup, which should be really nice with 7+2 disperse 
encoding.

I didn't dare reproduce the situation intentionally, but if you should come 
across this, perhaps you can document and report it. If (most of) the extra 
nodes are permanently running, you don't need to worry.

In terms of regaining control, you mostly have to make sure you turn the 
missing nodes back on; oVirt can be astonishingly resilient. If you then remove 
the nodes prior to shutdown, the quorum issue goes away.
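
A hedged sketch of the Gluster-side checks and of the "remove before shutdown" 
step, with hypothetical names:

gluster peer status                 # compute-only nodes show up here if the wizard added them
gluster volume status engine        # are all bricks and self-heal daemons up?
gluster volume info engine | grep -i quorum
# a node that holds no bricks can be taken out of the trusted pool before
# powering it down, so it no longer counts towards server quorum:
gluster peer detach compute4.example.com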


[ovirt-users] Re: oVirt Hyperconverged question

2020-08-04 Thread Edward Berger
Yes.  You can add compute-only nodes to a hyperconverged cluster to use the
same storage.


On Tue, Aug 4, 2020 at 7:02 AM Benedetto Vassallo <
benedetto.vassa...@unipa.it> wrote:

> Hi all,
> I am planning to build a 3 nodes hyperconverged system with oVirt, but I
> have a question.
> After having the system up and running with 3 nodes (compute and storage),
> if I need some extra compute power can I add some other "compute" (with no
> storage) nodes as glusterfs clients to enhance the total compute power of
> the cluster and using the actual storage?
> Thank you and Best Regards.
> --
> Benedetto Vassallo
> Responsabile U.O. Sviluppo e manutenzione dei sistemi
> Sistema Informativo di Ateneo
> Università degli studi di Palermo
>
> Phone: +3909123860056
> Fax: +3909123860880
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/23PVOVF3TP3ESEUROB466VGEKCULGMNI/
>


[ovirt-users] Re: Ovirt Hyperconverged Cluster

2019-07-10 Thread Sahina Bose
Did you manage to get past this error?

On Sat, Jun 29, 2019 at 3:32 AM Edward Berger  wrote:

> Maybe there is something already on the disk from before?
> gluster setup wants it completely blank, no detectable filesystem, no
> raid, etc.
> See what is there with fdisk -l, and see what PVs exist with pvs.
> Manually wipe, reboot and try again?
>
> On Fri, Jun 28, 2019 at 5:37 AM  wrote:
>
>> I have added the lvm global filter and  I have configured the
>> multipath.conf but gdploy stay blocked on the PV creation on sda.
>> I don't have any log in /var/log/ovirt-hosted-engine-setup
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/SHZOZFQXM6G6U3ZTMQ4EVR4YF3JI6XWK/
>>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/CR3DS2MIQWSI7XRKWFUABR44IFCVSOS3/
>


[ovirt-users] Re: Ovirt Hyperconverged Cluster

2019-07-01 Thread Stefanile Raffaele
I will try, thanks

Raffaele


[ovirt-users] Re: Ovirt Hyperconverged Cluster

2019-06-29 Thread Strahil
I think RAID is OK. At least my setup has a RAID 0 of 2 HDDs.
The error message contains the command that is being executed by the 
playbook.
Run it manually, and if it fails, run it again with more verbosity.
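
For the PV-creation task discussed in this thread that would be roughly (the
device and alignment are placeholders taken from typical gdeploy defaults):

pvcreate --dataalignment 256K /dev/sda
# and, if it fails, the same command with LVM's own verbosity turned up:
pvcreate -vvvv --dataalignment 256K /dev/sda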

Best Regards,
Strahil Nikolov

On Jun 29, 2019 01:00, Edward Berger  wrote:
>
> Maybe there is something already on the disk from before?
> gluster setup wants it completely blank, no detectable filesystem, no raid, 
> etc.
> See what is there with fdisk -l, and see what PVs exist with pvs.
> Manually wipe, reboot and try again?
>
> On Fri, Jun 28, 2019 at 5:37 AM  wrote:
>>
>> I have added the lvm global filter and  I have configured the multipath.conf 
>> but gdploy stay blocked on the PV creation on sda.
>> I don't have any log in /var/log/ovirt-hosted-engine-setup
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct: 
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives: 
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/SHZOZFQXM6G6U3ZTMQ4EVR4YF3JI6XWK/


[ovirt-users] Re: Ovirt Hyperconverged Cluster

2019-06-28 Thread Edward Berger
Maybe there is something already on the disk from before?
Gluster setup wants it completely blank: no detectable filesystem, no RAID,
etc.
See what is there with fdisk -l, and see what PVs exist with pvs.
Manually wipe, reboot and try again?
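
A hedged sketch of that inspect/wipe cycle (destructive; /dev/sda stands for the
disk meant for Gluster):

fdisk -l /dev/sda       # any leftover partition table?
lsblk -f /dev/sda       # filesystem / RAID / multipath signatures
pvs                     # does an old LVM PV still reference the disk?
wipefs -a /dev/sda      # remove all detectable signatures (destroys data!)
# then reboot and re-run the deployment wizard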

On Fri, Jun 28, 2019 at 5:37 AM  wrote:

> I have added the lvm global filter and  I have configured the
> multipath.conf but gdploy stay blocked on the PV creation on sda.
> I don't have any log in /var/log/ovirt-hosted-engine-setup
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/SHZOZFQXM6G6U3ZTMQ4EVR4YF3JI6XWK/
>


[ovirt-users] Re: Ovirt Hyperconverged Cluster

2019-06-28 Thread raffaele
I have added the LVM global filter and I have configured multipath.conf, 
but gdeploy stays blocked on the PV creation on sda.
I don't have any log in /var/log/ovirt-hosted-engine-setup.


[ovirt-users] Re: Ovirt Hyperconverged Cluster

2019-06-27 Thread Edward Berger
You might be hitting a multipath config issue.

On a 4.3.4 node-ng cluster I had a similar problem: the spare disk was on
/dev/sda (the boot disk was /dev/sdb).

I found this link
https://stackoverflow.com/questions/45889799/pvcreate-failing-to-create-pv-device-not-found-dev-sdxy-or-ignored-by-filteri

which led me to run lsblk /dev/sda and then add the displayed wwid to the
multipath.conf blacklist.
After restarting multipathd, I was able to use the cockpit wizard to
install Gluster on /dev/sda.
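
A hedged sketch of that change (the wwid is a placeholder; on oVirt node, vdsm
manages /etc/multipath.conf, and keeping a "# VDSM PRIVATE" line near the top
tells it not to regenerate the file):

lsblk /dev/sda          # note the wwid / mpath name shown for the disk
# then add the disk to the blacklist section of /etc/multipath.conf:
blacklist {
    wwid "3600508b1001c5aexample0000000001"
}
systemctl restart multipathd
multipath -ll           # the local disk should no longer appear as a multipath device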


On Thu, Jun 27, 2019 at 10:05 AM  wrote:

> I am installing ovirt 4.2 on three nodes. I am using centos with ovirt
> cockpit. My /sdb is my boot disk and /sda is my local storage,
>
> When I started the setup with cockpit I selected the sda but the ansible
> script stay on sdb. I have fix all changing the playbook manually but in
> the execution the gdeploy tool stay blocked on TASK [Create Physical
> Volume] .
>
> I don't find any logs. How I can fix that?
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/FL5E26QTE4GCLB6JFPNBVYC3XE6AWBWN/
>


[ovirt-users] Re: Ovirt Hyperconverged Cluster

2019-06-26 Thread Simone Tiraboschi
On Wed, Jun 26, 2019 at 10:58 AM Simone Tiraboschi 
wrote:

> You issue is here:
> 2019-06-20 11:25:53,200+02 WARN
>  [org.ovirt.engine.core.bll.network.host.HostSetupNetworksCommand]
> (EE-ManagedThreadFactory-engine-Thread-1) [2598128e] Validation of action
> 'HostSetupNetworks' failed for user admin@internal-authz. Reasons:
> VAR__ACTION__SETUP,VAR__TYPE__NETWORKS,INVALID_BOND_MODE_FOR_BOND_WITH_VM_NETWORK,$BondName
> bond0,$networkName ovirtmgmt
>
> Please use a valid bond mode for the bond selected on the management
> network (I'll try to understand why the setup tool didn't detected it
> before).
>

Ok, the issue is here:
2019-06-17 13:50:04,014+0200 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:204 DIALOG:SEND Please indicate a nic to
set ovirtmgmt bridge on: (team0, team0.13, team0.19) [team0.13]:
2019-06-17 13:50:14,978+0200 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:204 DIALOG:RECEIVEteam0.19

oVirt doesn't support teamed devices at all, just bonds; please see:
https://www.ovirt.org/documentation/admin-guide/chap-Logical_Networks.html#bonding-modes

Unfortunately the Ansible facts module, as per
https://github.com/ansible/ansible/issues/43129, also fails to discriminate a
teamed interface from a plain one, and so you were able to select it.
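
A hedged sketch of replacing the team with a bond via nmcli (interface names and
the VLAN id are placeholders; bond modes 0, 5 and 6 are the ones oVirt rejects
for VM networks such as ovirtmgmt):

nmcli connection add type bond con-name bond0 ifname bond0 \
    bond.options "mode=active-backup,miimon=100"      # mode 1; 802.3ad (mode 4) is also fine
nmcli connection add type ethernet con-name bond0-port1 ifname em1 master bond0
nmcli connection add type ethernet con-name bond0-port2 ifname em2 master bond0
nmcli connection add type vlan con-name bond0.13 ifname bond0.13 dev bond0 id 13
nmcli connection up bond0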


>
> On Wed, Jun 26, 2019 at 10:46 AM  wrote:
>
>> Yes, I am using the same two interface in bond configuration with one
>> vlan for the storage and annoter one for the mgmt.
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/56UBS4RALC2QQ6SVYSEKLSYV5NOFN5XO/
>>
>
>
> --
>
> Simone Tiraboschi
>
> He / Him / His
>
> Principal Software Engineer
>
> Red Hat 
>
> stira...@redhat.com
>


-- 

Simone Tiraboschi

He / Him / His

Principal Software Engineer

Red Hat 

stira...@redhat.com


[ovirt-users] Re: Ovirt Hyperconverged Cluster

2019-06-26 Thread Simone Tiraboschi
Your issue is here:
2019-06-20 11:25:53,200+02 WARN
 [org.ovirt.engine.core.bll.network.host.HostSetupNetworksCommand]
(EE-ManagedThreadFactory-engine-Thread-1) [2598128e] Validation of action
'HostSetupNetworks' failed for user admin@internal-authz. Reasons:
VAR__ACTION__SETUP,VAR__TYPE__NETWORKS,INVALID_BOND_MODE_FOR_BOND_WITH_VM_NETWORK,$BondName
bond0,$networkName ovirtmgmt

Please use a valid bond mode for the bond selected on the management
network (I'll try to understand why the setup tool didn't detect it
before).

On Wed, Jun 26, 2019 at 10:46 AM  wrote:

> Yes, I am using the same two interface in bond configuration with one vlan
> for the storage and annoter one for the mgmt.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/56UBS4RALC2QQ6SVYSEKLSYV5NOFN5XO/
>


-- 

Simone Tiraboschi

He / Him / His

Principal Software Engineer

Red Hat 

stira...@redhat.com


[ovirt-users] Re: Ovirt Hyperconverged Cluster

2019-06-26 Thread raffaele
Yes, I am using the same two interfaces in a bond configuration, with one VLAN for 
the storage and another one for the mgmt.


[ovirt-users] Re: Ovirt Hyperconverged Cluster

2019-06-20 Thread Slobodan Stevanovic
 I had issues with the installation too, but I ended up separating the interfaces. 
My current setup has one interface dedicated to storage and the other one is 
connected to a trunk port with a couple of VLANs.



On Thursday, June 20, 2019, 10:53:15 AM PDT, Raffaele Stefanile 
 wrote:  
 
 Yes, it is two interfaces in a bond configuration with 2 VLANs.
On Thu, Jun 20, 2019, 19:32 Slobodan Stevanovic  wrote:

 Are you using the same interface for gluster and the ovirt engine?

On Thursday, June 20, 2019, 08:53:57 AM PDT, raffa...@stefanile.com 
 wrote:  
 
 Many thanks for your help

https://drive.google.com/drive/folders/1HmKccGRAI0ggyuyZrQ2OA42G24-26bsx?usp=sharing


[ovirt-users] Re: Ovirt Hyperconverged Cluster

2019-06-20 Thread Slobodan Stevanovic
 Are you using the same interface for gluster and the ovirt engine?

On Thursday, June 20, 2019, 08:53:57 AM PDT, raffa...@stefanile.com 
 wrote:  
 
 Many thanks for your help

https://drive.google.com/drive/folders/1HmKccGRAI0ggyuyZrQ2OA42G24-26bsx?usp=sharing


[ovirt-users] Re: Ovirt Hyperconverged Cluster

2019-06-20 Thread raffaele
Many thanks for your help

https://drive.google.com/drive/folders/1HmKccGRAI0ggyuyZrQ2OA42G24-26bsx?usp=sharing


[ovirt-users] Re: Ovirt Hyperconverged Cluster

2019-06-19 Thread Yedidyah Bar David
On Wed, Jun 19, 2019 at 5:36 PM  wrote:
>
> Hi all,
>
> I have configured a 3 nodes cluster with an Hyperconverged gluster.
> I have used 2 network interfaces bonded, with one VLAN for the mgmt and
> another one for the storage.
>
> When I use cockpit to deploy the hosted engine I receive this error.
>
> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The host 
> has been set in non_operational status, please check engine logs, fix 
> accordingly and re-deploy.\n"}

Hi,

Please check/share relevant logs - from the host
/var/log/ovirt-hosted-engine-setup/* (and perhaps /var/log/vdsm/* and
/var/log/ovirt-hosted-engine-ha/*) and from the engine (if you can
access it) /var/log/ovirt-engine/engine.log .

Thanks and best regards,
-- 
Didi


[ovirt-users] Re: oVirt hyperconverged via cockpit error

2018-10-25 Thread Jarosław Prokopowski
OK, so I created the Gluster volumes manually. It turned out that
virtualization was disabled in the BIOS on the new servers. After enabling
virtualization I ran the standard wizard for creating the hosted engine and chose
a Gluster volume as its location. It looks like it is OK now. Thanks.
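
For the record, a hedged sketch of such a manual volume creation with the
options oVirt usually wants (brick paths are placeholders; run on one of the
storage nodes):

gluster volume create engine replica 3 \
    bq817storage.example.com:/gluster_bricks/engine/engine \
    bq735storage.example.com:/gluster_bricks/engine/engine \
    bq813storage.example.com:/gluster_bricks/engine/engine
gluster volume set engine group virt               # apply the virt tuning profile
gluster volume set engine storage.owner-uid 36     # vdsm
gluster volume set engine storage.owner-gid 36     # kvm
gluster volume start engine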

On Thu, Oct 25, 2018 at 3:37 PM Jarosław Prokopowski 
wrote:

> Yes, bq817 is the one with cockpit. Yes I can ssh locally by using full
> host name too.
> I checked with tcpdump and there isn't any ssh connection try during the
> deployment on any of the network interfaces. Strange...
>
> On Thu, Oct 25, 2018 at 3:15 PM Jayme  wrote:
>
>> Is the host in the error the same host you are running cockpit from?
>> Make sure you can ssh not to localhost by to the hostname from the host.
>> I.e. on bq817storage.example.com try ssh'ing to root@
>> bq817storage.example.com
>>
>> On Thu, Oct 25, 2018 at 9:06 AM Jarosław Prokopowski <
>> jprokopow...@gmail.com> wrote:
>>
>>> And because I sometimes ssh through the main (non-storage) network
>>> interface i have local .ssh/config file on the root account with:
>>> Host *
>>> StrictHostKeyChecking no
>>>
>>>
>>> On Thu, Oct 25, 2018 at 2:03 PM Jarosław Prokopowski <
>>> jprokopow...@gmail.com> wrote:
>>>
 Hi,

 Yes ssh keys have been distributed and root remote login works each way.
 After I got the error  I tested all connection manually and they work.
 On every host I can ssh to root@localhost and to other hosts without
 any problem.
 That's why the error is so strange to me. I event tested ansible from
 oVirt host to others and it works ok using ssh keys.


 W dniu czw., 25.10.2018 o 13:43 Jayme  napisał(a):

> You should also make sure the host can ssh to itself and accept keys
>
> On Thu, Oct 25, 2018, 8:42 AM Jayme,  wrote:
>
>> Darn autocorrect, sshd config rather
>>
>> On Thu, Oct 25, 2018, 7:29 AM Jarosław Prokopowski, <
>> jprokopow...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> Please help! :-) I couldn't find any solution via google.
>>>
>>> I followed this document to create oVirt hyperconverged on 3 hosts
>>> using cockpit wizard:
>>>
>>>
>>> https://ovirt.org/blog/2018/02/up-and-running-with-ovirt-4-2-and-gluster-storage/
>>>
>>> System: CentOS Linux release 7.5.1804
>>>
>>> All hosts can resolve each other names via DNS, ssh keys are
>>> exchanged and working.
>>> I added firewall rules based on oVirt installation guide. SSH is
>>> possible between all hosts using keys.
>>>
>>> I cannot create the configuration and the error I get in the last
>>> step is:
>>>
>>>
>>> --
>>> PLAY [gluster_servers]
>>> *
>>>
>>> TASK [Run a shell script]
>>> **
>>> failed: [bq817storage.example.com]
>>> (item=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h
>>> bq817storage.example.com, bq735storage.example.com,
>>> bq813storage.example.com) => {"item":
>>> "/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h
>>> bq817storage.example.com, bq735storage.example.com,
>>> bq813storage.example.com", "msg": "Failed to connect to the host
>>> via ssh: Permission denied
>>> (publickey,gssapi-keyex,gssapi-with-mic,password).\r\n", "unreachable":
>>> true}
>>> fatal: [bq817storage.example.com]: UNREACHABLE! => {"changed":
>>> false, "msg": "All items completed", "results": 
>>> [{"_ansible_ignore_errors":
>>> null, "_ansible_item_label":
>>> "/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h
>>> bq817storage.example.com, bq735storage.example.com,
>>> bq813storage.example.com", "_ansible_item_result": true, "item":
>>> "/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h
>>> bq817storage.example.com, bq735storage.example.com,
>>> bq813storage.example.com", "msg": "Failed to connect to the host
>>> via ssh: Permission denied
>>> (publickey,gssapi-keyex,gssapi-with-mic,password).\r\n", "unreachable":
>>> true}]}
>>> to retry, use: --limit @/tmp/tmpYLHDCP/run-script.retry
>>>
>>> PLAY RECAP
>>> *
>>> bq817storage.example.com : ok=0changed=0unreachable=1
>>> failed=0
>>>
>>>
>>> Firewall rules:
>>>
>>> oVirt engine host:
>>>
>>> #firewall-cmd --list-all
>>> public (active)
>>>   target: default
>>>   icmp-block-inversion: no
>>>   interfaces: enp134s0f0 enp134s0f1
>>>   sources:
>>>   services: ssh dhcpv6-client cockpit glusterfs http https dns
>>>   ports: /tcp 6100/tcp 7410/udp 54323/tcp 2223/tcp 161/udp
>>> 111/tcp 5900-69

[ovirt-users] Re: oVirt hyperconverged via cockpit error

2018-10-25 Thread Jarosław Prokopowski
Yes, bq817 is the one with cockpit. Yes, I can ssh locally using the full
host name too.
I checked with tcpdump and there isn't any ssh connection attempt during the
deployment on any of the network interfaces. Strange...

On Thu, Oct 25, 2018 at 3:15 PM Jayme  wrote:

> Is the host in the error the same host you are running cockpit from?  Make
> sure you can ssh not to localhost by to the hostname from the host.  I.e.
> on bq817storage.example.com try ssh'ing to r...@bq817storage.example.com
>
> On Thu, Oct 25, 2018 at 9:06 AM Jarosław Prokopowski <
> jprokopow...@gmail.com> wrote:
>
>> And because I sometimes ssh through the main (non-storage) network
>> interface i have local .ssh/config file on the root account with:
>> Host *
>> StrictHostKeyChecking no
>>
>>
>> On Thu, Oct 25, 2018 at 2:03 PM Jarosław Prokopowski <
>> jprokopow...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> Yes ssh keys have been distributed and root remote login works each way.
>>> After I got the error  I tested all connection manually and they work.
>>> On every host I can ssh to root@localhost and to other hosts without
>>> any problem.
>>> That's why the error is so strange to me. I event tested ansible from
>>> oVirt host to others and it works ok using ssh keys.
>>>
>>>
>>> W dniu czw., 25.10.2018 o 13:43 Jayme  napisał(a):
>>>
 You should also make sure the host can ssh to itself and accept keys

 On Thu, Oct 25, 2018, 8:42 AM Jayme,  wrote:

> Darn autocorrect, sshd config rather
>
> On Thu, Oct 25, 2018, 7:29 AM Jarosław Prokopowski, <
> jprokopow...@gmail.com> wrote:
>
>> Hi,
>>
>> Please help! :-) I couldn't find any solution via google.
>>
>> I followed this document to create oVirt hyperconverged on 3 hosts
>> using cockpit wizard:
>>
>>
>> https://ovirt.org/blog/2018/02/up-and-running-with-ovirt-4-2-and-gluster-storage/
>>
>> System: CentOS Linux release 7.5.1804
>>
>> All hosts can resolve each other names via DNS, ssh keys are
>> exchanged and working.
>> I added firewall rules based on oVirt installation guide. SSH is
>> possible between all hosts using keys.
>>
>> I cannot create the configuration and the error I get in the last
>> step is:
>>
>>
>> --
>> PLAY [gluster_servers]
>> *
>>
>> TASK [Run a shell script]
>> **
>> failed: [bq817storage.example.com]
>> (item=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h
>> bq817storage.example.com, bq735storage.example.com,
>> bq813storage.example.com) => {"item":
>> "/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h
>> bq817storage.example.com, bq735storage.example.com,
>> bq813storage.example.com", "msg": "Failed to connect to the host via
>> ssh: Permission denied
>> (publickey,gssapi-keyex,gssapi-with-mic,password).\r\n", "unreachable":
>> true}
>> fatal: [bq817storage.example.com]: UNREACHABLE! => {"changed":
>> false, "msg": "All items completed", "results": 
>> [{"_ansible_ignore_errors":
>> null, "_ansible_item_label":
>> "/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h
>> bq817storage.example.com, bq735storage.example.com,
>> bq813storage.example.com", "_ansible_item_result": true, "item":
>> "/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h
>> bq817storage.example.com, bq735storage.example.com,
>> bq813storage.example.com", "msg": "Failed to connect to the host via
>> ssh: Permission denied
>> (publickey,gssapi-keyex,gssapi-with-mic,password).\r\n", "unreachable":
>> true}]}
>> to retry, use: --limit @/tmp/tmpYLHDCP/run-script.retry
>>
>> PLAY RECAP
>> *
>> bq817storage.example.com : ok=0changed=0unreachable=1
>> failed=0
>>
>>
>> Firewall rules:
>>
>> oVirt engine host:
>>
>> #firewall-cmd --list-all
>> public (active)
>>   target: default
>>   icmp-block-inversion: no
>>   interfaces: enp134s0f0 enp134s0f1
>>   sources:
>>   services: ssh dhcpv6-client cockpit glusterfs http https dns
>>   ports: /tcp 6100/tcp 7410/udp 54323/tcp 2223/tcp 161/udp
>> 111/tcp 5900-6923/tcp 5989/tcp 9090/tcp 16514/tcp 49152-49216/tcp 
>> 54321/tcp
>> 54322/tcp 6081/udp
>>   protocols:
>>   masquerade: no
>>   forward-ports:
>>   source-ports:
>>   icmp-blocks:
>>   rich rules:
>>
>> oVirt nodes:
>>
>> #firewall-cmd --list-all
>> public (active)
>>   target: default
>>   icmp-block-inversion: no
>>   interfaces: enp134s0f0 enp134s0f1
>>   sources:
>>   services: ssh dhcpv6-cl

[ovirt-users] Re: oVirt hyperconverged via cockpit error

2018-10-25 Thread Jayme
Is the host in the error the same host you are running cockpit from?  Make
sure you can ssh not just to localhost but to the hostname from the host.  I.e.
on bq817storage.example.com try ssh'ing to r...@bq817storage.example.com
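
A hedged sketch of that check, using the hostnames from this thread (run as root
on the cockpit host):

for h in bq817storage.example.com bq735storage.example.com bq813storage.example.com; do
    ssh-copy-id root@$h                     # including the host itself
    ssh -o BatchMode=yes root@$h hostname   # must print the name without any prompt
done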

On Thu, Oct 25, 2018 at 9:06 AM Jarosław Prokopowski 
wrote:

> And because I sometimes ssh through the main (non-storage) network
> interface i have local .ssh/config file on the root account with:
> Host *
> StrictHostKeyChecking no
>
>
> On Thu, Oct 25, 2018 at 2:03 PM Jarosław Prokopowski <
> jprokopow...@gmail.com> wrote:
>
>> Hi,
>>
>> Yes ssh keys have been distributed and root remote login works each way.
>> After I got the error  I tested all connection manually and they work.
>> On every host I can ssh to root@localhost and to other hosts without any
>> problem.
>> That's why the error is so strange to me. I event tested ansible from
>> oVirt host to others and it works ok using ssh keys.
>>
>>
>> W dniu czw., 25.10.2018 o 13:43 Jayme  napisał(a):
>>
>>> You should also make sure the host can ssh to itself and accept keys
>>>
>>> On Thu, Oct 25, 2018, 8:42 AM Jayme,  wrote:
>>>
 Darn autocorrect, sshd config rather

 On Thu, Oct 25, 2018, 7:29 AM Jarosław Prokopowski, <
 jprokopow...@gmail.com> wrote:

> Hi,
>
> Please help! :-) I couldn't find any solution via google.
>
> I followed this document to create oVirt hyperconverged on 3 hosts
> using cockpit wizard:
>
>
> https://ovirt.org/blog/2018/02/up-and-running-with-ovirt-4-2-and-gluster-storage/
>
> System: CentOS Linux release 7.5.1804
>
> All hosts can resolve each other names via DNS, ssh keys are exchanged
> and working.
> I added firewall rules based on oVirt installation guide. SSH is
> possible between all hosts using keys.
>
> I cannot create the configuration and the error I get in the last step
> is:
>
>
> --
> PLAY [gluster_servers]
> *
>
> TASK [Run a shell script]
> **
> failed: [bq817storage.example.com]
> (item=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h
> bq817storage.example.com, bq735storage.example.com,
> bq813storage.example.com) => {"item":
> "/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h
> bq817storage.example.com, bq735storage.example.com,
> bq813storage.example.com", "msg": "Failed to connect to the host via
> ssh: Permission denied
> (publickey,gssapi-keyex,gssapi-with-mic,password).\r\n", "unreachable":
> true}
> fatal: [bq817storage.example.com]: UNREACHABLE! => {"changed": false,
> "msg": "All items completed", "results": [{"_ansible_ignore_errors": null,
> "_ansible_item_label": "/usr/share/gdeploy/scripts/grafton-sanity-check.sh
> -d sdb -h bq817storage.example.com, bq735storage.example.com,
> bq813storage.example.com", "_ansible_item_result": true, "item":
> "/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h
> bq817storage.example.com, bq735storage.example.com,
> bq813storage.example.com", "msg": "Failed to connect to the host via
> ssh: Permission denied
> (publickey,gssapi-keyex,gssapi-with-mic,password).\r\n", "unreachable":
> true}]}
> to retry, use: --limit @/tmp/tmpYLHDCP/run-script.retry
>
> PLAY RECAP
> *
> bq817storage.example.com : ok=0changed=0unreachable=1
> failed=0
>
>
> Firewall rules:
>
> oVirt engine host:
>
> #firewall-cmd --list-all
> public (active)
>   target: default
>   icmp-block-inversion: no
>   interfaces: enp134s0f0 enp134s0f1
>   sources:
>   services: ssh dhcpv6-client cockpit glusterfs http https dns
>   ports: /tcp 6100/tcp 7410/udp 54323/tcp 2223/tcp 161/udp 111/tcp
> 5900-6923/tcp 5989/tcp 9090/tcp 16514/tcp 49152-49216/tcp 54321/tcp
> 54322/tcp 6081/udp
>   protocols:
>   masquerade: no
>   forward-ports:
>   source-ports:
>   icmp-blocks:
>   rich rules:
>
> oVirt nodes:
>
> #firewall-cmd --list-all
> public (active)
>   target: default
>   icmp-block-inversion: no
>   interfaces: enp134s0f0 enp134s0f1
>   sources:
>   services: ssh dhcpv6-client cockpit glusterfs dns
>   ports: 2223/tcp 161/udp 111/tcp 5900-6923/tcp 5989/tcp 9090/tcp
> 16514/tcp 49152-49216/tcp 54321/tcp 54322/tcp 6081/udp
>   protocols:
>   masquerade: no
>   forward-ports:
>   source-ports:
>   icmp-blocks:
>
> -
>
> Thanks in advance
> Jarson
> _

[ovirt-users] Re: oVirt hyperconverged via cockpit error

2018-10-25 Thread Jarosław Prokopowski
And because I sometimes ssh through the main (non-storage) network
interface, I have a local .ssh/config file on the root account with:
Host *
StrictHostKeyChecking no


On Thu, Oct 25, 2018 at 2:03 PM Jarosław Prokopowski 
wrote:

> Hi,
>
> Yes ssh keys have been distributed and root remote login works each way.
> After I got the error  I tested all connection manually and they work.
> On every host I can ssh to root@localhost and to other hosts without any
> problem.
> That's why the error is so strange to me. I event tested ansible from
> oVirt host to others and it works ok using ssh keys.
>
>
> W dniu czw., 25.10.2018 o 13:43 Jayme  napisał(a):
>
>> You should also make sure the host can ssh to itself and accept keys
>>
>> On Thu, Oct 25, 2018, 8:42 AM Jayme,  wrote:
>>
>>> Darn autocorrect, sshd config rather
>>>
>>> On Thu, Oct 25, 2018, 7:29 AM Jarosław Prokopowski, <
>>> jprokopow...@gmail.com> wrote:
>>>
 Hi,

 Please help! :-) I couldn't find any solution via google.

 I followed this document to create oVirt hyperconverged on 3 hosts
 using cockpit wizard:


 https://ovirt.org/blog/2018/02/up-and-running-with-ovirt-4-2-and-gluster-storage/

 System: CentOS Linux release 7.5.1804

 All hosts can resolve each other names via DNS, ssh keys are exchanged
 and working.
 I added firewall rules based on oVirt installation guide. SSH is
 possible between all hosts using keys.

 I cannot create the configuration and the error I get in the last step
 is:


 --
 PLAY [gluster_servers]
 *

 TASK [Run a shell script]
 **
 failed: [bq817storage.example.com]
 (item=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h
 bq817storage.example.com, bq735storage.example.com,
 bq813storage.example.com) => {"item":
 "/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h
 bq817storage.example.com, bq735storage.example.com,
 bq813storage.example.com", "msg": "Failed to connect to the host via
 ssh: Permission denied
 (publickey,gssapi-keyex,gssapi-with-mic,password).\r\n", "unreachable":
 true}
 fatal: [bq817storage.example.com]: UNREACHABLE! => {"changed": false,
 "msg": "All items completed", "results": [{"_ansible_ignore_errors": null,
 "_ansible_item_label": "/usr/share/gdeploy/scripts/grafton-sanity-check.sh
 -d sdb -h bq817storage.example.com, bq735storage.example.com,
 bq813storage.example.com", "_ansible_item_result": true, "item":
 "/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h
 bq817storage.example.com, bq735storage.example.com,
 bq813storage.example.com", "msg": "Failed to connect to the host via
 ssh: Permission denied
 (publickey,gssapi-keyex,gssapi-with-mic,password).\r\n", "unreachable":
 true}]}
 to retry, use: --limit @/tmp/tmpYLHDCP/run-script.retry

 PLAY RECAP
 *
 bq817storage.example.com : ok=0changed=0unreachable=1
 failed=0


 Firewall rules:

 oVirt engine host:

 #firewall-cmd --list-all
 public (active)
   target: default
   icmp-block-inversion: no
   interfaces: enp134s0f0 enp134s0f1
   sources:
   services: ssh dhcpv6-client cockpit glusterfs http https dns
   ports: /tcp 6100/tcp 7410/udp 54323/tcp 2223/tcp 161/udp 111/tcp
 5900-6923/tcp 5989/tcp 9090/tcp 16514/tcp 49152-49216/tcp 54321/tcp
 54322/tcp 6081/udp
   protocols:
   masquerade: no
   forward-ports:
   source-ports:
   icmp-blocks:
   rich rules:

 oVirt nodes:

 #firewall-cmd --list-all
 public (active)
   target: default
   icmp-block-inversion: no
   interfaces: enp134s0f0 enp134s0f1
   sources:
   services: ssh dhcpv6-client cockpit glusterfs dns
   ports: 2223/tcp 161/udp 111/tcp 5900-6923/tcp 5989/tcp 9090/tcp
 16514/tcp 49152-49216/tcp 54321/tcp 54322/tcp 6081/udp
   protocols:
   masquerade: no
   forward-ports:
   source-ports:
   icmp-blocks:

 -

 Thanks in advance
 Jarson
 ___
 Users mailing list -- users@ovirt.org
 To unsubscribe send an email to users-le...@ovirt.org
 Privacy Statement: https://www.ovirt.org/site/privacy-policy/
 oVirt Code of Conduct:
 https://www.ovirt.org/community/about/community-guidelines/
 List Archives:
 https://lists.ovirt.org/archives/list/users@ovirt.org/message/4KKTG4VVPG7WKRNBDJV6JWGOKPBMM2LB/

>>>
__

[ovirt-users] Re: oVirt hyperconverged via cockpit error

2018-10-25 Thread Jarosław Prokopowski
Hi,

Yes, ssh keys have been distributed and root remote login works each way.
After I got the error I tested all connections manually and they work.
On every host I can ssh to root@localhost and to the other hosts without any
problem.
That's why the error is so strange to me. I even tested ansible from the oVirt
host to the others and it works OK using ssh keys.


On Thu, 25 Oct 2018 at 13:43, Jayme  wrote:

> You should also make sure the host can ssh to itself and accept keys
>
> On Thu, Oct 25, 2018, 8:42 AM Jayme,  wrote:
>
>> Darn autocorrect, sshd config rather
>>
>> On Thu, Oct 25, 2018, 7:29 AM Jarosław Prokopowski, <
>> jprokopow...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> Please help! :-) I couldn't find any solution via google.
>>>
>>> I followed this document to create oVirt hyperconverged on 3 hosts using
>>> cockpit wizard:
>>>
>>>
>>> https://ovirt.org/blog/2018/02/up-and-running-with-ovirt-4-2-and-gluster-storage/
>>>
>>> System: CentOS Linux release 7.5.1804
>>>
>>> All hosts can resolve each other names via DNS, ssh keys are exchanged
>>> and working.
>>> I added firewall rules based on oVirt installation guide. SSH is
>>> possible between all hosts using keys.
>>>
>>> I cannot create the configuration and the error I get in the last step
>>> is:
>>>
>>>
>>> --
>>> PLAY [gluster_servers]
>>> *
>>>
>>> TASK [Run a shell script]
>>> **
>>> failed: [bq817storage.example.com]
>>> (item=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h
>>> bq817storage.example.com, bq735storage.example.com,
>>> bq813storage.example.com) => {"item":
>>> "/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h
>>> bq817storage.example.com, bq735storage.example.com,
>>> bq813storage.example.com", "msg": "Failed to connect to the host via
>>> ssh: Permission denied
>>> (publickey,gssapi-keyex,gssapi-with-mic,password).\r\n", "unreachable":
>>> true}
>>> fatal: [bq817storage.example.com]: UNREACHABLE! => {"changed": false,
>>> "msg": "All items completed", "results": [{"_ansible_ignore_errors": null,
>>> "_ansible_item_label": "/usr/share/gdeploy/scripts/grafton-sanity-check.sh
>>> -d sdb -h bq817storage.example.com, bq735storage.example.com,
>>> bq813storage.example.com", "_ansible_item_result": true, "item":
>>> "/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h
>>> bq817storage.example.com, bq735storage.example.com,
>>> bq813storage.example.com", "msg": "Failed to connect to the host via
>>> ssh: Permission denied
>>> (publickey,gssapi-keyex,gssapi-with-mic,password).\r\n", "unreachable":
>>> true}]}
>>> to retry, use: --limit @/tmp/tmpYLHDCP/run-script.retry
>>>
>>> PLAY RECAP
>>> *
>>> bq817storage.example.com : ok=0    changed=0    unreachable=1    failed=0
>>>
>>>
>>> Firewall rules:
>>>
>>> oVirt engine host:
>>>
>>> #firewall-cmd --list-all
>>> public (active)
>>>   target: default
>>>   icmp-block-inversion: no
>>>   interfaces: enp134s0f0 enp134s0f1
>>>   sources:
>>>   services: ssh dhcpv6-client cockpit glusterfs http https dns
>>>   ports: /tcp 6100/tcp 7410/udp 54323/tcp 2223/tcp 161/udp 111/tcp
>>> 5900-6923/tcp 5989/tcp 9090/tcp 16514/tcp 49152-49216/tcp 54321/tcp
>>> 54322/tcp 6081/udp
>>>   protocols:
>>>   masquerade: no
>>>   forward-ports:
>>>   source-ports:
>>>   icmp-blocks:
>>>   rich rules:
>>>
>>> oVirt nodes:
>>>
>>> #firewall-cmd --list-all
>>> public (active)
>>>   target: default
>>>   icmp-block-inversion: no
>>>   interfaces: enp134s0f0 enp134s0f1
>>>   sources:
>>>   services: ssh dhcpv6-client cockpit glusterfs dns
>>>   ports: 2223/tcp 161/udp 111/tcp 5900-6923/tcp 5989/tcp 9090/tcp
>>> 16514/tcp 49152-49216/tcp 54321/tcp 54322/tcp 6081/udp
>>>   protocols:
>>>   masquerade: no
>>>   forward-ports:
>>>   source-ports:
>>>   icmp-blocks:
>>>
>>> -
>>>
>>> Thanks in advance
>>> Jarson
>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives:
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/4KKTG4VVPG7WKRNBDJV6JWGOKPBMM2LB/
>>>
>>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7HGCNUVR6DV4NKMVNATW5VBJFXP2UJCV/


[ovirt-users] Re: oVirt hyperconverged via cockpit error

2018-10-25 Thread Jayme
You should also make sure the host can ssh to itself and accept keys
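
For instance (assuming root and the default key locations), a quick test plus one way to
pre-accept the host keys could look like:

  # ssh root@localhost true
  # ssh root@$(hostname -f) true
  # ssh-keyscan bq817storage.example.com bq735storage.example.com bq813storage.example.com >> /root/.ssh/known_hosts

The ssh-keyscan step is just one way to avoid the interactive "are you sure you want to
continue connecting" prompt that would otherwise stop a non-interactive ansible run.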

On Thu, Oct 25, 2018, 8:42 AM Jayme wrote:

> Darn autocorrect, sshd config rather
>
> On Thu, Oct 25, 2018, 7:29 AM Jarosław Prokopowski, <
> jprokopow...@gmail.com> wrote:
>
>> Hi,
>>
>> Please help! :-) I couldn't find any solution via google.
>>
>> I followed this document to create oVirt hyperconverged on 3 hosts using
>> cockpit wizard:
>>
>>
>> https://ovirt.org/blog/2018/02/up-and-running-with-ovirt-4-2-and-gluster-storage/
>>
>> System: CentOS Linux release 7.5.1804
>>
>> All hosts can resolve each other names via DNS, ssh keys are exchanged
>> and working.
>> I added firewall rules based on oVirt installation guide. SSH is possible
>> between all hosts using keys.
>>
>> I cannot create the configuration and the error I get in the last step is:
>>
>>
>> --
>> PLAY [gluster_servers]
>> *
>>
>> TASK [Run a shell script]
>> **
>> failed: [bq817storage.example.com]
>> (item=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h
>> bq817storage.example.com, bq735storage.example.com,
>> bq813storage.example.com) => {"item":
>> "/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h
>> bq817storage.example.com, bq735storage.example.com,
>> bq813storage.example.com", "msg": "Failed to connect to the host via
>> ssh: Permission denied
>> (publickey,gssapi-keyex,gssapi-with-mic,password).\r\n", "unreachable":
>> true}
>> fatal: [bq817storage.example.com]: UNREACHABLE! => {"changed": false,
>> "msg": "All items completed", "results": [{"_ansible_ignore_errors": null,
>> "_ansible_item_label": "/usr/share/gdeploy/scripts/grafton-sanity-check.sh
>> -d sdb -h bq817storage.example.com, bq735storage.example.com,
>> bq813storage.example.com", "_ansible_item_result": true, "item":
>> "/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h
>> bq817storage.example.com, bq735storage.example.com,
>> bq813storage.example.com", "msg": "Failed to connect to the host via
>> ssh: Permission denied
>> (publickey,gssapi-keyex,gssapi-with-mic,password).\r\n", "unreachable":
>> true}]}
>> to retry, use: --limit @/tmp/tmpYLHDCP/run-script.retry
>>
>> PLAY RECAP
>> *
>> bq817storage.example.com : ok=0    changed=0    unreachable=1    failed=0
>>
>>
>> Firewall rules:
>>
>> oVirt engine host:
>>
>> #firewall-cmd --list-all
>> public (active)
>>   target: default
>>   icmp-block-inversion: no
>>   interfaces: enp134s0f0 enp134s0f1
>>   sources:
>>   services: ssh dhcpv6-client cockpit glusterfs http https dns
>>   ports: /tcp 6100/tcp 7410/udp 54323/tcp 2223/tcp 161/udp 111/tcp
>> 5900-6923/tcp 5989/tcp 9090/tcp 16514/tcp 49152-49216/tcp 54321/tcp
>> 54322/tcp 6081/udp
>>   protocols:
>>   masquerade: no
>>   forward-ports:
>>   source-ports:
>>   icmp-blocks:
>>   rich rules:
>>
>> oVirt nodes:
>>
>> #firewall-cmd --list-all
>> public (active)
>>   target: default
>>   icmp-block-inversion: no
>>   interfaces: enp134s0f0 enp134s0f1
>>   sources:
>>   services: ssh dhcpv6-client cockpit glusterfs dns
>>   ports: 2223/tcp 161/udp 111/tcp 5900-6923/tcp 5989/tcp 9090/tcp
>> 16514/tcp 49152-49216/tcp 54321/tcp 54322/tcp 6081/udp
>>   protocols:
>>   masquerade: no
>>   forward-ports:
>>   source-ports:
>>   icmp-blocks:
>>
>> -
>>
>> Thanks in advance
>> Jarson
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/4KKTG4VVPG7WKRNBDJV6JWGOKPBMM2LB/
>>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/G3MBCQCVQF2G4VTQLUIYQPDNKYRKVMAC/


[ovirt-users] Re: oVirt hyperconverged via cockpit error

2018-10-25 Thread Jayme
Darn autocorrect, sshd config rather

On Thu, Oct 25, 2018, 7:29 AM Jarosław Prokopowski wrote:

> Hi,
>
> Please help! :-) I couldn't find any solution via google.
>
> I followed this document to create oVirt hyperconverged on 3 hosts using
> cockpit wizard:
>
>
> https://ovirt.org/blog/2018/02/up-and-running-with-ovirt-4-2-and-gluster-storage/
>
> System: CentOS Linux release 7.5.1804
>
> All hosts can resolve each other names via DNS, ssh keys are exchanged and
> working.
> I added firewall rules based on oVirt installation guide. SSH is possible
> between all hosts using keys.
>
> I cannot create the configuration and the error I get in the last step is:
>
>
> --
> PLAY [gluster_servers]
> *
>
> TASK [Run a shell script]
> **
> failed: [bq817storage.example.com]
> (item=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h
> bq817storage.example.com, bq735storage.example.com,
> bq813storage.example.com) => {"item":
> "/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h
> bq817storage.example.com, bq735storage.example.com,
> bq813storage.example.com", "msg": "Failed to connect to the host via ssh:
> Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).\r\n",
> "unreachable": true}
> fatal: [bq817storage.example.com]: UNREACHABLE! => {"changed": false,
> "msg": "All items completed", "results": [{"_ansible_ignore_errors": null,
> "_ansible_item_label": "/usr/share/gdeploy/scripts/grafton-sanity-check.sh
> -d sdb -h bq817storage.example.com, bq735storage.example.com,
> bq813storage.example.com", "_ansible_item_result": true, "item":
> "/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h
> bq817storage.example.com, bq735storage.example.com,
> bq813storage.example.com", "msg": "Failed to connect to the host via ssh:
> Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).\r\n",
> "unreachable": true}]}
> to retry, use: --limit @/tmp/tmpYLHDCP/run-script.retry
>
> PLAY RECAP
> *
> bq817storage.example.com : ok=0    changed=0    unreachable=1    failed=0
>
>
> Firewall rules:
>
> oVirt engine host:
>
> #firewall-cmd --list-all
> public (active)
>   target: default
>   icmp-block-inversion: no
>   interfaces: enp134s0f0 enp134s0f1
>   sources:
>   services: ssh dhcpv6-client cockpit glusterfs http https dns
>   ports: /tcp 6100/tcp 7410/udp 54323/tcp 2223/tcp 161/udp 111/tcp
> 5900-6923/tcp 5989/tcp 9090/tcp 16514/tcp 49152-49216/tcp 54321/tcp
> 54322/tcp 6081/udp
>   protocols:
>   masquerade: no
>   forward-ports:
>   source-ports:
>   icmp-blocks:
>   rich rules:
>
> oVirt nodes:
>
> #firewall-cmd --list-all
> public (active)
>   target: default
>   icmp-block-inversion: no
>   interfaces: enp134s0f0 enp134s0f1
>   sources:
>   services: ssh dhcpv6-client cockpit glusterfs dns
>   ports: 2223/tcp 161/udp 111/tcp 5900-6923/tcp 5989/tcp 9090/tcp
> 16514/tcp 49152-49216/tcp 54321/tcp 54322/tcp 6081/udp
>   protocols:
>   masquerade: no
>   forward-ports:
>   source-ports:
>   icmp-blocks:
>
> -
>
> Thanks in advance
> Jarson
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/4KKTG4VVPG7WKRNBDJV6JWGOKPBMM2LB/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MNTMFPEXLGYJKLYOVGGN5Z45K64WJGMN/


[ovirt-users] Re: oVirt hyperconverged via cockpit error

2018-10-25 Thread Jayme
It looks to me like a fairly obvious ssh problem. Are the ssh keys set up
for the root user and permitrootlogin yes in Asheville config?
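
If it helps, checking (and, if needed, enabling) that on each host would look roughly
like this; the sed line is only a sketch, adjust it to however you manage sshd_config:

  # grep -Ei '^\s*(PermitRootLogin|PubkeyAuthentication)' /etc/ssh/sshd_config
  # sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
  # systemctl reload sshd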

On Thu, Oct 25, 2018, 7:29 AM Jarosław Prokopowski wrote:

> Hi,
>
> Please help! :-) I couldn't find any solution via google.
>
> I followed this document to create oVirt hyperconverged on 3 hosts using
> cockpit wizard:
>
>
> https://ovirt.org/blog/2018/02/up-and-running-with-ovirt-4-2-and-gluster-storage/
>
> System: CentOS Linux release 7.5.1804
>
> All hosts can resolve each other names via DNS, ssh keys are exchanged and
> working.
> I added firewall rules based on oVirt installation guide. SSH is possible
> between all hosts using keys.
>
> I cannot create the configuration and the error I get in the last step is:
>
>
> --
> PLAY [gluster_servers]
> *
>
> TASK [Run a shell script]
> **
> failed: [bq817storage.example.com]
> (item=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h
> bq817storage.example.com, bq735storage.example.com,
> bq813storage.example.com) => {"item":
> "/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h
> bq817storage.example.com, bq735storage.example.com,
> bq813storage.example.com", "msg": "Failed to connect to the host via ssh:
> Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).\r\n",
> "unreachable": true}
> fatal: [bq817storage.example.com]: UNREACHABLE! => {"changed": false,
> "msg": "All items completed", "results": [{"_ansible_ignore_errors": null,
> "_ansible_item_label": "/usr/share/gdeploy/scripts/grafton-sanity-check.sh
> -d sdb -h bq817storage.example.com, bq735storage.example.com,
> bq813storage.example.com", "_ansible_item_result": true, "item":
> "/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h
> bq817storage.example.com, bq735storage.example.com,
> bq813storage.example.com", "msg": "Failed to connect to the host via ssh:
> Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).\r\n",
> "unreachable": true}]}
> to retry, use: --limit @/tmp/tmpYLHDCP/run-script.retry
>
> PLAY RECAP
> *
> bq817storage.example.com : ok=0    changed=0    unreachable=1    failed=0
>
>
> Firewall rules:
>
> oVirt engine host:
>
> #firewall-cmd --list-all
> public (active)
>   target: default
>   icmp-block-inversion: no
>   interfaces: enp134s0f0 enp134s0f1
>   sources:
>   services: ssh dhcpv6-client cockpit glusterfs http https dns
>   ports: /tcp 6100/tcp 7410/udp 54323/tcp 2223/tcp 161/udp 111/tcp
> 5900-6923/tcp 5989/tcp 9090/tcp 16514/tcp 49152-49216/tcp 54321/tcp
> 54322/tcp 6081/udp
>   protocols:
>   masquerade: no
>   forward-ports:
>   source-ports:
>   icmp-blocks:
>   rich rules:
>
> oVirt nodes:
>
> #firewall-cmd --list-all
> public (active)
>   target: default
>   icmp-block-inversion: no
>   interfaces: enp134s0f0 enp134s0f1
>   sources:
>   services: ssh dhcpv6-client cockpit glusterfs dns
>   ports: 2223/tcp 161/udp 111/tcp 5900-6923/tcp 5989/tcp 9090/tcp
> 16514/tcp 49152-49216/tcp 54321/tcp 54322/tcp 6081/udp
>   protocols:
>   masquerade: no
>   forward-ports:
>   source-ports:
>   icmp-blocks:
>
> -
>
> Thanks in advance
> Jarson
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/4KKTG4VVPG7WKRNBDJV6JWGOKPBMM2LB/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/D2TVAH3ZW4NRLBCO73CVBUUF7FNBLMGF/