[ovirt-users] Re: Ovirt 4.4.3 Hyper-converged Deployment with GlusterFS

2020-11-23 Thread Parth Dhanjal
Hey! Are you running on CentOS? Either uncheck the "Blacklist gluster devices" option on the bricks page and try again, or add a filter to /etc/lvm/lvm.conf, something like this - "a|^/dev/sda2$|". On Mon, Nov 23, 2020 at 6:17 PM wrote: > Trying to deploy a 3 Node
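A minimal sketch of what that filter looks like in /etc/lvm/lvm.conf, assuming /dev/sda2 is the device to accept; merge it with any existing filter entries, since devices the host still needs (such as the root VG) must also be accepted:

    # inside the devices { } section of /etc/lvm/lvm.conf
    filter = [ "a|^/dev/sda2$|", "r|.*|" ]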

[ovirt-users] Re: ovirt glusterfs

2020-11-02 Thread Parth Dhanjal
Hey! In case you are deploying on any server that is not RHVH-based, the devices are not automatically blacklisted. Or it could be because the disk was previously partitioned. You can try these solutions and see if they help - If the filter is correct (/etc/lvm/lvm.conf) and old partition table
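If stale partitioning or old signatures turn out to be the cause, one way to clear the disk (destructive; /dev/sdb here is a hypothetical gluster disk that holds no needed data):

    wipefs -a /dev/sdb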

[ovirt-users] Re: Install test lab single host HCI with plain CentOS as OS

2020-10-30 Thread Parth Dhanjal
in the hosts file, so that ansible can execute roles on the server. You can refer to this doc as well - https://ovirt.org/documentation/gluster-hyperconverged/chap-Deploying_Hyperconverged.html On Fri, Oct 30, 2020 at 4:20 PM Gianluca Cecchi wrote: > On Fri, Oct 30, 2020 at 11:43 AM Parth Dhanjal wr
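A hypothetical example of such a hosts-file entry, assuming the node's FQDN is host1.example.com at 192.0.2.10 (adjust to your environment):

    # /etc/hosts
    192.0.2.10   host1.example.com host1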

[ovirt-users] Re: Install test lab single host HCI with plain CentOS as OS

2020-10-30 Thread Parth Dhanjal
Hey! It seems vdsm packages are missing. Can you try installing the vdsm-gluster and ovirt-engine-appliance packages? In case you face repo issues, first run yum install https://resources.ovirt.org/pub/yum-repo/ovirt-release-master.rpm Then try again. Thanks! On Fri, Oct 30, 2020 at 4:00 PM
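The same steps spelled out as commands (package names and repo RPM as mentioned above):

    yum install https://resources.ovirt.org/pub/yum-repo/ovirt-release-master.rpm
    yum install vdsm-gluster ovirt-engine-appliance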

[ovirt-users] Re: Add Node to a single node installation with self hosted engine.

2020-10-27 Thread Parth Dhanjal
Hello Marcel, For a note, you can't expand your single gluster node cluster to 3 nodes. You can only add compute nodes. If you want to add compute nodes then you do not need any glusterfs packages to be installed. Only ov

[ovirt-users] Re: Add Node to a single node installation with self hosted engine.

2020-10-26 Thread Parth Dhanjal
Hey Marcel! You have to install the required glusterfs packages and then deploy the gluster setup on the 2 new hosts. After creating the required LVs, VGs, thinpools, mount points and bricks, you'll have to expand the gluster-cluster from the current host using add-brick functionality from
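A rough sketch of that expansion for a volume named engine, with hypothetical new hosts host2 and host3 and brick paths following a typical HCI layout:

    gluster peer probe host2
    gluster peer probe host3
    gluster volume add-brick engine replica 3 \
        host2:/gluster_bricks/engine/engine \
        host3:/gluster_bricks/engine/engine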

[ovirt-users] Re: Disconnected -- Server has closed the connection

2020-10-08 Thread Parth Dhanjal
Hey! Can you check the cockpit service from the host: systemctl status cockpit. In case it is not started: systemctl start cockpit. This issue could also be due to a missing firewall exception. You can try this - firewall-cmd --permanent --add-port=9090/tcp, firewall-cmd --permanent --add-port=9090/udp, systemctl
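Putting those commands together (a sketch; the final systemctl command in the original message is cut off, so the reload and socket restart at the end are assumptions):

    systemctl status cockpit
    systemctl start cockpit
    firewall-cmd --permanent --add-port=9090/tcp
    firewall-cmd --permanent --add-port=9090/udp
    firewall-cmd --reload
    systemctl restart cockpit.socket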

[ovirt-users] Re: Gluster Name too long

2020-09-23 Thread Parth Dhanjal
Hey! This is a known bug targeted for oVirt 4.4.3. Firstly, multipath should ideally be used when you are not using an RHVH system. Then disabling the "blacklist gluster device" option will ensure that the ansible inventory file doesn't blacklist your device. In case you have a multipath and you
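If the device does have to stay out of multipath, a sketch of a manual blacklist entry (the file name and wwid are hypothetical; take the real wwid from multipath -ll):

    # /etc/multipath/conf.d/gluster-blacklist.conf
    blacklist {
        wwid "3600508b1001c7a1234567890abcdef12"
    }
    # then reload the configuration:
    systemctl reload multipathd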

[ovirt-users] Re: oVirt - vdo: ERROR - Device /dev/sd excluded by a filter

2020-09-21 Thread Parth Dhanjal
Hey! Can you try editing the LVM cache filter and including the sdc multipath device in the filter? I see that it is missing, and hence the error that sdc is excluded. Add "a|^/dev/sdc$|" to the LVM filter and try again. Thanks On Mon, Sep 21, 2020 at 11:34 AM Jeremey Wise wrote: > > > > [image:

[ovirt-users] Re: Actual oVirt install questions

2020-08-18 Thread Parth Dhanjal
Hey! Have you added the hostname to the known_hosts file or set up passwordless ssh for the single node? Ansible requires passwordless ssh to ensure root access to automate the installation process. If yes, then this issue is probably fixed in the next build of cockpit-ovirt. Can you try to
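A sketch of setting up passwordless ssh to the node itself, using a hypothetical FQDN host1.example.com:

    ssh-keygen -t rsa -b 4096           # only if no key exists yet
    ssh-copy-id root@host1.example.com
    ssh root@host1.example.com true     # should not prompt for a password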

[ovirt-users] Unable to connect to cockpit

2020-05-11 Thread Parth Dhanjal
Hey! I have a remote machine on which I have installed RHVH 4.4. I'm unable to access the cockpit plugin. journalctl -u cockpit returns this error: cockpit-tls: gnutls_handshake failed: A TLS fatal alert has been received. A screenshot taken while trying to reach cockpit through the browser is attached. Is

[ovirt-users] Re: upgrade issue

2020-01-13 Thread Parth Dhanjal
/yum-repo/ovirt-release43.rpm Regards Parth Dhanjal On Mon, Jan 13, 2020 at 11:44 PM wrote: > Hello, > > By following the instruction at > https://www.ovirt.org/documentation/upgrade-guide/appe-Manually_Updating_Hosts.html, > I tried to upgrade host from 4.2.8(CentOS 7.6) to
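A sketch of the manual host update steps, assuming the host is moving to the 4.3 repositories (the full release RPM URL is reconstructed from the resources.ovirt.org pattern used elsewhere in these threads):

    yum install https://resources.ovirt.org/pub/yum-repo/ovirt-release43.rpm
    yum update
    # reboot the host afterwards and activate it from the engine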

[ovirt-users] Re: Engine deployment last step.... Can anyone help ?

2019-11-25 Thread Parth Dhanjal
is 400.”} On 25 Nov 2019, at 09:16, Rob wrote: > Yes, I'll restart all Nodes after wiping the failed setup of Hosted Engine using ovirt-hosted-engine-cleanup

[ovirt-users] Re: Engine deployment last step.... Can anyone help ?

2019-11-25 Thread Parth Dhanjal
look for errors under /var/log/ovirt-hosted-engine-setup/engine.log On Mon, Nov 25, 2019 at 3:13 PM Rob wrote: > On 25 Nov 2019, at 09:28, Parth Dhanjal wrote: > /var/log/vdsm/vdsm.log

[ovirt-users] Re: Engine deployment last step.... Can anyone help ?

2019-11-25 Thread Parth Dhanjal
t libvirtd > systemctl restart vdsm > although last time I did systemctl restart vdsm > VDSM did not restart, maybe that is OK as Hosted Engine was then de-deployed, or is that the issue? > On 25 Nov 2019, at 09:13, Parth Dhanjal wrote:
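For reference, a sketch of that restart sequence; note that the vdsm service is normally named vdsmd, which may be why systemctl restart vdsm appeared to do nothing:

    systemctl restart libvirtd
    systemctl restart vdsmd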

[ovirt-users] Re: Engine deployment last step.... Can anyone help ?

2019-11-25 Thread Parth Dhanjal
Can you please share the error in case it fails again? On Mon, Nov 25, 2019 at 2:42 PM Rob wrote: > hmm, I'll try again, that failed last time. > On 25 Nov 2019, at 09:08, Parth Dhanjal wrote: > Hey! > For Storage Connection you can add - :/engine > An

[ovirt-users] Re: Engine deployment last step.... Can anyone help ?

2019-11-25 Thread Parth Dhanjal
Hey! For Storage Connection you can add - <host1>:/engine And for Mount Options - backup-volfile-servers=<host2>:<host3> On Mon, Nov 25, 2019 at 2:31 PM wrote: > So... > > I have got to the last step > > 3 Machines with Gluster Storage configured however at the last screen > > Deploying the Engine to Gluster
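With hypothetical hosts host1, host2 and host3, the two fields would look something like:

    Storage Connection: host1:/engine
    Mount Options:      backup-volfile-servers=host2:host3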

[ovirt-users] Re: Gluster setup 3 Node - Now only showing single node setup in setup Wizard

2019-11-25 Thread Parth Dhanjal
Hey! What version of oVirt are you using? On Sat, Nov 23, 2019 at 4:17 PM wrote: > I have set up 3 Nodes with a separate volume for Gluster, I have set up > the two networks and DNS works fine. SSH has been set up for Gluster and you > can log in via ssh to the other two hosts from the host

[ovirt-users] Re: Hosted-Engine wizard disappeared after cockpit idle session timeout

2019-11-14 Thread Parth Dhanjal
Hey! Cockpit is stateless, so once the session ends you lose the data. The process completes in the background. I'd suggest running ovirt-hosted-engine-cleanup and then starting from the UI again if needed. You can add the port to the firewall by running firewall-cmd --permanent --zone=public

[ovirt-users] Re: oVirt 4.3.5/6 HC: Reinstall fails from WEB UI

2019-10-28 Thread Parth Dhanjal
Hey! Host2 and Host3 should be added automatically if you have provided the FQDN for these hosts during the deployment. From the error above "msg": "Error getting key from: https://ovirt-engine2.example.com/ovirt-engine/services/pki-resource?resource=engine-certificate&format=OPENSSH-PUBKEY " I think

[ovirt-users] Re: HE deployment failing - FAILED! => {"changed": false, "msg": "network default not found"}

2019-10-18 Thread Parth Dhanjal
Thanks! That resolved the issue. On Fri, Oct 18, 2019 at 7:22 PM Simone Tiraboschi wrote: > > > On Fri, Oct 18, 2019 at 3:46 PM Parth Dhanjal wrote: > >> Hey! >> >> I am trying a static IP deployment. >> But the HE deployment fails during the VM preparat

[ovirt-users] HE deployment failing - FAILED! => {"changed": false, "msg": "network default not found"}

2019-10-18 Thread Parth Dhanjal
"msg": "network default not found"} I tried restarting the network service, but the error still persists. Upon checking the logs, I can find the following https://pastebin.com/MB6GrLKA Has anyone else faced this issue? Regards Parth Dhanjal
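Assuming the message refers to the libvirt 'default' network, one way to check whether it exists on the host and to recreate it (the default.xml path is the usual libvirt packaging location and may differ):

    virsh net-list --all
    virsh net-define /usr/share/libvirt/networks/default.xml   # only if the network is missing entirely
    virsh net-start default
    virsh net-autostart default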

[ovirt-users] Re: HE deployment failing

2019-07-08 Thread Parth Dhanjal
Hey! The other 2 bricks were 50G each. I forgot to check that. Sorry for the confusion. Thanks! On Mon, Jul 8, 2019 at 11:42 AM Sahina Bose wrote: > On Mon, Jul 8, 2019 at 11:40 AM Parth Dhanjal wrote: >> Hey! >> I used cockpit to deploy glu

[ovirt-users] Re: HE deployment failing

2019-07-08 Thread Parth Dhanjal
ume to be so small ... > Best Regards, > Strahil Nikolov > On Friday, July 5, 2019, 10:17:11 GMT-4, Simone Tiraboschi < stira...@redhat.com> wrote: > On Fri, Jul 5, 2019 at 4:12 PM Parth Dhanjal wrote: > Hey! > I

[ovirt-users] HE deployment failing

2019-07-05 Thread Parth Dhanjal
o 90GiB. But the deployment still fails. A 50GiB storage domain is created by default even if some other size is provided. Has anyone faced a similar issue? Regards Parth Dhanjal

[ovirt-users] Re: 4.3.3 single node hyperconverged wizard failing because var/log is too small?

2019-05-09 Thread Parth Dhanjal
A workaround can be to skip the test by editing the final generated inventory file in the last step before deployment and adding gluster_features_force_varlogsizecheck: false under the vars section of the file. Regards Parth Dhanjal On Fri, May 10, 2019 at 5:58 AM Edward Berger wrote: > I'm try
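A sketch of that edit in the generated inventory file, under its existing vars section:

    vars:
      gluster_features_force_varlogsizecheck: false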

[ovirt-users] Re: Ovirt 4.3 RC missing glusterfs-gnfs

2019-02-28 Thread Parth Dhanjal
And the gluster version was glusterfs-5.3-2.el7.x86_64, upgraded from glusterfs-3.12.15-1.el7.x86_64 On Fri, Mar 1, 2019 at 12:01 PM Parth Dhanjal wrote: > Hey Sandro! > > I tried testing a 3 node setup 4.2 to 4.3 upgrade with the latest build. > I didn't face this issue while upgrading wit

[ovirt-users] Re: Ovirt 4.3 RC missing glusterfs-gnfs

2019-02-28 Thread Parth Dhanjal
Hey Sandro! I tried testing a 3-node setup upgrade from 4.2 to 4.3 with the latest build and didn't face this issue while upgrading. The glusterfs-gnfs package was removed automatically during the upgrade. I was using ovirt-release43-4.3.0-1.el7.noarch for testing Regards Parth Dhanjal

[ovirt-users] Re: oVirt-4.2: HE rebooting continuously and 2 hosts non-operational

2019-02-26 Thread Parth Dhanjal
Hey Sandro! The issue was resolved. It was a network problem. There were IPv6 configurations on the machine which were causing the issue. On Tue, Feb 26, 2019 at 1:02 AM Parth Dhanjal wrote: > +Sandro > > Hey Sandro! > Sorry for the late reply. > PFA vdsm, broker and agent

[ovirt-users] Re: Gluster setup Problem

2019-02-25 Thread Parth Dhanjal
Hey Matthew! Can you please provide me with the following to help debug the issue that you are facing? 1. oVirt and gdeploy version 2. /var/log/messages file 3. /root/.gdeploy file On Mon, Feb 25, 2019 at 1:23 PM Parth Dhanjal wrote: > Hey Matthew! > > Can you please provide wh

[ovirt-users] Re: [oVirt 4.3.1 Test Day] cmdline HE Deployment

2019-02-25 Thread Parth Dhanjal
Hey! You can check under /var/run/libvirt/qemu/HostedEngine.xml and search for 'vnc'. From there you can look up the port on which the HE VM is available and connect to the same. On Mon, Feb 25, 2019 at 6:47 PM Guillaume Pavese < guillaume.pav...@interactiv-group.com> wrote: > 1) I am running in a
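A quick way to do that lookup and connect, with a hypothetical host name and port:

    grep -A3 vnc /var/run/libvirt/qemu/HostedEngine.xml
    remote-viewer vnc://host1.example.com:5900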

[ovirt-users] oVirt-4.2: HE rebooting continuously and 2 hosts non-operational

2019-02-25 Thread Parth Dhanjal
of the non-operational hosts - http://pastebin.test.redhat.com/721991 Error in the broker log - http://pastebin.test.redhat.com/721990 Also, I couldn't see the brick mounted on the non-op hosts, even though the volume status seems fine. Does anyone know why this issue is occurring? Regards Parth Dhanjal

[ovirt-users] Re: Gluster setup Problem

2019-02-24 Thread Parth Dhanjal
Hey Matthew! Can you please tell me which oVirt and gdeploy versions you have installed? Regards Parth Dhanjal On Mon, Feb 25, 2019 at 12:56 PM Sahina Bose wrote: > +Gobinda Das +Dhanjal Parth can you please check? > > On Fri, Feb 22, 2019 at 11:52 PM Matthew Roth wrote: