Hey!
These are the steps you need to follow
1. Shut down all VMs through the oVirt UI except the Hosted Engine (HE) VM.
2. Then move the HE VM into global maintenance. On one of the hosts, run
hosted-engine --set-maintenance --mode=global
3. Power off the HE VM.
hosted-engine --vm-shutdown (to ensure a graceful shutdown)
with replica 2 configuration
>
> 10.33.50.33:/VOL1VOL1
> 10.33.50.34:/VOL1/VOL1
>
> thanks
>
> On Wed, Dec 16, 2020 at 12:09 PM Parth Dhanjal wrote:
Hey!
Did you input a mount point?
It seems from the error message that either the mount point was missing or
was wrong.
On Wed, Dec 16, 2020 at 6:07 AM Ariez Ahito
wrote:
> Hi guys, I have installed oVirt 4.4 hosted engine and a separate glusterfs
> storage.
> now during hosted engine deploym
the same network for both the inputs. But as you have two
different networks, you can mention the storage network in the first field
and the management network in the second.
Thanks
Parth Dhanjal
On Sun, Dec 13, 2020 at 11:59 PM Gilboa Davara wrote:
> Hello all,
>
> I'm slowly
Hello!
To my knowledge, the latest build is oVirt/RHVH 4.4.3,
so there is no oVirt/RHVH 4.5 as of now.
On Thu, Dec 10, 2020 at 9:40 PM Gianluca Cecchi
wrote:
> Hello,
> my engine is 4.4.3.12-1.el8 and my 3 oVirt nodes (based on plain CentOS
> due to megaraid_sas kernel module needed) have been
Can you shut down the VM and try once?
On Fri, Dec 4, 2020 at 7:17 PM wrote:
> Hi all,
>
> Is there any way how to edit cpu/memory/boot and stuff like that once the
> VM has been created by the pool? All options when trying to edit the VM are
> greyed out. We are unable to edit any option for vm in p
Hey!
Are you running over CentOS?
Either you have to uncheck the "Blacklist gluster devices" option on the
bricks page and try again,
or you can add a filter to /etc/lvm/lvm.conf, something like this:
"a|^/dev/sda2$|",
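A minimal sketch of that lvm.conf edit, assuming the device is /dev/sda2 as in the line above (the mock file here stands in for the real /etc/lvm/lvm.conf):

```shell
# Mock lvm.conf with a reject-all filter (stand-in for /etc/lvm/lvm.conf).
conf=$(mktemp)
printf 'devices {\n    filter = [ "r|.*|" ]\n}\n' > "$conf"

# Insert an accept rule for /dev/sda2 ahead of the reject-all rule.
sed -i 's#filter = \[ #filter = [ "a|^/dev/sda2$|", #' "$conf"

cat "$conf"
```

On a real host you would edit /etc/lvm/lvm.conf directly and then re-run the deployment.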
On Mon, Nov 23, 2020 at 6:17 PM wrote:
> Trying to deploy a 3 Node Hyperconver
Hey!
In case you are deploying on any server which is not RHVH based, the
devices are not automatically blacklisted.
Or it could be because the disk was previously partitioned.
You can try these solutions to see if they help:
If the filter is correct (/etc/lvm/lvm.conf) and old partition table
informat
n the hosts file, so
that ansible can execute roles on the server.
You can refer to this doc as well -
https://ovirt.org/documentation/gluster-hyperconverged/chap-Deploying_Hyperconverged.html
On Fri, Oct 30, 2020 at 4:20 PM Gianluca Cecchi
wrote:
> On Fri, Oct 30, 2020 at 11:43 AM Parth Dhanja
Hey!
It seems vdsm packages are missing.
Can you try installing vdsm-gluster and ovirt-engine-appliance packages?
In case you face repo issues, just run: yum install
https://resources.ovirt.org/pub/yum-repo/ovirt-release-master.rpm
Then try again.
Thanks!
On Fri, Oct 30, 2020 at 4:00 PM Gianl
>>
>>
>>
>>
>> Hello Marcel,
>> For a note, you can't expand your single gluster node cluster to 3
>> nodes. You can only add compute nodes.
>> If you want to add compute nodes then you do not need any glusterfs packages
>> to be installe
Hey Marcel!
You have to install the required glusterfs packages and then deploy the
gluster setup on the 2 new hosts. After creating the required LVs, VGs,
thinpools, mount points and bricks, you'll have to expand the
gluster-cluster from the current host using add-brick functionality from
gluster
Hey!
Can you check the cockpit service on the host?
systemctl status cockpit
In case it is not started:
systemctl start cockpit
This issue could also be caused by a missing firewall exception.
You can try this -
firewall-cmd --permanent --add-port=9090/tcp
firewall-cmd --permanent --add-port=9090/udp
systemctl re
Hey!
This is a known bug targeted for oVirt 4.4.3.
Firstly, multipath should ideally be used when you are not using a RHVH
system.
Then disabling the "blacklist gluster device" option will ensure that the
ansible inventory file doesn't blacklist your device.
In case you have a multipath and you m
Hey!
Can you try editing the lvm cache filter and including the sdc multipath
device in the filter?
I see that it is missing, hence the error that sdc is excluded.
Add "a|^/dev/sdc$|" to the lvmfilter and try again.
Thanks
On Mon, Sep 21, 2020 at 11:34 AM Jeremey Wise
wrote:
>
>
>
> [image: image.p
Hey!
Have you added the hostname to the known_hosts file or set up passwordless
ssh for the single node?
Ansible requires passwordless ssh to ensure root access to automate the
installation process.
If yes, then this issue is probably fixed in the next build of
cockpit-ovirt.
Can you try to upgrad
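The passwordless-ssh prerequisite mentioned above can be sketched like this (the hostname host1.example.com is hypothetical, and the key path uses a temp directory just for illustration; normally ~/.ssh is used):

```shell
# Generate a key pair with no passphrase (temp dir for illustration only;
# the usual location is ~/.ssh/id_ed25519).
keydir=$(mktemp -d)
ssh-keygen -q -t ed25519 -N "" -f "$keydir/id_ed25519"

# Then copy the public key to the host Ansible will manage
# (hypothetical hostname -- run this against your real host):
#   ssh-copy-id -i "$keydir/id_ed25519.pub" root@host1.example.com

ls "$keydir"
```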
Hey!
I have a remote machine on which I have installed RHVH 4.4.
I'm unable to access the cockpit-plugin.
journalctl -u cockpit returns this error
cockpit-tls: gnutls_handshake failed: A TLS fatal alert has been received.
Screenshot while trying to reach cockpit through the browser attached.
Is
/yum-repo/ovirt-release43.rpm
Regards
Parth Dhanjal
On Mon, Jan 13, 2020 at 11:44 PM wrote:
> Hello,
>
> By following the instruction at
> https://www.ovirt.org/documentation/upgrade-guide/appe-Manually_Updating_Hosts.html,
> I tried to upgrade host from 4.2.8(CentOS 7.6) to 4.3
is 400.”}
>>>
>>>
>>> On 25 Nov 2019, at 09:16, Rob wrote:
>>>
>>> Yes,
>>>
>>> I’ll restart all Nodes after wiping the failed setup of Hosted Engine
>>> using:
>>>
>>> ovirt-hosted-engine-cleanup
look for errors
under /var/log/ovirt-hosted-engine-setup/engine.log
On Mon, Nov 25, 2019 at 3:13 PM Rob wrote:
>
>
> On 25 Nov 2019, at 09:28, Parth Dhanjal wrote:
>
> /var/log/vdsm/vdsm.log
>
>
>
t libvirtd
> systemctl restart vdsm
>
> although last time I did
>
> systemctl restart vdsm
>
> VDSM did not restart. Maybe that is OK as Hosted Engine was then
> de-deployed, or is that the issue?
>
>
> On 25 Nov 2019, at 09:13, Parth Dhanjal wrote:
Can you please share the error in case it fails again?
On Mon, Nov 25, 2019 at 2:42 PM Rob wrote:
> hmm, I’ll try again, that failed last time.
>
>
> On 25 Nov 2019, at 09:08, Parth Dhanjal wrote:
Hey!
For the
Storage Connection you can add: <host>:/engine
And for the
Mount Options: backup-volfile-servers=<host2>:<host3>
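Filled in with hypothetical hostnames (host1/host2/host3 are placeholders, not values from this thread), those two fields would look like:

```
Storage Connection: host1.example.com:/engine
Mount Options: backup-volfile-servers=host2.example.com:host3.example.com
```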
On Mon, Nov 25, 2019 at 2:31 PM wrote:
> So...
>
> I have got to the last step
>
> 3 Machines with Gluster Storage configured however at the last screen
>
> Deploying the Engine to Gluster a
Hey!
What version of oVirt are you using?
On Sat, Nov 23, 2019 at 4:17 PM wrote:
> I have set up 3 Nodes with a separate volume for Gluster. I have set up
> the two networks, and DNS works fine. SSH has been set up for Gluster, and
> you can log in via ssh to the other two hosts from the host used
Hey!
cockpit is stateless, so once the session ends you lose the data. The
process completes in the background.
I'd suggest running ovirt-hosted-engine-cleanup and then starting from the
UI again if needed.
You can add the port to the firewall by running
firewall-cmd --permanent --zone=public --add-p
Hey!
Host2 and Host3 should be added automatically if you have provided the FQDN
for these hosts during the deployment.
From the error above
"msg": "Error getting key from:
https://ovirt-engine2.example.com/ovirt-engine/services/pki-resource?resource=engine-certificate&format=OPENSSH-PUBKEY
"
I
Thanks!
That resolved the issue.
On Fri, Oct 18, 2019 at 7:22 PM Simone Tiraboschi
wrote:
>
>
> On Fri, Oct 18, 2019 at 3:46 PM Parth Dhanjal wrote:
>
>> Hey!
>>
>> I am trying a static IP deployment.
>> But the HE deployment fails during the VM preparat
"msg": "network
default not found"}
I tried restarting the network service, but the error still persists.
Upon checking the logs, I can find the following
https://pastebin.com/MB6GrLKA
Has anyone else faced this issue?
Regards
Parth Dhanjal
Hey!
The other 2 bricks were of 50G each.
I forgot to check that.
Sorry for the confusion.
Thanks!
On Mon, Jul 8, 2019 at 11:42 AM Sahina Bose wrote:
>
>
> On Mon, Jul 8, 2019 at 11:40 AM Parth Dhanjal wrote:
>
>> Hey!
>>
>> I used cockpit to deploy gluster.
's volume to be so
> small ...
>
> Best Regards,
> Strahil Nikolov
>
> On Friday, July 5, 2019 at 10:17:11 AM GMT-4, Simone Tiraboschi <
> stira...@redhat.com> wrote:
>
>
>
>
> On Fri, Jul 5, 2019 at 4:12 PM Parth Dhanjal wrote:
>
> Hey!
>
ab) to 90GiB.
But the deployment still fails. A 50GiB storage domain is created by
default even if some other size is provided.
Has anyone faced a similar issue?
Regards
Parth Dhanjal
___
Users mailing list -- users@ovirt.org
To unsubscribe send an
A workaround can be to skip the test by editing the final generated
inventory file in the last step before deployment and adding
gluster_features_force_varlogsizecheck: false
under the vars section of the file.
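A minimal sketch of that edit (the inventory below is a mock with hypothetical values, not the file the wizard actually generates):

```shell
# Mock of the generated inventory's vars section (hypothetical values).
inv=$(mktemp)
cat > "$inv" <<'EOF'
hc_nodes:
  hosts:
    host1.example.com:
  vars:
    gluster_infra_disktype: JBOD
EOF

# Add the variable that skips the /var/log size check under vars.
printf '    gluster_features_force_varlogsizecheck: false\n' >> "$inv"

cat "$inv"
```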
Regards
Parth Dhanjal
On Fri, May 10, 2019 at 5:58 AM Edward Berger wrote:
> I'm t
And gluster version glusterfs-5.3-2.el7.x86_64 upgraded
from glusterfs-3.12.15-1.el7.x86_64
On Fri, Mar 1, 2019 at 12:01 PM Parth Dhanjal wrote:
> Hey Sandro!
>
> I tried testing a 3 node setup 4.2 to 4.3 upgrade with the latest build.
> I didn't face this issue while upgradin
Hey Sandro!
I tried testing a 3 node setup 4.2 to 4.3 upgrade with the latest build.
I didn't face this issue while upgrading with the latest build.
gluster-gnfs package was removed automatically with an upgrade.
I was using ovirt-release43-4.3.0-1.el7.noarch for testing
Regards
Parth Dh
Hey Sandro!
The issue was resolved.
It was a network problem.
There were IPv6 configurations on the machine which were causing the issue.
On Tue, Feb 26, 2019 at 1:02 AM Parth Dhanjal wrote:
> +Sandro
>
> Hey Sandro!
> Sorry for the late reply.
> PFA vdsm, broker and agent
Hey Matthew!
Can you please provide me with the following to help debug the issue you
are facing?
1. oVirt and gdeploy version
2. /var/log/messages file
3. /root/.gdeploy file
On Mon, Feb 25, 2019 at 1:23 PM Parth Dhanjal wrote:
> Hey Matthew!
>
> Can you please provide wh
Hey!
You can check under /var/run/libvirt/qemu/HostedEngine.xml
Search for 'vnc'.
From there you can look up the port on which the HE VM is available and
connect to it.
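A sketch of that lookup (the XML below is a minimal mock of the libvirt domain file; on a real host you would grep /var/run/libvirt/qemu/HostedEngine.xml, and the port will differ):

```shell
# Minimal mock of the HostedEngine domain XML (illustration only).
xml=$(mktemp)
cat > "$xml" <<'EOF'
<domain type='kvm'>
  <devices>
    <graphics type='vnc' port='5900' autoport='yes' listen='0.0.0.0'/>
  </devices>
</domain>
EOF

# Extract the VNC port from the graphics element.
port=$(grep "type='vnc'" "$xml" | sed "s/.*port='\([0-9]*\)'.*/\1/")
echo "$port"   # prints 5900 for this mock
```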
On Mon, Feb 25, 2019 at 6:47 PM Guillaume Pavese <
guillaume.pav...@interactiv-group.com> wrote:
> 1) I am running in a
for one of the
non-operational hosts - http://pastebin.test.redhat.com/721991 Error in the
broker log - http://pastebin.test.redhat.com/721990 Also, I couldn't see
the brick mounted on the non-op hosts. Even though the volume status seems
fine.
Does anyone know why this issue is occurring?
Regard
Hey Matthew!
Can you please provide which oVirt and gdeploy version have you installed?
Regards
Parth Dhanjal
On Mon, Feb 25, 2019 at 12:56 PM Sahina Bose wrote:
> +Gobinda Das +Dhanjal Parth can you please check?
>
> On Fri, Feb 22, 2019 at 11:52 PM Matthew Roth wrote:
> &g