On Tue, Mar 9, 2021 at 8:44 AM wrote:
>
> "ansible_distribution" : "AlmaLinux",
> "ansible_distribution_release" : "Purple Manul",
> "ansible_distribution_version" : "8.3",
> "ansible_distribution_major_version" : "8",
> "ansible_distribution_file_path" : "/etc/redhat-release",
> "ansible_distribu
"ansible_distribution" : "AlmaLinux",
"ansible_distribution_release" : "Purple Manul",
"ansible_distribution_version" : "8.3",
"ansible_distribution_major_version" : "8",
"ansible_distribution_file_path" : "/etc/redhat-release",
"ansible_distribution_file_variety" : "RedHat",
"ansible_distribution_
On Tue, Mar 9, 2021 at 8:17 AM Yedidyah Bar David wrote:
>
> On Tue, Mar 9, 2021 at 7:33 AM wrote:
> >
> > The installation of the ovirt "hosted-engine" hangs at the stage "[INFO]
> > TASK [ovirt.ovirt.hosted_engine_setup: Wait for the host to be up]"
> > (https://pastebin.com/zvf9T8nP) for 20
Also check the status of the file on each brick with the getfattr command (see
https://docs.gluster.org/en/latest/Troubleshooting/resolving-splitbrain/ ) and
provide the output.
Best Regards,
Strahil Nikolov
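For reference, the pending-heal state that those getfattr xattrs encode can be checked programmatically. A minimal sketch, assuming the AFR attribute layout (8 hex digits each for data/metadata/entry pending counters) described in the linked gluster docs; the sample output below is illustrative, not taken from the reporter's bricks:

```python
# Parse `getfattr -d -m . -e hex <file-on-brick>` output and report
# non-zero AFR (self-heal) pending counters.

SAMPLE = """\
trusted.afr.myvol-client-0=0x000000020000000000000000
trusted.afr.myvol-client-1=0x000000000000000000000000
trusted.gfid=0xdeadbeefdeadbeefdeadbeefdeadbeef
"""

def pending_afr(getfattr_output):
    """Return {attr_name: (data, metadata, entry)} for non-zero AFR xattrs."""
    pending = {}
    for line in getfattr_output.splitlines():
        if not line.startswith("trusted.afr.") or "=0x" not in line:
            continue
        name, hexval = line.split("=0x", 1)
        # Each counter is 8 hex digits: data, then metadata, then entry ops.
        data, meta, entry = (int(hexval[i:i + 8], 16) for i in (0, 8, 16))
        if data or meta or entry:
            pending[name] = (data, meta, entry)
    return pending

print(pending_afr(SAMPLE))  # non-zero counters mean heals are still pending
```

Run it against the output from each brick; counters that never drop to zero point at the brick holding the stale copy.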
The installation of the ovirt "hosted-engine" hangs at the stage "[INFO] TASK
[ovirt.ovirt.hosted_engine_setup: Wait for the host to be up]"
(https://pastebin.com/zvf9T8nP) for 20 minutes, then gives an error " [ERROR]
fatal: [localhost]: FAILED! => {"Changed": false, "msg": Host is not up, plea
Install on AlmaLinux distro.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct:
https://www.ovirt.org/community/about/commun
You can route it to a private address on your router if you want...
We use EVPN/VXLAN (but regular old VLANs work too). Just put the public
space on a VLAN and add it as a VLAN-tagged network in oVirt. Only your
public-facing VMs need addresses in the space.
On 2021-03-08 05:53, David White
For me, what happens every few months is the Hosted Engine fills up its
/var/log, specifically the httpd folder. Once this partition is full, the
HE only runs for a few seconds before shutting down and trying a different
host. Obviously that makes no difference, so it just starts & stops over
and
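A periodic check like the following could catch the partition before it fills; the 90% threshold and the path are assumptions, not part of the thread:

```python
# Warn before /var/log fills up and takes the hosted engine down.
import shutil

def percent_used(path):
    """Percentage of the filesystem holding `path` that is in use."""
    usage = shutil.disk_usage(path)
    return 100.0 * usage.used / usage.total

# Use "/var/log" on the engine VM; "/" here so the sketch runs anywhere.
if percent_used("/") > 90.0:
    print("WARNING: log partition nearly full; rotate/prune httpd logs")
```

Cron it on the engine VM, or wire the same check into your monitoring.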
There may be an aspect of this where the HCI engine composes Gluster bricks into
the hosted-engine storage and then is not able to add hosts into the cluster.
Below is the looping error:
[root@ovirte01 ~]# tail -f /var/log/ovirt-engine/engine.log
2021-03-08 10:40:19,192-05 INFO
[org.ovirt.engine.core.
On Tue, Mar 2, 2021 at 07:34 Thiago Linhares wrote:
> Hello there,
>
> I wonder what's the right approach to get security updates for ovirt nodes?
> (installed using ovirt node iso image)
>
> Eg.:
> The 'sudo' package has a known vulnerability until version
> sudo-1.8.23-9.el7.x86_64.
>
On Mon, Feb 1, 2021 at 11:46 Renaud RAKOTOMALALA <
renaud.rakotomal...@smile.fr> wrote:
> Hello everyone,
>
> I operate several oVirt clusters including pre-productions using
> ovirt-node-ng images.
>
> For our traditional clusters we manage the incident in a unitary way with
> a d
<<< Update >>>
As I try to add the node, I am able to add the fingerprint, and below is the
output from the engine:
[root@ovirte01 ~]# tail -f /var/log/ovirt-engine/server.log
...
2021-03-08 10:23:21,542-05 WARNING [javax.persistence.spi] (default task-6)
javax.persistence.spi::No valid providers found.
2021-0
I reinstalled the OS on all nodes with CentOS 8 Stream.
I installed the Cockpit engine setup and ran through the HCI deploy wizard with
Gluster. This deployed the 4.4 version of the engine, which then said a 4.5
version was available.
I try to add hosts and get the error "Error while executing action: Server
thor.penguinpages.local is al
In the broker log, these lines are pretty much repeating:
MainThread::WARNING::2021-03-03
09:19:12,086::storage_broker::97::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(__init__)
Can't connect vdsm storage: 'metadata_image_UUID can't be 'None'
MainThread::INFO::2021-03-03
09:19:12,
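Those broker.log lines can be split into fields for filtering. A sketch whose field layout (thread::level::timestamp::module::lineno::logger::(func) message) is inferred from the lines quoted above, not from ovirt-hosted-engine-ha documentation:

```python
# Split one ovirt-hosted-engine-ha broker log line into its fields.

def parse_broker_line(line):
    # The timestamp contains no "::", so a bounded split is safe.
    thread, level, timestamp, module, lineno, logger, rest = line.split("::", 6)
    func, _, message = rest.partition(") ")
    return {
        "thread": thread, "level": level, "timestamp": timestamp,
        "module": module, "lineno": int(lineno), "logger": logger,
        "func": func.lstrip("("), "message": message,
    }

LINE = ("MainThread::WARNING::2021-03-03 09:19:12,086::storage_broker::97::"
        "ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::"
        "(__init__) Can't connect vdsm storage: "
        "'metadata_image_UUID can't be 'None'")
rec = parse_broker_line(LINE)
print(rec["level"], rec["module"], rec["message"])
```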
Thank you for your reply.
I'm trying that right now and I see it triggered the self-healing process.
I will come back with an update.
Best regards.
Thank you.
I have tried that and it didn't work, as the system sees that the file is not in
split-brain.
I have also tried a force heal and a full heal and still nothing. I always end
up with the entry being stuck in an unsynced state.
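One way to watch whether the stuck entry count actually changes after a heal is to tally the output of `gluster volume heal <vol> info`. A sketch assuming the output format shown in the gluster docs; the sample text is illustrative, not from the reporter's system:

```python
# Map each brick in `gluster volume heal <vol> info` output to the
# paths still awaiting heal.

SAMPLE = """\
Brick host1:/gluster/brick1
/vmstore/stuck-file.img
Status: Connected
Number of entries: 1

Brick host2:/gluster/brick1
Status: Connected
Number of entries: 0
"""

def unhealed_entries(heal_info):
    entries, brick = {}, None
    for line in heal_info.splitlines():
        line = line.strip()
        if line.startswith("Brick "):
            brick = line[len("Brick "):]
            entries[brick] = []
        elif brick and line and not line.startswith(("Status:",
                                                     "Number of entries:")):
            entries[brick].append(line)
    return entries

print(unhealed_entries(SAMPLE))
```

Diffing two runs taken before and after a heal shows whether anything moved.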
On Monday, 8 March 2021 12:46:58 CET kim.karga...@noroff.no wrote:
> Hi,
>
> We have plenty of space, I have checked with both df -h and df -i.
>
> Here are the vdsm logs from one node.
What is the whole flow you are running? This looks like a flow where you try
to delete an image, which is p
Hi,
We have plenty of space, I have checked with both df -h and df -i.
Here are the vdsm logs from one node.
2021-03-08 12:09:15,550+0100 INFO (jsonrpc/7) [vdsm.api] START
deleteImage(sdUUID=u'fb14e013-15f1-49b6-b129-210517aca0da',
spUUID=u'aec485fa-cdbb-4979-8dc9-a2376559b2a4',
imgUUID=u'
Also, sharing the vdsm logs from all the hosts at that time would be useful.
On Mon, Mar 8, 2021 at 4:55 PM Ritesh Chikatwar wrote:
> Hello,
>
> I would recommend checking how much space you have available and which
> partition is filling up with the following command:
>
> df -h
>
> Also if you have e
Hello,
I would recommend checking how much space you have available and which
partition is filling up with the following command:
df -h
Also, if you have enough space, make sure to check your inode usage using df -i
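The inode check that df -i performs can also be done from Python if you want to alert on it; the 90% threshold below is an arbitrary assumption:

```python
# Report inode usage for a filesystem, like `df -i` does.
import os

def inode_percent_used(path="/"):
    st = os.statvfs(path)
    if st.f_files == 0:          # filesystem without inode accounting
        return 0.0
    return 100.0 * (st.f_files - st.f_ffree) / st.f_files

usage = inode_percent_used("/")
print(f"inodes used on /: {usage:.1f}%")
if usage > 90.0:
    print("WARNING: inode exhaustion also reports 'No space left on device'")
```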
On Mon, Mar 8, 2021 at 4:43 PM wrote:
> Hi,
>
> We are running ovirt 4.3 and w
Hi,
We are running ovirt 4.3 and whenever we create a new VM, we get the error:
HSMGetAllTasksStatusesVDS failed: Error creating a new volume: (u"Volume
creation 77be733a-a38a-4125-a3ae-beea960d9e28 failed: (28, 'Sanlock resource
write failure', 'No space left on device')",)
3/8/21 11:56:57 AM
Hello,
Did you try starting it using hosted-engine --vm-start?
Please check/share all relevant logs
from the engine and hosts, and at least:
engine.log &
/var/log/vdsm/*
On Mon, Mar 8, 2021 at 3:54 PM wrote:
> Hello.
> I have a 4 host infrastructure in ovirt and a few days ago the
> hosted-engine
If I have a private network (10.1.0.0/24) that is being used by the cluster for
intra-host communication & replication, how do I get a block of public IP
addresses routed to the virtual cluster?
For example, let's say I have a public /28, and let's use 1.1.1.0/28 for
example purposes.
I'll assi
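The /28 in the example works out as follows (a quick check with Python's ipaddress module):

```python
# Which addresses in 1.1.1.0/28 are actually assignable to
# public-facing VMs on the tagged VLAN.
import ipaddress

net = ipaddress.ip_network("1.1.1.0/28")
hosts = list(net.hosts())  # excludes network and broadcast addresses
print(net.network_address, net.broadcast_address, len(hosts))
# → 1.1.1.0 1.1.1.15 14
```

So 14 usable addresses, 1.1.1.1 through 1.1.1.14, minus whatever the router's VLAN interface takes.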
Hello.
I have a 4 host infrastructure in ovirt and a few days ago the hosted-engine
was turned off and I cannot turn it on from any host. Any ideas? Thanks.
--== Host host1.myhost.com (id: 7) status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname
Hi,
On Mon, Mar 8, 2021 at 10:13 AM Marko Vrgotic wrote:
>
> I cannot find the reason why the re-deployment on this host fails, as it was
> already deployed on it before.
>
> No errors found in the deployment, but it seems half done, based on
> messages I sent in a previous email.
Please chec
I cannot find the reason why the re-deployment on this host fails, as it was
already deployed on it before.
No errors found in the deployment, but it seems half done, based on messages
I sent in a previous email.
-
kind regards/met vriendelijke groeten
Marko Vrgotic
Sr. System Engineer @ S