Just for reference in case anyone else comes across this, I opted to take the
path of using GlusterFS in place of the NFS storage server that I've got, to
avoid having to make changes to the NFS server.
GlusterFS was set up on the same node, which is actually my preferred way to
go for this.
Hi again,
I found out that vdsm-gluster was not installed on the host I had just
upgraded, probably a bad manipulation on my part. I will need to be more
careful with the other host.
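In case it helps anyone else checking their remaining hosts, a quick way to verify the package is present is sketched below (assuming yum-based CentOS hosts, as in this thread):

```shell
# Report whether vdsm-gluster is installed; install it if it is missing.
rpm -q vdsm-gluster || yum install -y vdsm-gluster
```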
Everything seems to be back to normal.
Thanks for your help..
Carl
On Thu, Aug 6, 2020 at 4:49 PM carl
Hi,
I was able to update to oVirt 4.2.8 on one of my hosts using the CentOS 7.6
repo as Alex suggested. But now when I try to activate it I get this error in
the engine log:
2020-08-06 16:43:44,903-04 INFO
[org.ovirt.engine.core.vdsbroker.gluster.ManageGlusterServiceVDSCommand]
On Thu, Aug 6, 2020 at 6:42 PM carl langlois wrote:
> Hi,
>
> Thanks for the suggestion. I will try it.
> But at one point I will need to update the OS past 7.6 as 4.3.9 needs 7.7
> or later.
>
At that point you just switch your repos back to their previous state,
install the 4.3 repo, and proceed.
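As a sketch of that sequence (repo ids and the release-rpm URL follow the usual CentOS 7 / oVirt naming conventions, so verify them before use):

```shell
# Re-enable the stock CentOS repos that were switched away for the 7.6 pin.
yum-config-manager --enable base updates extras

# Pull in the oVirt 4.3 release repo, then update.
yum install https://resources.ovirt.org/pub/yum-repo/ovirt-release43.rpm
yum update
```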
On Thu, Aug 6, 2020 at 6:33 PM wrote:
> I have tried importing oVirt exported VMs (which correctly imported and
> ran on a second oVirt host) both on VMware workstation 15.5.6 and
> VirtualBox 6.1.12 running on a Windows 2019 host without success.
>
> I've also tried untarring the *.ova into the
After applying the OVA export patch, which ensured disk content was actually
written into the OVA image, I have been able to transfer *.ova VMs between two
oVirt clusters. There are still problems that I will report once I have fully
tested what's going on, but in the meantime for all those who
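For anyone else chasing this, an OVA is just a tar archive, so it is easy to check whether the disk image actually contains data before importing. A sketch (file names here are placeholders):

```shell
# List the archive contents: expect an .ovf descriptor plus the disk image(s).
tar -tvf myvm.ova

# Unpack and inspect the disk image (its file name varies per export;
# substitute the one listed above). An export hit by the empty-disk bug
# reports an implausibly small or empty disk here.
mkdir -p /tmp/ova
tar -xvf myvm.ova -C /tmp/ova
qemu-img info /tmp/ova/DISK_FILE
```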
Hi,
Thanks for the suggestion. I will try it.
But at one point I will need to update the OS past 7.6 as 4.3.9 needs 7.7
or later.
Regards
Carl
On Thu, Aug 6, 2020 at 9:49 AM Alex K wrote:
> Hi
>
> On Thu, Aug 6, 2020 at 3:45 PM carl langlois
> wrote:
>
>> Hi all,
>>
>> I am in the process
oVirt 4.4.2 Second Release Candidate is now available for testing
The oVirt Project is pleased to announce the availability of oVirt 4.4.2
Second Release Candidate for testing, as of August 6th, 2020.
This update is the second in a series of stabilization updates to the 4.4
series.
Important
I have done that, and even added five nodes that contribute a separate Gluster
file system using dispersed (erasure-coded, more space-efficient) mode.
But in another cluster with such a 3-node HCI base, I had a number (3 or 4) of
compute nodes that were actually dual-boot or just shut off when not used:
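For reference, a dispersed volume of the kind described above can be created like this (a sketch only; the volume name, brick paths, and the 4+1 geometry for five nodes are illustrative):

```shell
# Five bricks: data striped over 4, surviving the loss of any 1 (redundancy 1).
gluster volume create dispersedvol disperse-data 4 redundancy 1 \
    node1:/gluster/brick node2:/gluster/brick node3:/gluster/brick \
    node4:/gluster/brick node5:/gluster/brick
gluster volume start dispersedvol
```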
Hi Nardus,
I'm assuming that your setup was stable and you were able to run your VMs
without problems. If so, then what follows is not a solution to your problem;
you should really check the engine and VDSM logs for the reasons why your
hosts become NonResponsive. Most probably there is underlying storage or
If OVA export and import work for you, you get to choose between the two at
import.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of
Thanks Nardus,
After a quick look I found what I was suspecting: there are way too many
threads in the Blocked state. I don't know the reason yet, but this is very
helpful. I'll let you know about the findings/investigation. Meanwhile, you
may try restarting the engine as (a very brute and ugly)
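For the record, the brute-force workaround is just a restart of the engine service on the engine host (a sketch; a standalone engine is assumed, and on a hosted engine you would enable global maintenance first):

```shell
# Restart the engine; running VMs are unaffected, only management pauses.
systemctl restart ovirt-engine

# Confirm the service came back up.
systemctl status ovirt-engine --no-pager
```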
Hi oVirt land.
Can I convert disks from Thin Provision to Preallocated?
Best Regards.
--
Att,
Jorge Visentini
+55 55 98432-9868
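Outside the admin UI, a thin (sparse qcow2) disk image can be rewritten as a fully allocated copy with qemu-img; a minimal sketch, assuming the VM is down and with placeholder paths:

```shell
# Rewrite a sparse qcow2 image as a fully preallocated qcow2 copy.
qemu-img convert -p -O qcow2 -o preallocation=full \
    /path/to/thin-disk.qcow2 /path/to/preallocated-disk.qcow2

# Check the result's format and allocation.
qemu-img info /path/to/preallocated-disk.qcow2
```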
Sure thing.
On the engine host, please find the jboss pid. You can use this command:
ps -ef | grep jboss | grep -v grep | awk '{ print $2 }'
or the jps tool from the JDK. Sample output on my dev environment is:
% jps
64853 jboss-modules.jar
196217 Jps
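With the pid found, the thread dump itself can be captured as follows (a sketch, assuming jstack from the same JDK is on the PATH):

```shell
# Find the jboss pid with the same pipeline as above.
PID=$(ps -ef | grep jboss | grep -v grep | awk '{ print $2 }')

# jstack writes every thread's stack to stdout; save it to attach to a report.
jstack "$PID" > /tmp/engine-thread-dump.txt

# Alternative: SIGQUIT makes the JVM print the dump to its console log
# without stopping the process.
kill -3 "$PID"
```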
Hi
On Thu, Aug 6, 2020 at 3:45 PM carl langlois wrote:
> Hi all,
>
> I am in the process of upgrading our cluster to 4.3. But first I need to
> update everything to 4.2.8 and update the OS to the latest 7.x. I was able
> to update the self-hosted engine to the latest 4.2.8 and CentOS 7.8. But
>
Here is a sample of the error:
glusterfs = 3.12.15-1.el7
Error: Package: glusterfs-server-3.12.15-1.el7.x86_64
(ovirt-4.2-centos-gluster312)
Requires: glusterfs-api = 3.12.15-1.el7
Removing: glusterfs-api-3.12.11-1.el7.x86_64
(@ovirt-4.2-centos-gluster312)
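One way past a mixed-version dependency error like this is to refresh the repo metadata and update the whole glusterfs stack in one transaction, so glusterfs-api cannot lag behind glusterfs-server (a sketch; adjust if your repos pin different versions):

```shell
# Drop stale repo metadata that may be hiding the newer packages.
yum clean all

# Update every glusterfs subpackage together so -api, -server and friends
# all land on the same version (3.12.15-1.el7 in the error above).
yum update 'glusterfs*'
```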
Hi
I can create a thread dump; please send details on how to.
Regards
Nardus
On Thu, 6 Aug 2020 at 14:17, Artur Socha wrote:
> Hi Nardus,
> You might have hit an issue I have been hunting for some time ( [1] and
> [2] ).
> [1] could not be properly resolved because at the time I was not able to
>
Hi
[root@engine-aa-1-01 ovirt-engine]# sudo yum list installed | grep vdsm
vdsm-jsonrpc-java.noarch 1.4.18-1.el7
@ovirt-4.3
Hi all,
I am in the process of upgrading our cluster to 4.3. But first I need to
update everything to 4.2.8 and update the OS to the latest 7.x. I was able
to update the self-hosted engine to the latest 4.2.8 and CentOS 7.8. But
when I tried to update the hosts, yum update got broken gluster
Hi Nardus,
You might have hit an issue I have been hunting for some time ( [1] and
[2] ).
[1] could not be properly resolved because at the time I was not able to
recreate the issue on a dev setup.
I suspect [2] is related.
Would you be able to prepare a thread dump from your engine instance?
Hi
Hope you are well. Did you find a solution for this? I think we have the same
type of issue.
Regards
Nar
Also see this in engine:
Aug 6, 2020, 7:37:17 AM
VDSM someserver command Get Host Capabilities failed: Message timeout which
can be caused by communication issues
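A reasonable first step for that kind of message timeout is to look at VDSM on the affected host and at the engine log (a sketch; the paths are the usual defaults and may differ per version):

```shell
# On the non-responsive host: is VDSM running, and what is it logging?
systemctl status vdsmd --no-pager
tail -n 100 /var/log/vdsm/vdsm.log

# On the engine host: the matching timeout errors land in engine.log.
tail -n 100 /var/log/ovirt-engine/engine.log
```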
On Thu, 6 Aug 2020 at 07:09, Strahil Nikolov wrote:
> Can you check for errors on the affected host? Most probably you need the
>