I went the other way: I created another HE on a new host. On a separate
server I created an export domain, connected it to both HEs, and migrated all
VMs through export/import between the two engines. Everything went quickly and
without problems.
Thank you for your reply. I used GlusterFS storage with three replicas. When
one of my nodes was down, the HostedEngine was suspended for a period of
time. I/O errors were reported when I tried to start the HostedEngine virtual
machine with virsh resume, and it returned to normal when I
FYI
It looks like virt-viewer 7 is in the RHEL 8 Beta.
The latest version of virt-viewer is 8, just released, and version 6 changed the
API version to 4.
Maybe you could install a later version using Flatpak or Snap.
Regards,
Paul S.
Hello,
we are using oVirt 4.2.8 and I have created a logical network using the
ovn-network-provider; I haven't configured it to connect to a physical network.
I have 2 VMs running on 2 hosts which can connect to each other over this
logical network. The only connection between the
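A quick way to sanity-check the OVN side (a rough sketch; it assumes the OVN
CLI tools are available on the host running ovirt-provider-ovn, which is not
stated in the message):
# List the logical switches known to the OVN northbound DB:
ovn-nbctl ls-list
# Show registered chassis and their Geneve tunnel endpoints; both hosts
# should appear here for cross-host VM traffic to work:
ovn-sbctl show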
There were some issues with the migration.
Check that all files/directories are owned by vdsm:kvm.
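For example, a minimal sketch (the mount path below is the usual oVirt
location; adjust to the actual storage domain):
# Find anything under the domain mounts not owned by vdsm:kvm:
find /rhev/data-center/mnt -not -user vdsm -o -not -group kvm
# Fix ownership if needed (replace the placeholder with the real domain dir):
chown -R vdsm:kvm /rhev/data-center/mnt/<storage-domain>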
Best Regards,
Strahil Nikolov
Sandro, thanks for the help!
The problem is with volume msk-gluster-facility:/data
Log file here: https://yadi.sk/d/RFHHey-5jQMxYQ
There is a cyclic error in the logs:
2019-03-13 21:30:21,130+0300 ERROR (jsonrpc/6) [storage.HSM] Could not connect
to storageServer (hsm:2414)
Traceback (most recent call
On Wed, Mar 13, 2019 at 8:40 PM Jingjie Jiang
wrote:
> Hi Nir,
>
> I had qcow2 on FC, but qemu-img still showed a size of 0.
>
> # qemu-img info
> /rhev/data-center/mnt/blockSD/eaa6f641-6b36-4c1d-bf99-6ba77df3156f/images/38cdceea-45d9-4616-8eef-966acff2f7be/8a32c5af-f01f-48f4-9329-e173ad3483b1
Hi Nir,
I had qcow2 on FC, but qemu-img still showed a size of 0.
# qemu-img info
/rhev/data-center/mnt/blockSD/eaa6f641-6b36-4c1d-bf99-6ba77df3156f/images/38cdceea-45d9-4616-8eef-966acff2f7be/8a32c5af-f01f-48f4-9329-e173ad3483b1
image:
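On block storage the volume is an LVM logical volume, so a reported disk size
of 0 is plausible: qemu-img derives "disk size" from the allocated blocks that
stat() reports, and that is typically 0 for block devices. A sketch to
cross-check the real allocation (VG = storage domain UUID, LV = volume UUID,
both taken from the path above):
# Machine-readable image info:
qemu-img info --output=json /rhev/data-center/mnt/blockSD/eaa6f641-6b36-4c1d-bf99-6ba77df3156f/images/38cdceea-45d9-4616-8eef-966acff2f7be/8a32c5af-f01f-48f4-9329-e173ad3483b1
# The LV size as LVM sees it:
lvs --units b -o lv_name,lv_size eaa6f641-6b36-4c1d-bf99-6ba77df3156f/8a32c5af-f01f-48f4-9329-e173ad3483b1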
I deployed the Metrics Store on the engine and hosts as per the instructions.
But after using it for some time I realized that the functionality is redundant
for me; the data collected by the DWH is enough.
https://www.ovirt.org/develop/release-management/features/metrics/metrics-store-installation.html
Is there an instruction
When checking the block device configuration on an oVirt setup using a
SAN, I found this line:
dm/use_blk_mq:0
Did someone try enabling it by adding this to the kernel command line:
dm_mod.use_blk_mq=y
I'm not sure, but it might improve performance on multipath, even on spinning
rust.
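For anyone who wants to experiment, a sketch for an EL7-based host (test on a
non-production node first; the parameter is the one from the post):
# Append the option to the kernel command line of all installed kernels:
grubby --update-kernel=ALL --args="dm_mod.use_blk_mq=y"
reboot
# After reboot, verify it took effect globally and per device-mapper device:
cat /sys/module/dm_mod/parameters/use_blk_mq
cat /sys/block/dm-0/dm/use_blk_mq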
On Wed, Mar 13, 2019 at 15:14, alexeynikolaev <alexeynikolaev.p...@yandex.ru> wrote:
> Hi community!
>
> After updating one of the oVirt Node NG hosts from version 4.2.x to 4.3.1,
> this node lost connection to the glusterfs volume with the error:
>
> ConnectStoragePoolVDS failed: Cannot find master
On Wed, Mar 13, 2019 at 3:56 AM wrote:
> Hi, everyone, there is a VM in the HostedEngine environment that has
> been paused due to a storage I/O error, but the engine manager services are
> normal. What is the problem?
Which kind of storage?
Try disabling the rule or making the VM rule soft; the process won't migrate
more than one VM at a time, so it can't migrate either VM without breaking the
vms_rule.
Regards,
Paul S.
From: zoda...@gmail.com
Sent: 13 March 2019 07:49
To: users@ovirt.org
Hi community!
After updating one of the oVirt Node NG hosts from version 4.2.x to 4.3.1, this
node lost connection to the glusterfs volume with the error:
ConnectStoragePoolVDS failed: Cannot find master domain:
u'spUUID=5a5cca91-01f8-01af-0297-025f,
msdUUID=7d5de684-58ff-4fbc-905d-3048fc55b2b1'.
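A few things worth checking on the affected node before digging deeper (a
rough sketch; it assumes the master storage domain lives on the gluster
volume):
# Is the gluster volume still mounted on the node?
df -h | grep glusterSD
# On any gluster peer: are all bricks online?
gluster volume status
# What does vdsm log around the failure?
grep -i 'Cannot find master' /var/log/vdsm/vdsm.log | tail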
First, you should look at the vdsm logs on the hypervisor hosts.
13.03.2019, 05:57, "xil...@126.com":
> Hi, everyone, there is a VM in the HostedEngine environment that has been paused due to a storage I/O error, but the engine manager services are normal, what is the
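Concretely, something like this (a sketch; the per-VM log file name is a
placeholder, use the actual VM name):
# On the host that was running the paused VM:
grep -iE 'abnormal|pause|i/o error' /var/log/vdsm/vdsm.log | tail -n 50
# QEMU keeps a per-VM log that usually names the underlying I/O error:
less /var/log/libvirt/qemu/<vm-name>.log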
The oVirt Project is pleased to announce the availability of the oVirt
4.3.2 Second Release Candidate, as of March 13th, 2019.
This update is a release candidate of the second in a series of
stabilization updates to the 4.3 series.
This is pre-release software. This pre-release should not be
It seems to be working properly, but the OVF got updated recently and
powering up the hosted-engine is not working :)
[root@ovirt2 ~]# sudo -u vdsm tar -tvf
/var/run/vdsm/storage/808423f9-8a5c-40cd-bc9f-2568c85b8c74/441abdc8-6cb1-49a4-903f-a1ec0ed88429/c3309fc0-8707-4de1-903d-8d4bbb024f81
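To inspect the stored VM definition itself, a member can be extracted to
stdout (a sketch; the .ovf member name is whatever the tar listing above
shows):
sudo -u vdsm tar -xOf /var/run/vdsm/storage/808423f9-8a5c-40cd-bc9f-2568c85b8c74/441abdc8-6cb1-49a4-903f-a1ec0ed88429/c3309fc0-8707-4de1-903d-8d4bbb024f81 <vm-uuid>.ovf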
Dear Simone,
it seems that there is some kind of problem, as the OVF got updated with a
wrong configuration:
[root@ovirt2 ~]# ls -l
Hi Andrej,
Thank you for your quick response, as well as the RFE.
Thanks,
-Zhen
On Wed, Mar 13, 2019 at 9:57 AM Strahil Nikolov
wrote:
> Hi Simone, Nir,
>
> >Adding also Nir on this, the whole sequence is tracked here:
> >I'd suggest to check ovirt-imageio and vdsm logs on ovirt2.localdomain
> about the same time.
>
> I have tested again (first wiped current transfers) and
Hi Simone, Nir,
>Adding also Nir on this, the whole sequence is tracked here:
>I'd suggest to check ovirt-imageio and vdsm logs on ovirt2.localdomain about
>the same time.
I have tested again (first wiped the current transfers) and the same thing is
happening (phase 10).
engine=# \x
Expanded display is
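For reference, the stuck transfers can be listed directly from the engine DB
on the engine VM (a sketch; the column names are from memory and may differ
between versions):
sudo -u postgres psql engine -c 'SELECT command_id, disk_id, phase, last_updated FROM image_transfers;'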
Similarly,
I tried to deploy hosted-engine on iSCSI through Cockpit (oVirt 4.3.2-rc1).
Retrieving the targets works; I get:
The following targets have been found:
 iqn.2000-01.com.synology:SVC-STO-FR-301.Target-1.2dfed4a32a, TPGT: 1
10.199.9.16:3260
fe80::211:32ff:fe6d:6ddb:3260
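As a cross-check outside Cockpit, discovery and a manual login can be tried
from the deployment host (a sketch; portal and IQN are taken from the output
above):
iscsiadm -m discovery -t sendtargets -p 10.199.9.16:3260
iscsiadm -m node -T iqn.2000-01.com.synology:SVC-STO-FR-301.Target-1.2dfed4a32a -p 10.199.9.16:3260 --login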

Hi,
This is the expected behavior. The process that automatically migrates VMs
so that they do not break affinity groups only migrates one VM at a time.
In this case the two VMs are in a positive enforcing group, so neither of them
can be migrated away from the other.
Currently, for the same
Hi there,
Here is my setup:
oVirt engine: 4.2.8
1. Create an affinity group as below:
VM affinity rule: positive + enforcing
Host affinity rule: disabled.
VMs: 2 VMs added
Hosts: No host selected.
2. Run the 2 VMs, they are running on the same host, say host1.
3. Change the affinity group's host
Hi,
OK, thanks. I'd also asked for the Gluster version you're running. Could you
share that information as well?
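For example, on one of the gluster nodes:
gluster --version
rpm -q glusterfs-server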
-Krutika
On Thu, Mar 7, 2019 at 9:38 PM Drew Rash wrote:
> Here is the output for our SSD gluster, which exhibits the same issue as
> the HDD glusters.
> However, I can replicate the