Hello Nir,
No, I do not have libvirt logs enabled.
I restored the VM from the snapshot and retried. It did boot, but at the same
time it did not merge again when I tried it. On the other hand, when I cloned
it and tried to recreate the situation, the image did merge.
Is it possible that the image is
I think these are the corresponding logs:
qcow2: Marking image as corrupt: Cluster allocation offset 0x7890c000 unaligned
(L2 offset: 0x39e0, L2 index: 0); further corruption events will be
suppressed
main_channel_link: add main channel client
main_channel_client_handle_pong: net test:
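For what it's worth, a quick way to confirm what the qcow2 corruption message
above is complaining about is to check the image metadata directly. A minimal
sketch with Python's subprocess; the volume path below is a placeholder, not
taken from this thread, and the check should be run while the VM is down (or
against the restored/cloned copy):

import subprocess

# Placeholder path; substitute the affected volume under /rhev on the host.
image = '/rhev/data-center/mnt/<server:_path>/<sd_uuid>/images/<img_uuid>/<vol_uuid>'

# 'qemu-img check' validates the qcow2 metadata, including L2 table offsets
# like the unaligned one reported above.
result = subprocess.run(['qemu-img', 'check', image],
                        capture_output=True, text=True)
print(result.stdout)
print(result.returncode)  # 0 = clean, 2 = corruption found, 3 = leaked clusters only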
Here is the vdsm.log from the SPM.
There is a report for the second disk of the VM, but the first (the one which
fails to merge) does not seem to be anywhere.
2021-08-03 15:51:40,051+0300 INFO (jsonrpc/7) [vdsm.api] START
getVolumeInfo(sdUUID=u'96000ec9-e181-44eb-893f-e0a36e3a6775',
Hello Benny, and thank you for the quick response.
This is the vdsm log:
2021-08-03 15:50:58,655+0300 INFO (jsonrpc/3) [storage.VolumeManifest]
96000ec9-e181-44eb-893f-e0a36e3a6775/205a30a3-fc06-4ceb-8ef2-018f16d4ccbb/7611ebcf-5323-45ca-b16c-9302d0bdedc6
info is {'status': 'OK', 'domain':
Hello
I have a situation with a VM in which I cannot delete the snapshot.
The whole thing is quite strange: I can delete the snapshot when I create and
delete it from the web interface, but when I do it with a Python script
through the API it fails.
The script does create snapshot->
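For comparison, here is a minimal sketch of the create-then-delete flow with
ovirtsdk4, waiting for the snapshot to leave the locked state before removing
it. The engine URL, credentials, and VM name are placeholders, and this is a
guess at the usual pitfall rather than a diagnosis of the failing script:

import time
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',  # placeholder
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)
vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=myvm')[0]  # placeholder VM name
snapshots_service = vms_service.vm_service(vm.id).snapshots_service()

# Create the snapshot, then wait until it leaves the LOCKED state; removing
# it while still locked is a classic way a script fails where the slower
# manual flow in the web interface succeeds.
snap = snapshots_service.add(
    types.Snapshot(description='test', persist_memorystate=False))
snap_service = snapshots_service.snapshot_service(snap.id)
while snap_service.get().snapshot_status != types.SnapshotStatus.OK:
    time.sleep(5)

snap_service.remove()  # the engine then runs the merge asynchronously
connection.close()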
OK, the desired configuration (which works perfectly on a similar cluster
running oVirt 4.3) is as follows:
1 Gateway network
1 ovirtmgmt network
1 display network
i.e. 10.0.0.0/24 is ovirtmgmt
10.0.1.0/24 is display network
193.92.xx.xxx/27 is gateway network
So whenever I want to run an update
Hello, I created a cluster of oVirt 4.4 with 5 servers; I have a different
gateway, ovirtmgmt, and display network. In 4.4 routing is managed by
NetworkManager, which seems to "forget" and recreate the route each time. So
I cannot connect to the virtual machines' console unless I run a
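In case it helps, a hedged sketch of making the route part of the
NetworkManager connection profile, so it comes back with the connection
instead of being re-added by hand each time. The connection name and
addresses are placeholders, not taken from this setup:

import subprocess

conn = 'display'                 # hypothetical NM connection for the display network
route = '10.0.1.0/24 10.0.0.1'   # hypothetical destination and next hop

# Persist the route in the profile, then re-apply the connection.
subprocess.run(['nmcli', 'connection', 'modify', conn, '+ipv4.routes', route],
               check=True)
subprocess.run(['nmcli', 'connection', 'up', conn], check=True)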
Hello
We have a problem with hosted-engine storage after updating one host which
serves as a Gluster server for the engine (the setup is Gluster replica 3 with
local disks from 3 hypervisors).
The volume heal command shows:
[root@o5-car0118 engine]# gluster volume heal engine info
Brick
Thank you for your answer.
Most of the machines were marked with a question mark.
No, I have not tried to shut down the machines through virsh.
Some machines marked with "?" were active, some were not, and I mean they were
not responding to the ping command, nor could I shut them down through
Hello
We have a cluster with 9 oVirt hosts and a storage cluster with 3 storage
servers; we run oVirt 4.3.6.6-1.
The storage setup is replica 3 with arbiter.
There is another unmanaged Gluster volume which is replica 3, based on local
disks of the oVirt hosts, and the engine and a few small pfSense VMs
I would like some clarification on MinFreeMemoryForUnderUtilized and
MaxFreeMemoryForOverUtilized.
How does that work? It seems the names should be the opposite?
So if I have hosts with 64 GB RAM and I want to consider them over-utilized at
48 GB used and under-utilized at 12 GB used, should I set
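My reading of these two properties, worth verifying against the
scheduling-policy documentation: both are thresholds on free memory, in MB, so
the names are consistent even though they look inverted at first. For the
64 GB example above, the arithmetic would be:

# Hedged worked example; the threshold semantics are my reading of the
# scheduling policy, not confirmed in this thread.
total_mb = 64 * 1024                         # 64 GB host
max_free_for_over = total_mb - 48 * 1024     # over-utilized at 48 GB used -> 16384 MB free
min_free_for_under = total_mb - 12 * 1024    # under-utilized at 12 GB used -> 53248 MB free
print(max_free_for_over, min_free_for_under)  # 16384 53248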
OK, fixed it. For some strange reason there was a leftover record in
/etc/hosts pointing to 192.168.122.15.
Thank you very much.
I have an oVirt installation (4.2.6), and I have come across this issue: the
hosted engine can migrate to a certain host, say host5. The admin portal is
working fine, everything is working fine AFAIK, but hosted-engine --vm-status
shows "failed liveliness check" when the engine is on that host, so after a
I'm not an expert, but as far as I can tell, if your router supports a
secondary IP in that network and routes these packets to the internet, it
should work. If not, you should NAT/masquerade the VM network to the internet.
But that is something oVirt's configuration does not handle; you
OK, thank you very much for your answer.
Moving a disk from one Gluster domain to another fails, whether the VM is
running or down.
It strikes me that it says: File
"/usr/lib64/python2.7/site-packages/libvirt.py", line 718, in blockCopy
if ret == -1: raise libvirtError ('virDomainBlockCopy() failed', dom=self)
I'm sending the
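As an illustration of where that traceback comes from, here is a minimal
direct libvirt-python call of the same API. The VM name, target device, and
destination XML are placeholders; vdsm normally drives this call (with a
fuller destination XML, and it also handles libvirt authentication on an
oVirt host) rather than a bare script like this:

import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('myvm')  # placeholder VM name
dest_xml = """
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/rhev/data-center/some-destination/new-volume'/>
</disk>
"""
try:
    # Shallow copy keeps the backing chain, as during live storage migration.
    dom.blockCopy('vda', dest_xml, flags=libvirt.VIR_DOMAIN_BLOCK_COPY_SHALLOW)
except libvirt.libvirtError as e:
    print(e)  # the same libvirtError raised at libvirt.py line 718 above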
OK, I think I figured out what is happening...
I am currently running some redundancy tests on oVirt + replica 2 + arbiter
GlusterFS. This happens under a small-file 4k fio random write test, like this:
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test
--filename=test --bs=4k
I know about this bug, but this is not what I've experienced. The virtual
machine was up and running; I did not try to start it.
It seems that a VM with 3 disks (the boot disk in domain engine, another disk
in domain vol1, and a third in domain v3) became non-responsive when one
Gluster host went down.
To explain the situation a bit: I have 3 GlusterFS hosts with 3 volumes.
Hosts are g1, g2, g3, each with 3 bricks.
g1 has vol1, vol2 and
Thank you very much for your answer. Somehow the exception did not work, but I
guess it is OK; it is not shared storage, it is a dual-SAS-port external JBOD
box. I guess multipath is not really needed in that case.
After upgrading to 4.20.39-1.el7, SAS multipath stopped being detected.
I did a diff of the two files.
Is the current behaviour the correct one, or the previous one?
I think SAS multipath should also be detected, no?
[root@g1-car0136 etc]# diff multipath.conf multipath.conf.201809031555
1c1
< #
Today our network administration did some upgrades on the networking
equipment, so the engine VLAN went down for a while. Afterwards, when it came
back up, 3 hosts were found non-responsive. I couldn't see anything suspicious
on the hosts; the problem "fixed" itself when I restarted the
So if I understand correctly, the isolatedprivatevlan hook does not work
correctly (when I tested it, it simply had no traffic), and the
clean-traffic-gateway.xml filter is being tested for addition to libvirt and
oVirt.
It would be nice to be able to add custom nwfilters.
Thank you for your
Still cannot figure out how this works.
Steps taken: I added isolatedprivatevlan=.* as a custom property with
engine-setup; when I edit the VM I added a custom property to it as
isolatedprivatevlan=11:22:33:44:55,10.0.0.1
but then the VM does not start and complains about multiple filters on the
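For reference, a minimal sketch of setting that per-VM custom property through
the Python SDK, which is, as far as I can tell, what the edit-VM dialog does
under the hood. The engine URL, credentials, and VM name are placeholders, and
the property must already be registered on the engine (UserDefinedVMProperties)
for this to work:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',  # placeholder
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)
vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=myvm')[0]  # placeholder VM name

# Attach the hook's custom property to this one VM; the value format
# (MAC,IP) is the one quoted in the message above.
vms_service.vm_service(vm.id).update(types.Vm(custom_properties=[
    types.CustomProperty(name='isolatedprivatevlan',
                         value='11:22:33:44:55,10.0.0.1'),
]))
connection.close()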
How does this filter work?
Do I have to set a custom property on the engine?
Should I add clean-traffic on the vNIC profile and then add GATEWAY_MAC and IP?
Is there a way to select it for a specific VM?
I installed the hook on one host, restarted the engine, put the host in
maintenance, restarted
Very useful information, thank you!
I did set it up yesterday and I intend to use it. Setup worked OK on CentOS
7.5. The only problem was with the ansible dependency for
centos-openshift-origin39-candidate, which I had to deactivate in the repos.
It did work, though, and I am getting reports now. Anyway, is there a way to
"relax" the amount
agent.log is here
https://pastebin.com/tGyeBNr3
interesting part of agent.log is here
https://pastebin.com/tGyeBNr3
From glusterfs/rhev_datacenter:
[2018-06-18 12:32:50.854668] W [socket.c:593:__socket_rwv] 0-glusterfs: readv
on 172.16.224.10:24007 failed (No data available)
[2018-06-18 12:33:38.194322] C
[rpc-clnt-ping.c:166:rpc_clnt_ping_timer_expired] 0-engine-client-0: server
172.16.224.10:49152 has not
It seems that the redundancy of GlusterFS is working. It doesn't show in the
mount options, but it is there in the processes. It must be something else
that caused the engine to pause, so ignore this. Is there a way to debug why
the hosted engine paused?
I have a 4-node setup (CentOS 7) with the hosted engine on GlusterFS
(replica 3 arbiter 1). The Gluster setup is like this:
ohost01 104G brick (real data)
ohost02 104G brick (real data)
ohost04 104G brick (arbiter)
ohost05 104G partition used as NFS storage.
The hosted engine is on Gluster. I also have an