Hi all. We have a new oVirt 4.1.1 cluster up with the OVS switch type.
Everything seems to be working great, except for live migration.
I believe the red flag in vdsm.log on the source is:
Cannot get interface MTU on 'vdsmbr_QwORbsw2': No such device (migration:287)
Which results from vdsm
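In case it helps anyone hitting the same thing, a quick sanity check on both hosts could look something like this (assuming the standard iproute2 and Open vSwitch command-line tools; the bridge name is taken from the log line above):

  # list the OVS bridges vdsm created on this host
  ovs-vsctl list-br
  # does the bridge from the error exist here, and what is its MTU?
  ip link show vdsmbr_QwORbsw2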
For some reason, when I add iSCSI storage on my oVirt Node it works like a
charm, but when I reboot the oVirt engine and the node, the storage doesn't
come up, and on the storage tab I have noticed that the USE HOST dropdown
is empty.
NB: iscsi and iscsid are running on the node and the LUN is active.
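If it helps debugging, the iSCSI state on the node after the reboot can be inspected roughly like this (assuming iscsi-initiator-utils; the portal address is a placeholder, not from the original mail):

  # is the initiator daemon really up?
  systemctl status iscsid
  # are there any active sessions left after the reboot?
  iscsiadm -m session -P 1
  # if not, re-discover and log in by hand (placeholder portal)
  iscsiadm -m discovery -t sendtargets -p 192.168.1.10:3260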
Hi Simone
> You should see it also when adding new hosts; it should look like this:
> https://raw.githubusercontent.com/oVirt/ovirt-site/30c70a9076f94113469401a51cafdfc7be8fbedd/source/images/2017-02-16-hyperconverged-ovirt-with-ovn-picture17.png
> On Mar 25, 2017, at 10:57 PM, Yedidyah Bar David wrote:
>
> On Fri, Mar 24, 2017 at 3:08 AM, Jamie Lawrence
> wrote:
[…]
>> Anyone know what I am missing?
>
> Probably OVESETUP_PROVISIONING/postgresProvisioningEnabled
> and
Hi,
I'm trying to pass a GPU card through to a Windows 7 guest, but I get
error code 43.
I'm using oVirt Node (ovirt-node-ng-installer-ovirt-4.1-2017032304.iso) and
ovirt-engine-appliance-4.1-20170322.1.el7.centos.noarch.rpm.
I have added the options "intel_iommu=on rdblacklist=nouveau
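For what it's worth, whether the IOMMU actually came up and which driver holds the GPU can be checked along these lines (the PCI address is an example, not from the original post):

  # did the kernel enable the IOMMU?
  dmesg | grep -e DMAR -e IOMMU
  # confirm the boot options really took effect
  cat /proc/cmdline
  # which driver is bound to the GPU (example PCI address)
  lspci -nnk -s 01:00.0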
I am sorry, how can I get the screenshot of the host in the engine?
2017-03-27 16:57 GMT+08:00 Artyom Lukianov:
> Looks like a bug, the "Max free Memory for scheduling new VMs" in the case
> when we do not have memory optimization must be equal to the free memory on
> the
Hi,
I inherited a small oVirt cluster with 3 nodes using iSCSI storage.
The LUNs used by the oVirt nodes were also accessible by some VMware nodes.
By mistake an action was started on VMware to initialize and start using
the same LUNs, which it happily did. This rendered the storage no longer
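For anyone in a similar spot: before writing anything else to those LUNs, it may be worth a read-only look at whether oVirt's LVM metadata survived, e.g. (assuming the LVM2 tools on a node; the device path is a placeholder):

  # do the storage-domain volume groups still show up?
  pvs
  vgs
  # check what LVM can still read from a PV header (placeholder path)
  pvck /dev/mapper/<lun>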
The answer is yes to both questions.
On 03/27/2017 10:12 AM, Sahina Bose wrote:
Is "spfy-hw01" resolvable from engine? Have you associated network
named "gluster" to interface associated with "spfy-hw01" under "Setup
Host Network" of host?
On Sat, Mar 25, 2017 at 5:56 PM, Arsène Gschwind
On Mon, Mar 27, 2017 at 5:05 AM, Tatsuya wrote:
> Dear Didi
>
> > Even if you never heard about hosted-engine in previous versions and 4.1
> > was your first attempt? Any suggestions for improvement to make it
> easier
> > to find?
>
> Sorry for the late reply. I notice that I
On Mon, Mar 27, 2017 at 2:06 PM, Davide Ferrari wrote:
> I have the same error (warning) and in my case the answer is yes to both
> questions.
>
Which IP address does "spfy-hw01" resolve to from the engine?
Is it the same IP address that you see in the Networks sub-tab under hosts-
Looks like a bug: the "Max free Memory for scheduling new VMs", in the case
where we do not have memory optimization, must be equal to the free memory on
the host (minus some small amount of reserved memory). Can you please
provide the screenshot of the host in the engine and also output of the
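In the meantime, the raw host-side numbers can be grabbed for comparison with something like this (vdsClient was still shipped with 4.1-era vdsm, if I remember correctly):

  # free/used memory as the host kernel sees it
  free -m
  # what vdsm itself reports to the engine
  vdsClient -s 0 getVdsStats | grep -i mem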
Sorry for the mistake.
In my earlier mail, quoted below, “4.0” should read “4.1.0”
and “4.1” should read “4.1.1” instead.
Regards
Tatsuya
2017-03-27 12:05 GMT+09:00 Tatsuya:
> Dear Didi
>
> > Even if you never heard about hosted-engine in previous versions and
Is "spfy-hw01" resolvable from engine? Have you associated network named
"gluster" to interface associated with "spfy-hw01" under "Setup Host
Network" of host?
On Sat, Mar 25, 2017 at 5:56 PM, Arsène Gschwind
wrote:
> Hi,
>
> I do have a recurring warning in engine.log
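A quick way to answer the resolution question from the engine machine, using standard glibc/bind-utils tools:

  # does the engine resolve the name, and to which address?
  getent hosts spfy-hw01
  dig +short spfy-hw01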
On 26 March 2017 at 19:34, Anantha Raghava
wrote:
> Hello,
>
> We are now using oVirt 4.1 in our lab with 2 hosts and a separate server for
> engine. Both hosts (Dell R710 Servers) are in a single cluster and load
> balancing & migrations work perfectly.
>
>
24.03.2017 17:11, Nelson Lameiras wrote:
Hello,
When upgrading my test setup from 4.0 to 4.1, my engine VM lost its
console (it went from SPICE to None in the GUI).
My test setup:
2 manually built hosts using CentOS 7.3, oVirt 4.1
1 manually built hosted engine on CentOS 7.3, oVirt 4.1.0.4-el7,
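One read-only way to see which graphics device the engine VM actually has now, run on the host currently hosting it (assuming the VM keeps the usual HostedEngine libvirt name):

  # read-only libvirt query of the hosted-engine VM's graphics devices
  virsh -r dumpxml HostedEngine | grep -A2 '<graphics'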
Which iDRAC card do your servers have and which agent are you using?
On 27 March 2017 at 03:05, Anantha Raghava
wrote:
> Hi,
>
> I tried to configure power management, but the agent (idrac) is
> failing. It is unable to contact the idrac management
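It may be worth testing the fence agent by hand from one of the hosts first; a sketch, with the address and credentials as placeholders (newer iDRACs need lanplus, hence -P):

  # query power status over IPMI-on-LAN
  fence_ipmilan -a 10.0.0.50 -l root -p <password> -P -o status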
I have the same error (warning) and in my case the answer is yes to both
questions.
2017-03-27 10:12 GMT+02:00 Sahina Bose:
> Is "spfy-hw01" resolvable from engine? Have you associated network named
> "gluster" to interface associated with "spfy-hw01" under "Setup Host
>
On 03/24/2017 04:01 PM, Davide Ferrari wrote:
> Source: CentOS 7.2 - qemu-kvm-ev-2.3.0-31.el7.16.1
> Dest: CentOS 7.3 - qemu-kvm-ev-2.6.0-28.el7_3.3.1
>
> To be fair I'm trying to migrate away that VM so I can install updates
> on the source host.
Another, hopefully less likely, case is a QEMU
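For the record, the package levels on both ends can be compared quickly with:

  # run on source and destination hosts and compare
  rpm -q qemu-kvm-ev libvirt-daemon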
Hi there,
my smartphone updated moVirt to 1.7 these days. Since then I always
get errors when trying to access the disks dialogue of a VM in moVirt.
It boils down to the URL
https:///ovirt-engine/api/vms//disks
The result is always 404.
A simple cross-check in the web browser returns the same
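One way to narrow it down is to hit the API by hand; a sketch with a placeholder engine host and credentials. Note that in the v4 API a VM's disks hang off /diskattachments rather than /disks, which may be what moVirt 1.7 trips over here:

  # list VMs to confirm the API itself answers (placeholder host/credentials)
  curl -k -u 'admin@internal:password' https://engine.example.com/ovirt-engine/api/vms
  # in API v4, a VM's disks are reached via disk attachments
  curl -k -u 'admin@internal:password' https://engine.example.com/ovirt-engine/api/vms/<vm-id>/diskattachments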
Hi,
I don't really know if this question is suitable for this list, as I
doubt it's an oVirt bug, nor do I know whether it should be considered a bug at all.
We recently ran a VM on a host whose memory was over-used (around 80%
usage). The VM booted correctly; then we ran "top" and saw how physical
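If you want to see what the balloon driver is doing to that guest from the host side, libvirt can report it (the VM name is a placeholder):

  # balloon and memory statistics of a running guest, read-only
  virsh -r dommemstat <vm-name>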
Hi all,
I have just upgraded the oVirt manager from 4.1.0 to 4.1.1, then I
upgraded one host. After the reboot and activation of this host some VMs
started to migrate to it.
Some time after migration the CPU of these VMs goes to 100% and the VMs become
unreachable.
The relevant log is /var/log/libvirt/qemu/hypnos.log
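Besides that log, the per-vCPU picture from the host may help (the VM name is taken from the log path above):

  # last lines of the qemu log mentioned above
  tail -n 50 /var/log/libvirt/qemu/hypnos.log
  # per-vCPU state and CPU time from libvirt, read-only
  virsh -r vcpuinfo hypnos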
This is the page covering minor releases:
http://www.ovirt.org/documentation/upgrade-guide/chap-Updates_between_Minor_Releases/
Yaniv Dary
Technical Product Manager
Red Hat Israel Ltd.
34 Jerusalem Road
Building A, 4th floor
Ra'anana, Israel 4350109
Tel : +972 (9) 7692306
8272306
Email:
On 27/03/2017 at 14:43, Yaniv Dary wrote:
This is the page covering minor releases:
http://www.ovirt.org/documentation/upgrade-guide/chap-Updates_between_Minor_Releases/
Yaniv Dary
Technical Product Manager
Red Hat Israel Ltd.
34 Jerusalem Road
Building A, 4th floor
Ra'anana, Israel 4350109
Sorry for hijacking the thread but, in my case, I have a separate VLAN for
gluster intercommunication and that VLAN is not accessible from the hosted
engine. Maybe that's the problem?
2017-03-27 12:14 GMT+02:00 Sahina Bose:
>
>
> On Mon, Mar 27, 2017 at 2:06 PM, Davide
Is a 4.1 upgrade from 4.0 considered minor? I thought only the 4.0.x
(or 4.1.x) upgrades were considered minor.
2017-03-27 14:43 GMT+02:00 Yaniv Dary:
> This is the page covering minor releases:
> http://www.ovirt.org/documentation/upgrade-guide/
>
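For what it's worth, 4.0 to 4.1 follows the full upgrade guide rather than the minor-release page, if I read the docs correctly; roughly, on the engine machine (a sketch of the documented flow, not a substitute for the guide):

  # add the 4.1 repositories, update the setup packages, re-run setup
  yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release41.rpm
  yum update "ovirt-*-setup*"
  engine-setup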
I am not sure I understand your question, though it reminds me of
Bug 1372798 - Setupnetworks not removing the "BRIDGE=" entry in ifcfg
file when changing a untagged network to tagged
Which Vdsm version do you have? Can you share your supervdsm.log?
On Fri, Mar 24, 2017 at 6:44 PM, Gianluca
I did the following steps:
- delete the target on all initiators (oVirt nodes):
iscsiadm -m node -T iqn.2017-03.cz.dragon.ovirt.fuvs-sn1:10T -p 10.53.1.201:3260 -u
iscsiadm -m node -T iqn.2017-03.cz.dragon.ovirt.fuvs-sn1:10T -p 10.53.1.201:3260 -o delete
- stop tgtd on the target
- fill storage by
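Afterwards, whether all initiators are really logged out can be confirmed with:

  # should report no remaining sessions to the old target
  iscsiadm -m session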
On Mon, Mar 27, 2017 at 9:07 PM, Jamie Lawrence
wrote:
>> On Mar 25, 2017, at 10:57 PM, Yedidyah Bar David wrote:
>>
>> On Fri, Mar 24, 2017 at 3:08 AM, Jamie Lawrence
>> wrote:
>
> […]
>
>>> Anyone know what I am missing?