You can try to pass through a dedicated NIC to the pfSense VM.
Note that if you need several (up to 64) VMs with a trunk and your (often
10Gb) NIC supports virtual functions, you can use SR-IOV to share
your PF into 64 VFs.
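A minimal sketch of creating the VFs on the host, assuming SR-IOV is enabled in the BIOS; the interface name (enp5s0f0) is just an example:

  # cat /sys/class/net/enp5s0f0/device/sriov_totalvfs
  # echo 64 > /sys/class/net/enp5s0f0/device/sriov_numvfs
  # lspci | grep -i "Virtual Function"

The first command reports how many VFs the card supports, the second creates them, and the last lists the resulting PCI devices that can be passed through to the VMs.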
On 09/03/2017 at 20:40, Rogério Ceni Coelho wrote:
Hi,
Ovirt
On 03/10/2017 09:46 AM, Ondra Machacek wrote:
> So what's your provider 389ds or FreeIPA?
>
> Note that both use different unique IDs. IPA is using 'ipaUniqueID',
> and 389ds is using 'nsuniqueid'. Did you try both?
Thanks for pointing that out! It works perfectly if I use IPA.
I didn't know t
On Thu, Mar 9, 2017 at 2:22 PM, Richard Neuboeck
wrote:
> Hi,
>
> I seem to experience the same problem right now and am at a bit of a
> loss as to where to dig for some more troubleshooting information. I
> would highly appreciate some help.
>
> Here is what I have and what I did:
>
> ovirt-engi
I just noticed this in the vdsm logs. The agent looks like it is trying to
start the hosted engine on both machines??
destroydestroydestroy
Thread-7517::ERROR::2017-03-10
01:26:13,053::vm::773::virt.vm::(_startUnderlyingVm)
vmId=`2419f9fe-4998-4b7a-9fe9-151571d20379`::The vm start process failed
Trac
Vote 1 for this. Interested also
On 10 March 2017 at 05:40, Rogério Ceni Coelho
wrote:
> Hi,
>
> The oVirt user interface does not allow entering 4095 as a VLAN tag number,
> only values between 0 and 4094.
>
> This is useful to me too. Maybe there is another way?
>
> Em qui, 9 de mar de 2017 às 16:15,
I was wondering if Open vSwitch will get around this problem. Has anyone
tried it?
On 9 Mar 2017 7:41 pm, "Rogério Ceni Coelho"
wrote:
> Hi,
>
> The oVirt user interface does not allow entering 4095 as a VLAN tag number,
> only values between 0 and 4094.
>
> This is useful to me too. Maybe any oth
Hi All
I had a storage issue with my Gluster volumes running under oVirt hosted engine.
I now cannot start the hosted engine manager VM with "hosted-engine
--vm-start".
I've scoured the net to find a way, but can't seem to find anything
concrete.
Running CentOS 7, oVirt 4.0 and Gluster 3.8.9.
How do I re
Hi Ian,
it is normal for the VDSMs to compete for the lock; one should win,
though. If that is not the case, then the lockspace might be corrupted
or the sanlock daemons can't reach it.
I would recommend putting the cluster into global maintenance and
attempting a manual start using:
# hosted-engine
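The command above is cut off in the archive; a typical sequence, run on one of the hosted-engine hosts, would be something like:

  # hosted-engine --set-maintenance --mode=global
  # hosted-engine --vm-start
  # hosted-engine --vm-status

The first command stops the HA agents from interfering, the second starts the engine VM by hand, and the last shows the state of the engine VM and of each HA host.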
Hello Luca
>Name: openstack-networks
>Type: OpenStack Networking
>Description:
>Provider URL: http://openstack.example.com:9696
I assume Networking Plugin: Open vSwitch
Do you have any security groups defined for the vnic profile on your
external network?
Looking at the output you provided is se
Hello everybody,
for production usage I'm testing oVirt with Gluster.
All components seem to be running fine, but whenever I'm testing a huge
workload the node freezes. Not the main OS, but the VDSM management and attached
services, VMs etc.
*mgmt *
oVirt - 4.1.0.4
centos 7.3-1611
*nodes* ( installed f
On 10 Mar 2017 11:48 AM, "Marcin Mirecki" wrote:
Hello Luca
>Name: openstack-networks
>Type: OpenStack Networking
>Description:
>Provider URL: http://openstack.example.com:9696
I assume Networking Plugin: Open vSwitch
Yes, I confirm.
Do you have any security groups defined for the vni
I am not sure if another email I sent went through, but has anyone had
problems when upgrading a running oVirt-node-ng from 4.1.0 to 4.1.1?
Is the only solution a complete reinstall of the node?
Thanks
Fernando
On Thu, Mar 9, 2017 at 11:23 AM, Juan Hernández wrote:
>
> > Very good point Gianluca, you are right, the 'persist_memorystate' flag
> > is 'true' by default, and that makes the pause longer. Would you be so
> > kind as to add it to the 'vm_backup.py' example that is part of version
>
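For reference, the memory-state behaviour can also be controlled when creating a snapshot through the REST API; a minimal sketch with curl, where the engine URL, credentials and VM id are placeholders:

  # curl -k -u admin@internal:password -H "Content-Type: application/xml" \
      -d '<snapshot><description>backup</description><persist_memorystate>false</persist_memorystate></snapshot>' \
      https://engine.example.com/ovirt-engine/api/vms/VM_ID/snapshots

With persist_memorystate set to false the memory state is not saved, which should avoid the longer pause mentioned above.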
On Fri, Mar 10, 2017 at 1:07 PM, p...@email.cz wrote:
> Hello everybody,
>
> for production usage I'm testing oVirt with Gluster.
> All components seem to be running fine, but whenever I'm testing a huge
> workload the node freezes. Not the main OS, but the VDSM management and attached
> services, VMs etc.
W
On Fri, Mar 10, 2017 at 12:11 PM, Luca 'remix_tj' Lorenzetto
wrote:
>
>>[root@ovirt002 ~]# ovs-vsctl show
>>ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection
>>failed (No such file or directory)
>
> This is quite worrying. Is ovs on the host working properly?
> Can you please check
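The request above is cut off; a usual first check is whether the openvswitch service (which provides ovsdb-server and the db.sock socket) is running on the host, for example:

  # systemctl status openvswitch
  # systemctl start openvswitch
  # ovs-vsctl show

If the service was simply not running, ovs-vsctl should connect once it is started.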
freeze / freezing:
IO operations are paused for some reason.
Possible causes are (quick checks are sketched after this list):
1) net - some TCP framework collapse
2) gluster interconnect due to the gluster daemon - process hang??
3) VDSM - paused managed services
4) XFS - RW issues
5) swap overfilled - processes are being killed - but why swap is
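A few quick checks against the points above (just a generic sketch, nothing oVirt-specific assumed):

  # dmesg | grep -i "blocked for more than"
  # gluster volume status
  # free -m; vmstat 1 5
  # systemctl status vdsmd

The first looks for hung-task messages from the kernel (XFS/storage stalls), the second checks the Gluster daemons and bricks, the third shows swap pressure, and the last the state of VDSM itself.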
> No, I haven't defined any security policy explicitly. I'm using a network
> defined through Horizon without any special options.
This is strange, as the basic flow should only connect the vnic to the OVS
integration bridge.
Please give me some time to look at the code.
> Where do I find the xml file?
>[root@fltiovirt002 ~]# ovs-vsctl show
>53bfa8a8-80c5-4372-abb1-633c29eb1e60
> ovs_version: "2.6.90
This shows that OVS is up. However, it's missing the integration bridge
(br-int).
The integration bridge is the bridge to which the vm vnics should be
connected.
Try:
ovs-vsctl add-br br-int
this
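The message is cut off above; after running the add-br command, the bridge can be verified with, for example:

  # ovs-vsctl br-exists br-int && echo "br-int present"
  # ovs-vsctl show

The second command should now list br-int among the bridges.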
Hi everyone,
Tonight we experienced a hardware fault on one of our PHX storage servers.
The faulty server was used to provide storage for multiple production VMs.
Since automatic failover did not happen, they became unavailable.
The outage lasted between 09.03.2017 20:36 UTC and 10.03.2017 09:15 UTC
Luca,
Can you please send me the vdsm log of plugging a vnic to an external
network?
The relevant log file is: /var/log/vdsm/vdsm.log
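One simple way to capture only the relevant part of that log (a sketch; the output file name is arbitrary) is to follow it while the vnic is plugged in:

  # tail -n0 -f /var/log/vdsm/vdsm.log | tee /tmp/vdsm-vnic-plug.log

Start or hot-plug the VM with the external-network vnic, stop the tail with Ctrl-C, and attach the resulting file.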
On Fri, Mar 10, 2017 at 12:11 PM, Luca 'remix_tj' Lorenzetto <
lorenzetto.l...@gmail.com> wrote:
>
>
> On 10 Mar 2017 11:48 AM, "Marcin Mirecki" wrote:
Hi, all. We have an oVirt 4.1 cluster set up using multiple paths to a single
iSCSI LUN for the data storage domain. I would now like to migrate to a hosted
engine.
I set up the new engine VM, shut down and backed up the old VM, and restored to
the new VM using engine-backup. After updating DNS
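The text is cut off above; for reference, the backup/restore step mentioned is typically a pair of engine-backup runs like this (a sketch; file names are placeholders):

  # engine-backup --mode=backup --file=engine.backup --log=backup.log
  ... copy engine.backup to the new engine VM ...
  # engine-backup --mode=restore --file=engine.backup --log=restore.log \
      --provision-db --restore-permissions

The --provision-db option creates a fresh database on the new VM, and --restore-permissions keeps the database grants from the backup.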
and apparently it worked as expected.
The master (you? Jenkins CI? ;-) should see my change...
Just learnt (a little...) another thing:
sysadmins often try to keep an eye on the devs' field, but the reverse seldom
happens ;-)
On Fri, Mar 10, 2017 at 2:41 PM, Marcin Mirecki wrote:
> Luca,
>
> Can you please send me the vdsm log of plugging a vnic to an external
> network?
> The relevant log file is: /var/log/vdsm/vdsm.log
You mean when I power on a VM that is using that external network?
--
"E' assurdo impiegare gli
Hello,
In my 4.1 environment I sometimes get this kind of message, typically when
I create a snapshot.
Please note that the snapshot is correctly created: I get the event between
the snapshot creation initiated and completed events
VDSM ovmsrv05 command TeardownImageVDS failed: Cannot deactivate
We have an oVirt 4.1 cluster with an iSCSI data domain. If I shut down the
entire cluster and just boot the hosts, none of the hosts log in to their iSCSI
sessions until the engine comes up. Without logging into the sessions, sanlock
doesn't obtain any leases and obviously none of the VMs start
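The message above is cut off; for what it's worth, the sessions can be brought up by hand on a host while the engine is still down (target IQN and portal below are placeholders):

  # iscsiadm -m discovery -t sendtargets -p 192.0.2.10:3260
  # iscsiadm -m node -T iqn.2017-03.com.example:storage -p 192.0.2.10:3260 --login
  # iscsiadm -m session

Once the sessions are up, sanlock can reach the leases again.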
Hi, what kind of setup do you have? The hosted engine just runs on NFS or
Gluster, AFAIK.
regards,
2017-03-10 15:22 GMT-03:00 Devin A. Bougie :
> We have an oVirt 4.1 cluster with an iSCSI data domain. If I shut down
> the entire cluster and just boot the hosts, none of the hosts log in to
> their iSCSI
On Fri, Mar 10, 2017 at 6:24 PM, Gianluca Cecchi
wrote:
> Hello,
> In my 4.1 environment I sometimes get this kind of message, typically when I
> create a snapshot.
> Please note that the snapshot is correctly created: I get the event between
> the snapshot creation initiated and completed events
We have tried re-kickstarting the server, and removing it from oVirt and
reinstalling it.
oVirt still thinks it's not available to run VMs.
I even tried calling it by a different name, still no luck.
Starting a VM using "run-once" will deploy to the host just fine.
WHY?
I thought when a host is in