On Nov 28, 2017 12:24 AM, "Ben Bradley" wrote:
On 23/11/17 06:46, Maton, Brett wrote:
> Might not be quite what you're after but adding
>
> # RHEV PRIVATE
>
> to /etc/multipath.conf will stop vdsm from changing the file.
>
Hi there. Thanks for the reply.
Yes I am aware.
What about mounting over NFS instead of the FUSE client? Or maybe libgfapi?
Is that available for export domains?
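For reference, the private marker mentioned above is just a comment near the top of /etc/multipath.conf; once it is present, vdsm leaves the rest of the file alone. A sketch (the settings below the marker are illustrative placeholders, not recommendations):

```
# RHEV PRIVATE

defaults {
    polling_interval    5
    no_path_retry       fail
}
```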
On Fri, Nov 24, 2017 at 3:48 AM Jiří Sléžka wrote:
> On 11/24/2017 06:41 AM, Sahina Bose wrote:
> >
> >
> > On Thu, Nov 23, 2017 at 4:56 PM, Jiří Sléžka
Hi,
when running "hosted-engine --deploy" on a fresh CentOS 7.4 host with a
pre-built network configuration (and ovirtmgmt bridge), the following error occurs:
2017-11-27 19:08:12 DEBUG otopi.context context._executeMethod:142 method
exception
Traceback (most recent call last):
File
On Mon, Nov 27, 2017 at 9:42 PM, Michal Skrivanek <
michal.skriva...@redhat.com> wrote:
>
> On 27 Nov 2017, at 15:27, CRiMSON wrote:
>
> HA on a single host is kind of a bad labeling of options IMHO. HA-SPOF
> (that makes me giggle a little).
>
> But maybe another option
On 23/11/17 06:46, Maton, Brett wrote:
Might not be quite what you're after but adding
# RHEV PRIVATE
to /etc/multipath.conf will stop vdsm from changing the file.
Hi there. Thanks for the reply.
Yes I am aware of that and it seems that's what I will have to do.
I have no problem with
Hi,
you can also add yourself to the RFE we have about this here:
https://bugzilla.redhat.com/show_bug.cgi?id=1325468
Best regards
Martin Sivak
SLA / oVirt
On Mon, Nov 27, 2017 at 9:42 PM, Michal Skrivanek
wrote:
>
> On 27 Nov 2017, at 15:27, CRiMSON
> On 27 Nov 2017, at 12:58, Arthur Melo wrote:
>
> Hello, there!
>
> Is it possible to reduce the time to auto-migrate machines when a node is
> non-responsive?
if you configure power management on the hosts, the actual state of the VMs
is confirmed by a fencing action, so it's going to be faster
do
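The engine-side timeouts involved can be inspected with engine-config. A sketch (the key name shown is an assumption and varies between versions, so list the available keys first):

```
# List the available keys; names differ between engine versions:
engine-config -l | grep -i -e vds -e fence
# Then inspect a candidate, e.g. (assumed key name):
engine-config -g TimeoutToResetVdsInSeconds
```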
HA on a single host is kind of a bad labeling of options IMHO. HA-SPOF
(that makes me giggle a little).
But maybe another option simply called "Start VM when engine is ready", and an
even more awesome setting would be the ability to number the VMs and control
how they start.
This is very much home-use-case stuff, though.
> On 27 Nov 2017, at 15:04, Derek Atkins wrote:
>
> Hi,
>
> On Mon, November 27, 2017 1:58 pm, CRiMSON wrote:
>> I've been digging through mailing lists and blogs and I'm a bit confused
>> about
>> how you have VMs auto-start after a reboot in an oVirt system that is set up
>> as
Hi,
On Mon, November 27, 2017 1:58 pm, CRiMSON wrote:
> I've been digging through mailing lists and blogs and I'm a bit confused
> about
> how you have VMs auto-start after a reboot in an oVirt system that is set up
> as all-in-one.
>
> From what I can gather this can be achieved via some startup
Hello
As some may have seen recently, OVS-DPDK has been introduced to oVirt
(https://ovirt.org/blog/2017/09/ovs-dpdk/). This is a very interesting
feature which can make a huge difference in terms of network
performance.
Just wanted to ask if anyone has tested it in any environment.
I've been digging through mailing lists and blogs and I'm a bit confused about
how you have VMs auto-start after a reboot in an oVirt system that is set up
as all-in-one.
From what I can gather this can be achieved via some startup scripts (or an
rc.local foo).
But there is no setting inside the
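The startup-script approach boils down to calling the engine's REST API once the engine is up. A minimal sketch in Python (the helper name is hypothetical; the `POST /ovirt-engine/api/vms/{id}/start` action endpoint is the engine's REST API):

```python
# Hypothetical helper: build the REST call that a boot-time script
# (cron @reboot, rc.local) would issue to start a VM.
def start_vm_request(engine_fqdn, vm_id):
    """Return (method, url, body) for the engine's 'start VM' action."""
    url = "https://{}/ovirt-engine/api/vms/{}/start".format(engine_fqdn, vm_id)
    # An empty <action/> element requests a plain start with defaults.
    return ("POST", url, "<action/>")
```

The resulting request can then be sent with curl (`-X POST -H 'Content-Type: application/xml' -d '<action/>'`) using engine credentials; polling `/ovirt-engine/api` until it answers avoids racing the engine's own startup.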
Hello, there!
Is it possible to reduce the time to auto-migrate machines when a node is
non-responsive?
Regards,
Arthur Melo
Linux User #302250
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
In my experience, though, you can't restore from a 3.6 engine backup to a
4.1 engine. You'd have to install 4.0, restore, then upgrade to 4.1,
wouldn't you?
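That stepwise path would look roughly like this (a sketch; check `engine-backup --help` on your version for the exact flags):

```
# On a freshly installed 4.0 engine host:
engine-backup --mode=restore --file=engine-3.6.backup \
              --provision-db --restore-permissions
engine-setup
# Then upgrade the engine packages to 4.1 and run engine-setup again.
```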
-Original Message-
From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf Of
Cam Wright
Sent: Wednesday, November
According to http://docs.ceph.com/docs/luminous/rbd/qemu-rbd/ the use of
Discard/TRIM for Ceph RBD disks is possible. OpenStack seems to have
implemented it
(https://www.sebastien-han.fr/blog/2015/02/02/openstack-and-ceph-rbd-discard/).
In oVirt there is no option "Enable Discard" for Cinder
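For context, what such a checkbox ultimately controls is the per-disk discard attribute in the libvirt domain XML that qemu is started from (a sketch; the pool/volume names are placeholders):

```xml
<disk type='network' device='disk'>
  <!-- discard='unmap' passes guest TRIM/discard requests to the RBD backend -->
  <driver name='qemu' type='raw' cache='none' discard='unmap'/>
  <source protocol='rbd' name='pool/volume'/>
  <target dev='vda' bus='virtio'/>
</disk>
```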
Hi Team,
I am looking at integrating OpenStack Neutron with oVirt.
Reading the docs so far, and through my setup experiments, I can see that
oVirt and neutron do seem to understand each other.
But I need some pointers to help me understand a few items
during configuration.
1) During
Hi list,
the access policy on my system has changed recently and now I have to manage
non-admin users who are allowed to create VMs only on selected resources.
Unfortunately this condition was not anticipated years ago when the system
was installed, and so all VNic profiles were created with the option
"Allow all
Hi,
This error just informs you that multiple hosts tried to start the
engine VM and one of them lost the lock race. Nothing to worry about
as long as the webadmin is up.
https://ovirt.org/documentation/how-to/hosted-engine/#engineunexpectedlydown
Best regards
--
Martin Sivak
oVirt / SLA
On
Hi Terry,
Can you please attach engine and vdsm logs?
Regards,
Shani Leviim
On Mon, Nov 27, 2017 at 11:29 AM, Terry hey wrote:
> Hello all,
> I installed ovirt self-hosted engine. Unfortunately, the engine VM was
> suddenly shutdown. And later, it automatically
Hello all,
I installed the oVirt self-hosted engine. Unfortunately, the engine VM was
suddenly shut down. Later, it automatically powered on. The engine admin
console showed the following error:
VM HostedEngine is down with error. Exit message: resource busy: Failed to
acquire lock: error -243.
I
Perfect!
Very detailed explanation.
For the moment I will simply ignore the warning message.
a.
On Mon, Nov 27, 2017 at 9:42 AM, Shani Leviim wrote:
> Hi,
> When you press 'OK', a snapshot of that disk's image chain (its base
> volume) is created in the source storage
Hi,
When you press 'OK', a snapshot of that disk's image chain (its base
volume) is created in the source storage domain, and the entire image chain
is replicated in the destination storage domain.
It doesn't affect the original disk, and meanwhile the VM keeps "acting
normally":
- If the VM's
On Thu, Nov 23, 2017 at 7:41 PM, Artem Tambovskiy <
artem.tambovs...@gmail.com> wrote:
> I have a question indirectly related to oVirt. I need to move one old
> setup into a VM running in an oVirt cluster. The VM was based on Debian 8.9,
> so I took a Debian cloud image from