Thanks, Konstantin.
Just to be clear: the first deployment would be done on classic eth
interfaces, and later, after the Hosted Engine is deployed, I can convert the
"ovirtmgmt" network to a LACP bond, right?
Another question: what about iSCSI Multipath on a Self-Hosted Engine? I've looked
Ok, thank you!
If I may abuse your patience a little more: why do you use CentOS instead of oVirt Node? What's
the reason behind this choice?
Thanks,
V.
On 4 Jul 2017, at 00:50, Vinícius Ferrão wrote:
Red Hat Virtualization Host (RHVH) is a minimal operating system based
on Red Hat Enterprise Linux that is designed to provide a simple
method for setting up a physical machine to act as a hypervisor in a
Red Hat Virtualization environment. The minimal operating system
contains only the
Not for the hosted engine; with ovirt-engine, of course.
On 07/04/2017 11:27 AM, Yaniv Kaul wrote:
How are you using Ceph for hosted engine?
--
Best regards,
Konstantin Shalygin
LOL.
It’s the hypervisor appliance, just like RHVH.
> On 4 Jul 2017, at 01:23, Konstantin Shalygin wrote:
>
> I don't know what oVirt Node is :)
>
> And for "generic_linux" I have 95% automation (work in progress).
>
>
> On 07/04/2017 11:20 AM, Vinícius Ferrão wrote:
>> Just
Hello,
I'm deploying oVirt for the first time and a question has come up: what is the
best practice for enabling LACP on oVirt Node? Should I create the 802.3ad bond
during the oVirt Node installation in Anaconda, or should it be done later, from
within the Hosted Engine manager?
In my
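For context, whichever place the bond ends up being created, this is roughly
what an 802.3ad (LACP) bond looks like when set up with nmcli on the host
side; the NIC names (em1, em2) and the bond name are assumptions, not taken
from this thread:

  # create the bond in 802.3ad (LACP) mode
  nmcli con add type bond ifname bond0 con-name bond0 bond.options "mode=802.3ad,miimon=100"
  # enslave the two physical NICs (hypothetical names)
  nmcli con add type bond-slave ifname em1 master bond0
  nmcli con add type bond-slave ifname em2 master bond0
  nmcli con up bond0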
Yes, I do deployment in four steps:
1. Install CentOS via iDRAC.
2. Attach the VLAN to the 10G physdev via iproute (sketched below). This is the
one manual step; it could be replaced by DHCP management, but for now I only
have 2x10G fiber, without any DHCP.
3. Run the ovirt_deploy Ansible role.
4. Attach oVirt networks after host
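As a hedged illustration of step 2 above (the physdev name, VLAN ID and
address below are made up, not taken from Konstantin's setup):

  # tag VLAN 100 on the 10G interface and bring it up
  ip link add link enp4s0f0 name enp4s0f0.100 type vlan id 100
  ip addr add 192.168.100.11/24 dev enp4s0f0.100
  ip link set enp4s0f0.100 up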
I don't know what oVirt Node is :)
And for "generic_linux" I have 95% automation (work in progress).
On 07/04/2017 11:20 AM, Vinícius Ferrão wrote:
If I may abuse your patience a little more: why do you use CentOS instead of oVirt Node?
What’s the reason behind this choice?
--
Best regards,
Konstantin
On Jul 4, 2017 7:14 AM, "Konstantin Shalygin" wrote:
Yes, I do deployment in four steps:
1. Install CentOS via iDRAC.
2. Attach the VLAN to the 10G physdev via iproute. This is the one manual step;
it could be replaced by DHCP management, but for now I only have 2x10G fiber,
without any DHCP.
3.
On Tue, Jul 4, 2017 at 3:51 AM, Vinícius Ferrão wrote:
> Hello,
>
> I'm deploying oVirt for the first time and a question has come up: what is
> the best practice for enabling LACP on oVirt Node? Should I create the 802.3ad bond
> during the oVirt Node installation in Anaconda, or
On Mon, Jul 3, 2017 at 6:49 AM, M Mahboubian wrote:
> Hi Yaniv,
>
> Thanks for your reply. Apologies for my late reply; we had a long holiday
> here.
>
> To answer you:
>
> Yes, the guest VM becomes completely frozen and non-responsive as soon as
> its disk has any activity
Any recommendations in case I have to take down a site for maintenance?
Thanks,
Gianluca
Hi Sophia,
As part of the UI redesign we did for 4.2, we decided to remove the user
portal, and replace it with the new VM portal.
It will soon be removed from the git repo as well, and the list will be
removed from the welcome page.
CC-ing Michal and Tomas from the virt team, in case you have
On 07/03/2017 02:35 PM, Gianluca Cecchi wrote:
Any recommendations in case I have to take down a site for maintenance?
Thanks,
Gianluca
Hi,
You can follow
Could you provide output of "gluster peer status" and "gluster volume
info" ?
On Sun, Jul 2, 2017 at 9:33 AM, Mike DePaulo wrote:
> Hi,
>
>
> I configured a "Gluster storage" network, but it doesn't look like it
> is being used for Gluster. Specifically, the switch's
On Mon, Jul 3, 2017 at 12:08 PM, Sophia Valentine wrote:
> Hi all!
>
> I installed the nightly from the repo on CentOS 7.0.
>
I suggest a newer CentOS - 7.3 was released a few months ago.
>
> I currently have an issue where visiting the user portal triggers the
>
On Mon, Jul 3, 2017 at 11:00 AM, M Mahboubian
wrote:
> Hi Yaniv,
>
> Thank you for your reply.
>
> | Interesting - what interface are they using?
> | Is that raw or raw sparse? How did you perform the conversion? (or no
> conversion - just copied the disks over?)
>
> The
On Mon, Jul 3, 2017 at 12:27 AM, Darrell Budic
wrote:
> It seems vdsmd under 4.1.x (or something under its control) changes the
> disk schedulers when it starts or a host node is activated, and I’d like to
> avoid this. Is it preventable? Or configurable anywhere? This
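Not an answer on whether vdsm exposes this, but a minimal sketch for
inspecting the current scheduler and pinning one back with a udev rule; the
device pattern and the choice of "deadline" are assumptions:

  # show the active scheduler (the one in brackets)
  cat /sys/block/sda/queue/scheduler
  # e.g.: noop [deadline] cfq

  # /etc/udev/rules.d/60-io-scheduler.rules
  ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/scheduler}="deadline"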
Hi all!
I installed the nightly from the repo on CentOS 7.0.
I currently have an issue where visiting the user portal triggers the
following errors in the browser console. I believe that I will need to
compile the engine from source due to the current source being
compacted, thus obscuring
On Sun, Jul 2, 2017 at 5:38 AM, Mike DePaulo wrote:
> Hi everyone,
>
> I have ovirt 4.1.1/4.1.2 running on 3 hosts with a gluster hosted engine.
>
> I was working on setting up a network for gluster storage and
> migration. The addresses for it will be 10.0.20.x, rather
Hi Yaniv,
Thank you for your reply.
| Interesting - what interface are they using?
| Is that raw or raw sparse? How did you perform the conversion? (or no
conversion - just copied the disks over?)
The VM disks are on the SAN storage; to use oVirt we just pointed them
at the oVirt VMs.
> On 3 Jul 2017, at 12:21, Oved Ourfali wrote:
>
> Hi Sophia,
>
> As part of the UI redesign we did for 4.2, we decided to remove the user
> portal, and replace it with the new VM portal.
> It will soon be removed from the git repo as well, and the list will be
> removed
Note we do not support 7.0 anymore. oVirt 4.1 needs 7.3;
master/nightly will require 7.4 as soon as it is released.
The ISO I had was CentOS 7.0 (whoops), I'll go upgrade it ASAP!
Thanks.
But as Oved says, there's not much sense in trying to
use/compile the UserPortal on the current nightly.
Yaniv Kaul writes:
On Mon, Jul 3, 2017 at 12:08 PM, Sophia Valentine
wrote:
Hi all!
I installed the nightly from the repo on CentOS 7.0.
I suggest a newer CentOS - 7.3 was released a few months ago.
Will upgrade ASAP (my ISO was a
Hi,
And sorry for the delay.
2017-06-30 14:09 GMT+02:00 knarra:
> To add a fully replicated node you need to reduce the replica count to 2
> and add a new brick to the volume so that it becomes replica 3. Reducing
> the replica count by removing a brick from a replica / arbiter cannot
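For context, a rough sketch of what that replica change looks like on the
CLI; the volume name, host and brick paths are hypothetical, and removing a
replica brick is disruptive, so treat this as an outline rather than a
procedure:

  # drop to replica 2 by removing the old brick
  gluster volume remove-brick myvol replica 2 host3:/gluster/old/brick force
  # go back to replica 3 with the new brick
  gluster volume add-brick myvol replica 3 host3:/gluster/new/brick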
Oved Ourfali writes:
Hi Sophia,
As part of the UI redesign we did for 4.2, we decided to remove
the user portal, and replace it with the
new VM portal.
Ah, I see!
It will soon be removed from the git repo as well, and the list
will be removed from the welcome page.
As Nicolas stated below, and as evidenced by [1], ladies and gentlemen - we
are live.
The first person to share an actual screenshot of Stack Overflow displaying
this ad gets a +2 on their next engine patch :-)
And thanks again to Eldan for designing the ad, and to all the upvoters.
[1]
On Mon, Jul 3, 2017 at 3:46 PM, Andrew Dent wrote:
> Has anyone successfully completed a hosted-engine recovery on a multiple
> host setup with production VMs?
I'd like to clarify that "recovery" can span a large spectrum of
flows, from a trivial "I did some change to the
On Sun, Jul 2, 2017 at 12:21 AM, Doug Ingham wrote:
>
> The only problem I would like to manage is that the gluster network is shared
>> with the ovirtmgmt one.
>> Can I move it now with these updated packages?
>>
>
> Are the gluster peers configured with the same hostnames/IPs as your
Please attach glusterd & cmd_history log files from all the nodes.
On Mon, Jul 3, 2017 at 2:55 PM, Sahina Bose wrote:
>
>
> On Sun, Jul 2, 2017 at 5:38 AM, Mike DePaulo wrote:
>
>> Hi everyone,
>>
>> I have ovirt 4.1.1/4.1.2 running on 3 hosts with a
Has anyone successfully completed a hosted-engine recovery on a multiple
host setup with production VMs?
Kind regards
Andrew
-- Original Message --
From: "Andrew Dent"
To: "users"
Sent: 2/07/2017 2:22:16 PM
Subject: [ovirt-users]
Hi,
Can someone help me select the PinToHost policy in oVirt 4.0? I am lost in the GUI.
http://www.ovirt.org/develop/release-management/features/sla/scheduler-policies/
Thanks,
~Rohit
I have exactly the same doubt here as well.
On 03/07/2017 12:05, aduckers wrote:
Running a 4.1 cluster with FC SAN storage. I’ve got a VM that I’ve customized,
and would now like to pull that out of oVirt in order to share with folks
outside the environment.
What’s the easiest way to do
I've found this in the RHV manual, which should apply to oVirt:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.5/html/Administration_Guide/sect-Preparing_and_Adding_Block_Storage.html#Configuring_iSCSI_Multipathing
Let me know if it works, because I'm considering a
Running a 4.1 cluster with FC SAN storage. I’ve got a VM that I’ve customized,
and would now like to pull that out of oVirt in order to share with folks
outside the environment.
What’s the easiest way to do that?
I see that the export domain is being deprecated, though I can still set one up
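One hedged option (not necessarily the easiest): export the VM to an export
domain, then convert the exported volume with qemu-img into a single file you
can hand over; the path below is a placeholder, not a real layout:

  qemu-img convert -O qcow2 /path/to/export-domain/images/<image-uuid>/<volume-uuid> customized-vm.qcow2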
I got it.
It is under the Configure button at the top right corner.
I want to do the following; please help me figure out how to make it possible
using oVirt.
(Let's assume 2 host systems and 1 VM.)
1> I want to pin a particular VM to a host, let's say host1.
2> In some circumstances, let's say host1 goes down; in that
2017-07-03 15:42 GMT+02:00 knarra:
> So, please power off your VMs while performing this.
Thank you,
OK, no problem, the cluster is not (yet) in production.
Thank you again!
Hi,
I'm trying to use iSCSI multipathing for a LUN shared by a Hitachi SAN.
I can't figure out how this is supposed to work; maybe my setup isn't
applicable at all...
Our storage admin shared the same LUN for me on two targets, which are
located in two logical networks connected to
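For what it's worth, a minimal sketch of checking from a host that both paths
to the LUN are reachable before involving oVirt at all; the portal addresses
are invented:

  iscsiadm -m discovery -t sendtargets -p 10.10.1.10:3260
  iscsiadm -m discovery -t sendtargets -p 10.10.2.10:3260
  iscsiadm -m node -l
  multipath -ll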
On 07/03/2017 06:53 PM, yayo (j) wrote:
Hi,
And sorry for the delay.
2017-06-30 14:09 GMT+02:00 knarra:
To add a fully replicated node you need to reduce the replica
count to 2 and add a new brick to the volume so that it becomes
replica
Hi Didi,
Fair enough. Say I'm in this situation:
I have 3 hosts with 6 production VMs.
The hosted-engine VM is completely toast and not recoverable.
However, I have a backup of the hosted-engine database (do I need
anything else?).
Is it possible to build a new VM, import the backup of the
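Not the full hosted-engine recovery flow, just a hedged sketch of the
engine-backup commands usually involved; the file names are arbitrary and the
surrounding redeploy/restore steps are documented separately:

  # taking the backup (on the engine, while it was still alive)
  engine-backup --mode=backup --file=engine-backup.tar.gz --log=backup.log
  # restoring it on a freshly installed engine VM
  engine-backup --mode=restore --file=engine-backup.tar.gz --log=restore.log --provision-db --restore-permissions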
On 07/03/2017 06:58 PM, knarra wrote:
On 07/03/2017 06:53 PM, yayo (j) wrote:
Hi,
And sorry for the delay.
2017-06-30 14:09 GMT+02:00 knarra:
To add a fully replicated node you need to reduce the replica
count to 2 and add a new brick to the
Hi Andrew,
This is my personal experience of recovery.
Yes, you can recover the hosted engine from a backup (almost every action you
did on oVirt after the backup will have to be manually reproduced).
This can be tricky depending on your conditions (and there are some subtleties
which can have
It depends on how 'toasted' the engine is.
It is possible to mount the disk if, for example, a kernel upgrade
has gone wrong and is stopping the machine from booting.
On 3 July 2017 at 15:23, Yedidyah Bar David wrote:
> On Mon, Jul 3, 2017 at 4:40 PM, Andrew Dent
Hi Yaniv,
I tried looking in the rng direction and I found the stats below.
I am not familiar with the rng device, but it looks to me like /dev/urandom
gives me a better option.
But I am not sure how I can use the urandom device in oVirt.
RANDOM DEVICE ==>
cat /dev/random | rngtest -c 1000
rngtest 5
Copyright (c)
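For a like-for-like comparison, the same test can be pointed at /dev/urandom
(this only measures the stream, it does not configure anything in oVirt):

  cat /dev/urandom | rngtest -c 1000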
I have a rather strange issue which is affecting one of my recently deployed
hypervisors. It is a CentOS 7 host (not an oVirt Node) which runs only 3
virtual machines.
One of these VMs has considerable outbound traffic at peaks (500 -
700 Mbps), and the hypervisor underneath is connected to the switch via
On Mon, Jul 3, 2017 at 4:40 PM, Andrew Dent wrote:
> Hi Didi
>
> Fair enough.
> If I'm in this situation.
> I have 3 hosts with 6 production VMs.
> The hosted-engine VM is completely toast and not recoverable.
Meaning? It does not even boot? If so, then I am afraid