Hi guys, good morning/night.
First of all, thanks to the whole community and team; you are making
a great effort.
Today I'm having an issue with oVirt 4.1.2 running as a hosted engine over
NFS, with data storage over iSCSI.
I found out about the problem when I tried to migrate one VM to another
Hi Devin,
Please consider that for the OS I have a RAID 1. Now, let's say I use RAID 5 to
assemble a single disk on each server. In this case, the SSD will not make any
difference, right? I guess that for it to be usable, the SSD should not be
part of the RAID 5. In this case I could
Well, not sure this is legit to try, but I did try it... and it seemed to
work; I was able to recreate the storage domain as needed. Access the
database and find the bogus storage domain: I figured out the table
names from other discussions about accessing the database directly to
clean up
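For anyone else in this spot, the lookup amounts to roughly this (a sketch, assuming the default 'engine' database and the storage_domain_static table from those discussions; verify the id and any tables referencing it before deleting anything):
$ su - postgres -c "psql engine"
engine=# SELECT id, storage_name FROM storage_domain_static;
engine=# -- delete the bogus row only after confirming the id everywhere it is referenced
engine=# DELETE FROM storage_domain_static WHERE id = '<bogus-domain-id>';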
Bit more information...
[oVirt shell (connected)]# list storagedomains
id : 2b6d2e60-6cfb-45cd-a6e1-8cf38ad2bf34
name : Datastore_02
This is the new domain that failed, since the master was not 100%
up/initialized, but does NOT appear in the UI of course. When I try to
remove
Domain name in use? After failed domain setup?
I attempted to create a new domain, but I did not realize the master
domain was not 100% initialized. The new domain creation failed, but it
appears the new domain 'name' was used. Now I cannot create the new
domain as expected; I get a UI error
Moacir, I understand that if you do this type of configuration you will be
severely impacted on storage performance, especially for writes. Even if you
have a hardware RAID controller with writeback cache you will have a
significant performance penalty and may not fully use all the resources you
Hi Colin,
Take a look at Devin's response. Also, read the doc he shared; it gives some
hints on how to deploy Gluster.
The gist is that if you want high performance you should have the bricks
created as RAID (5 or 6) by the server's disk controller and then assemble
a JBOD GlusterFS on top of them.
chronyd is the new ntpd.
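For example, on an oVirt Node host you can check it like this (standard chrony commands):
$ systemctl enable chronyd
$ systemctl start chronyd
$ chronyc sources -v   # verify the configured time sources are reachable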
On Aug 7, 2017 7:23 PM, "Moacir Ferreira"
wrote:
> I found that NTP does not get installed on oVirt Node in the latest
> version ovirt-node-ng-installer-ovirt-4.1-2017052309.iso.
>
>
> Also the installed repositories
I found that NTP does not get installed on oVirt Node in the latest version
ovirt-node-ng-installer-ovirt-4.1-2017052309.iso.
Also, the installed repositories do not have it. So, is this a bug, or is NTP
not considered appropriate anymore?
Thanks.
Moacir
the file/move 40Gb NICs.
Why not jumbo frames everywhere?
Hi Fernando,
Indeed, having an arbiter node is always a good idea, and it saves a lot
of cost.
Good luck with your setup.
Cheers
Erekle
On 07.08.2017 23:03, FERNANDO FREDIANI wrote:
Thanks for the detailed answer, Erekle.
I conclude that it is worth it in any scenario to have an arbiter node
Hello.
Although I didn't get any more feedback on this topic, I just wanted to
let people know that since I moved the VM to another oVirt cluster
running oVirt Node NG and kernel 3.10 the problem stopped happening.
I still don't know the cause of it, but I suspect it may have to do
with
Thanks for the detailed answer, Erekle.
I conclude that it is worth it in any scenario to have an arbiter node in
order to avoid wasting more disk space on RAID X + Gluster replication
on top of it. The cost seems much lower if you consider the running
costs of the whole storage and compare it
Hi Moacir,
First, switch off all VMs.
Second, you need to put the hosts into maintenance mode; don't start with
the SPM (of course, if you are able, use the ovirt-engine). It will ask you
to shut down glusterfs on the machine.
Third, if all machines are in maintenance mode, you can start shutting
down
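A rough sketch of that sequence on a hosted-engine setup (standard hosted-engine and systemd commands; ordering and host names still need care):
$ hosted-engine --set-maintenance --mode=global   # stop engine HA monitoring
$ hosted-engine --vm-shutdown                     # shut down the engine VM
# then, on each host, once no VMs or bricks are in use:
$ systemctl stop glusterd
$ shutdown -h now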
Hi Fernando (sorry for misspelling your name, I used a different keyboard),
So let's go with the following scenarios:
1. Let's say you have two servers (replication factor is 2), i.e. two
bricks per volume. In this case it is strongly recommended to have the
arbiter node, the metadata storage
Hi Franando,
So let's go with the following scenarios:
1. Let's say you have two servers (replication factor is 2), i.e. two
bricks per volume. In this case it is strongly recommended to have the
arbiter node, the metadata storage that will guarantee avoiding the
split-brain situation, in
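For reference, a minimal sketch of creating such an arbiter-backed volume (volume name, hosts, and brick paths are placeholders):
$ gluster volume create vmstore replica 3 arbiter 1 \
    host1:/gluster/brick/vmstore \
    host2:/gluster/brick/vmstore \
    host3:/gluster/arbiter/vmstore   # the arbiter brick stores metadata only
$ gluster volume start vmstore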
What you mentioned is a specific case, not a generic situation. The
main point there is that RAID 5 or 6 impacts write performance compared
to when you write to only 2 given disks at a time. That was the comparison
made.
Fernando
On 07/08/2017 16:49, Fabrice Bacchella wrote:
On 7 Aug
>> Moacir: Yes! This is another reason to have separate networks for
>> north/south and east/west. In that way I can use the standard MTU on the
>> 10Gb NICs and jumbo frames on the file/move 40Gb NICs.
Why not jumbo frames everywhere?
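For what it's worth, enabling jumbo frames on a host interface is a one-liner (interface name is a placeholder; every NIC and switch port on the path must carry the same MTU):
$ ip link set dev enp5s0f0 mtu 9000
# persistent on EL7: add MTU=9000 to /etc/sysconfig/network-scripts/ifcfg-enp5s0f0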
> On 7 Aug 2017 at 17:41, FERNANDO FREDIANI wrote:
>
> Yet another downside of having a RAID (especially RAID 5 or 6) is that it
> considerably reduces the write speeds, as each group of disks will end up
> having the write speed of a single disk, as all other
Hi Colin,
I am in Portugal, so sorry for this late response. It is quite confusing for
me; please consider:
1 - What if the RAID is done by the server's disk controller, not by software?
2 - For JBOD I am just using gdeploy to deploy it. However, I am not using the
oVirt Node GUI to do
I have installed an oVirt cluster in a KVM virtualized test environment. Now,
how do I properly shut down the oVirt cluster, with Gluster and the hosted
engine?
I.e.: I want to install a cluster of 3 servers and then send it to a remote
office. How do I do it properly? I noticed that glusterd is
On Mon, Aug 7, 2017 at 1:52 PM, Karli Sjöberg wrote:
> On Mon, 2017-08-07 at 12:46 +0200, Johan Bernhardsson wrote:
> > There is no point in doing that, as Azure is a cloud in itself and
> > oVirt
> > is for building your own virtual environment to deploy on local hardware.
On Mon, Aug 7, 2017 at 2:41 PM, Colin Coe wrote:
> Hi
>
> I just thought that you'd do hardware RAID if you had the controller or
> JBOD if you didn't. In hindsight, a server with 40Gbps NICs is pretty
> likely to have a hardware RAID controller. I've never done JBOD with
On Mon, Aug 7, 2017 at 6:41 PM, FERNANDO FREDIANI wrote:
> Thanks for the clarification, Erekle.
>
> However, I am surprised by this way of operating GlusterFS, as it
> adds another layer of complexity to the system (either a hardware or
> software RAID) before
Thanks,
That works, but I still have the engine reporting that 1 node has an
update available, which is strange:
"Check for available updates on host x.xxx.xxx was completed successfully
with message 'found updates for packages
ovirt-node-ng-image-update-4.1.4-1.el7.centos'."
Regards,
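If it helps, you can cross-check what the node actually carries (imgbase is the oVirt Node layering tool; treat this as a sketch):
$ imgbase layout                                 # list the installed image layers
$ yum list installed ovirt-node-ng-image-update  # what the engine compares against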
> A better workaround is to download a newer python-apt from here [1] and
> install it manually with dpkg. The guest agent seems to work OK with it.
> There's much less chance of breaking something else; also, the newer
> python-apt will be picked up automatically on upgrades.
>
> Tomas
>
>
> [1]
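For completeness, the manual install amounts to something like this (the filename is illustrative; use the actual .deb from [1]):
$ dpkg -i python-apt_1.1.0~beta5_amd64.deb
$ apt-get -f install   # pull in any missing dependencies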
Thanks for the clarification, Erekle.
However, I am surprised by this way of operating GlusterFS, as it
adds another layer of complexity to the system (either a hardware or
software RAID) before the gluster config and increases the system's
overall costs.
An important point to consider
On Mon, 7 Aug 2017 16:08:00 +0200
"yayo (j)" wrote:
> >
> > This is the problem!
> >
> > I looked at the packages for conflicts and figured the issue is in gnupg.
> > The Zentyal repository contains gnupg version 2.1.15-1ubuntu6, which breaks
> > python-apt <= 1.1.0~beta4.
> >
> >
>
> Agreed.
>>
>> Open a bug with Zentyal. They broke the packages from Ubuntu and should
>> fix it themselves. They have to backport a newer version of python-apt;
>> the one from yakkety (1.1.0~beta5) should be good enough to fix the
>> problem.
>>
>> In the bug report note that the
Vinícius Ferrão writes:
> Hello Chris,
>
> On non-node installation I can’t see any problems as you said, but due
> to the appliance nature of oVirt Node I don’t know if this would be a
> supported scenario. Anyway you raised a good point: local storage. I’m
> not needing
>
> This is the problem!
>
> I looked at the packages for conflicts and figured the issue is in gnupg.
> The Zentyal repository contains gnupg version 2.1.15-1ubuntu6, which breaks
> python-apt <= 1.1.0~beta4.
>
>
Ok, thank you! Is there any workaround (something like package pinning?) to fix
this problem?
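In case it helps, a pin would look roughly like this (untested sketch; it forces Ubuntu's gnupg over the Zentyal build, though the python-apt route mentioned earlier in the thread is the recommended fix):
# /etc/apt/preferences.d/gnupg-pin
Package: gnupg
Pin: release o=Ubuntu
Pin-Priority: 1001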
Hi Frenando,
Here is my experience: if you consider a particular hard drive as a
brick for a gluster volume and it dies, i.e. it becomes inaccessible,
it's a huge hassle to discard that brick and exchange it with another one,
since gluster sometimes tries to access that broken brick and it's causing
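For context, swapping out a dead brick looks roughly like this (volume name and brick paths are placeholders), which is the hassle being described:
$ gluster volume replace-brick vmstore \
    host1:/gluster/brick-dead/vmstore \
    host1:/gluster/brick-new/vmstore \
    commit force   # then wait for self-heal to repopulate the new brick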
Moacir, I believe that to use the 3 servers directly connected to each
other without a switch, you have to have a bridge on each server for every
2 physical interfaces to allow the traffic to pass through at layer 2 (is it
possible to create this from the oVirt Engine web interface?). If your
ovirtmgmt
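A rough iproute2 sketch of one such bridge (interface and bridge names are placeholders; whether the engine UI can model this is the open question above):
$ ip link add br-east type bridge
$ ip link set enp5s0f0 master br-east   # link to server B
$ ip link set enp5s0f1 master br-east   # link to server C
$ ip link set br-east up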
For any RAID 5 or 6 configuration I normally follow a simple golden rule
which has given good results so far:
- up to 4 disks: RAID 5
- 5 or more disks: RAID 6
However, I didn't really understand the recommendation to use any
RAID with GlusterFS. I always thought that GlusterFS likes to work in
Hi, in-line responses.
Thanks,
Moacir
From: Yaniv Kaul
Sent: Monday, August 7, 2017 7:42 AM
To: Moacir Ferreira
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Good practices
On Sun, Aug 6, 2017 at 5:49 PM, Moacir Ferreira
Hi,
The problem should be solved here:
http://jenkins.ovirt.org/job/ovirt-node-ng_ovirt-4.1_build-artifacts-el7-x86_64/lastSuccessfulBuild/artifact/exported-artifacts/
Thanks,
Yuval.
On Fri, Aug 4, 2017 at 12:37 PM, Staniforth, Paul <
p.stanifo...@leedsbeckett.ac.uk> wrote:
> Hello,
>
>
On Mon, 7 Aug 2017 10:18:35 +0200
"yayo (j)" wrote:
> Hi,
>
>
> > I just tried that with the development version of Zentyal and it works for
> > me. Well, there are some caveats; see below.
> >
> >
> Please provide steps not just "works for me" ... Thank you
>
>
>
> > > Just
Hi
I just thought that you'd do hardware RAID if you had the controller or
JBOD if you didn't. In hindsight, a server with 40Gbps NICs is pretty
likely to have a hardware RAID controller. I've never done JBOD with
hardware RAID. I think having a single gluster brick on hardware JBOD
would be
On Mon, 2017-08-07 at 12:46 +0200, Johan Bernhardsson wrote:
> There is no point in doing that, as Azure is a cloud in itself and
> oVirt
> is for building your own virtual environment to deploy on local hardware.
Yeah, of course, and I think Grzegorz knows that. But for people in the
testing,
There is no point in doing that, as Azure is a cloud in itself and oVirt
is for building your own virtual environment to deploy on local hardware.
/Johan
On Mon, 2017-08-07 at 12:32 +0200, Grzegorz Szypa wrote:
> Hi.
>
> Did anyone try to install oVirt in an Azure environment?
>
> --
> G.Sz.
>
Hi.
Did anyone try to install oVirt in an Azure environment?
--
G.Sz.
The best approach is to use this tool:
$ ovirt-engine-extensions-tool --log-level=FINEST aaa search
--extension-name=your-openldap-authz-name --entity-name=myuser
It prints pretty verbose output, which you can analyze.
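You can also simulate a full login with the same tool, which exercises both authn and authz (profile and user names are placeholders):
$ ovirt-engine-extensions-tool aaa login-user \
    --profile=your-openldap-profile --user-name=myuser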
On Mon, Aug 7, 2017 at 9:01 AM, NUNIN Roberto wrote:
>
Devin,
Many, many thanks for your response. I will read the doc you sent, and if I
still have questions I will post them here.
But why would I use a RAIDed brick if Gluster, by itself, already "protects"
the data by making replicas? You see, that is what is confusing to me...
Thanks,
Moacir
On Tue, Aug 1, 2017 at 3:39 PM, Marcelo Leandro wrote:
> Good morning
>
> I bought an external certificate from GoDaddy, and they sent me only
> one .crt file. I saw this:
> https://www.ovirt.org/documentation/admin-guide/appe-oVirt_and_SSL/
>
> I don't know how I
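From that appendix, the replacement boils down to roughly this (a sketch with placeholder filenames; back up the originals and follow the guide's exact steps, including the CA trust-store update):
$ cp godaddy_ca_bundle.pem /etc/pki/ovirt-engine/apache-ca.pem
$ cp your_private.key /etc/pki/ovirt-engine/keys/apache.key.nopass
$ cp your_certificate.crt /etc/pki/ovirt-engine/certs/apache.cer
$ systemctl restart httpd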
Hi,
> I just tried that with the development version of Zentyal and it works for
> me. Well, there are some caveats; see below.
>
>
Please provide steps not just "works for me" ... Thank you
> > Just wanted to add my input. I just recently noticed the same thing.
> > Luckily I was just testing
I've got two oVirt 4.1.4.2-1 pods used for labs.
These two pods are configured in the same way (three nodes with gluster).
Trying to set up LDAP auth against the same OpenLDAP server, setup completes
correctly in both engine VMs.
When I try to perform a system permission modification, only one of these is
On Sun, Aug 6, 2017 at 5:49 PM, Moacir Ferreira
wrote:
> I am willing to assemble an oVirt "pod", made of 3 servers, each with 2 CPU
> sockets of 12 cores, 256GB RAM, 7 HDD 10K, 1 SSD. The idea is to use
> GlusterFS to provide HA for the VMs. The 3 servers have a dual