Well good, we can at least bounce ideas off each other, and I'm sure we'll
get some good advice sooner or later! Best way to get good ideas on the
internet is to post bad ones and wait ;)
In the performance and sizing guide PDF, they make this statement:
*Standard servers with 4:2 erasure coding
Environment rundown:
- oVirt 4.2
- 6 CentOS 7.4 compute nodes, Intel Xeon
- 1 CentOS 7.4 dedicated engine node, Intel Xeon
- 1 Datacenter
- 1 Storage Domain
- 1 Cluster
- 10Gig-E iSCSI storage
- 10Gig-E NFS export domain
- 20 VMs of various OSes and uses
The current cluster is using the Nehalem architecture.
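For what it's worth, in Gluster terms that 4:2 erasure coding maps to a dispersed volume with 4 data bricks plus 2 redundancy bricks, i.e. 6 bricks total (one per compute node here) and 4/6 of raw capacity usable. A minimal sketch of the create command, with hostnames and brick paths purely hypothetical:
gluster volume create ec-vol disperse-data 4 redundancy 2 \
    node1:/bricks/ec node2:/bricks/ec node3:/bricks/ec \
    node4:/bricks/ec node5:/bricks/ec node6:/bricks/ec
Whether a dispersed volume is actually a good layout for VM images on this kind of cluster is exactly the sort of thing I'd want feedback on too.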
Vincent,
I've been back and forth on SSDs vs HDDs and can't really get a clear
answer. You are correct though, it would only equal 4TB usable in the end,
which is pretty crazy, but that many 7200 RPM HDDs costs about the
same as three 2TB SSDs would. I actually posted a question to this
I always found replica 3 complete overkill. I don't know why people decided
it was necessary; it just looks good and costs a lot with little benefit.
Normally, when using magnetic disks, two copies are fine for most scenarios,
but if using SSDs for similar scenarios, depending on the configuration of
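For comparison, the create commands differ only in the replica count; a minimal sketch with hypothetical hostnames and brick paths (recent Gluster releases warn about split-brain risk when you create a plain replica 2 volume):
gluster volume create vm-r2 replica 2 node1:/bricks/vm node2:/bricks/vm
gluster volume create vm-r3 replica 3 node1:/bricks/vm node2:/bricks/vm node3:/bricks/vm
Replica 2 stores every file twice and replica 3 three times, so usable capacity is 1/2 versus 1/3 of the combined brick size.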
Jayme,
I'm doing a very similar build; the only real difference is that I am using
SSDs instead of HDDs. I have similar questions as you regarding expected
performance. Have you considered JBOD + NFS? Putting a Gluster Replica 3
on top of RAID 10 arrays sounds very safe, but my gosh the capacity
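To put a number on the capacity hit, a worked example with hypothetical drive counts (it lines up with the 4TB figure mentioned earlier if each host has 4 x 2TB disks):
4 x 2TB disks per host  = 8TB raw per host, 24TB raw across 3 hosts
RAID 10 on each host    = 4TB usable per host
Gluster replica 3       = each host holds a full copy, so 4TB usable in total
Net result              = 4TB usable out of 24TB raw, i.e. 1/6 of what you bought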
Thanks for your feedback. Any other opinions on this proposed setup? I'm
very torn over using GlusterFS and what the expected performance may be;
there seems to be little information out there. Would love to hear any
feedback specifically from ovirt users on hyperconverged configurations.
On Thu, Apr 5, 2018, 5:31 PM Luca 'remix_tj' Lorenzetto <
lorenzetto.l...@gmail.com> wrote:
> Hello,
>
> we're planning an upgrade of an old 4.0 setup to 4.2, going through 4.1.
>
> What we found out is that when upgrading from major to major, cluster
> and datacenter compatibility upgrade has to
On Thu, Apr 5, 2018, 2:33 PM Nicolas Ecarnot wrote:
> Hello,
>
> Amongst others, I have one 3.6 DC working very well since years and all
> based on GlusterFS.
> When having a close look (qemu-img info) on the images, I see their
> format is all RAW and not QCOW2.
>
Raw
Since it's still not installed, yes ;)
On 05/04/2018 16:11, Rich Megginson wrote:
> Is it possible that you could start over from scratch, using the latest
> instructions/files at
> https://github.com/ViaQ/Main/pull/37/files?
>
> On 04/05/2018 07:19 AM, Peter Hudec wrote:
>> The version is from
Hello,
we're planning an upgrade of an old 4.0 setup to 4.2, going through 4.1.
What we found out is that when upgrading from one major version to another,
the cluster and datacenter compatibility level upgrade has to be done at the
end of the upgrade.
This means that we also need to restart our VMs to adapt the
Is it possible that you could start over from scratch, using the latest
instructions/files at
https://github.com/ViaQ/Main/pull/37/files?
On 04/05/2018 07:19 AM, Peter Hudec wrote:
The version is from
Hi,
the oVirt team released an update to oVirt 4.2.2 today, April 5th, including
the following packages:
- ovirt-hosted-engine-ha-2.2.10
- ovirt-hosted-engine-setup-2.2.16
- cockpit-ovirt-0.11.20-1
- ovirt-release42-4.2.2-3
Addressing the following issues:
- [BZ 1560666
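For anyone updating an existing 4.2 engine to this release, the usual flow on the engine host is roughly the following (a sketch of the standard procedure, not taken from this announcement):
# refresh the setup packages first
yum update "ovirt-*-setup*"
# re-run setup to apply the update
engine-setup
# then pull in the remaining updated packages
yum update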
The version is from
/usr/share/ansible/openshift-ansible/roles/openshift_facts/library/openshift_facts.py:get_openshift_version
[PROD] r...@dipostat01.cnc.sk: /usr/share/ansible/openshift-ansible #
/usr/bin/openshift version
openshift v3.10.0-alpha.0+f0186dd-401
kubernetes v1.9.1+a0ce1bc657
etcd
Sent from my iPhone
> On Apr 5, 2018, at 5:29 AM, Yaniv Kaul wrote:
>
>
>
>> On Thu, Apr 5, 2018 at 9:08 AM, TomK wrote:
>>> On 4/4/2018 3:11 AM, Yaniv Kaul wrote:
>>>
>>>
>>> On Wed, Apr 4, 2018 at 12:39 AM, Tom <t...@mdevsys.com> wrote:
Hello,
Amongst others, I have one 3.6 DC that has been working very well for years,
all based on GlusterFS.
When taking a close look (qemu-img info) at the images, I see that their
format is all RAW and not QCOW2.
I never noticed or bothered before, but I'm wondering:
- is it by design?
- is it something
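For anyone wanting to check the same thing on their own storage, a minimal sketch (paths hypothetical); qemu-img can also convert between the two formats:
# report the on-disk format (raw or qcow2), virtual size and allocation
qemu-img info /path/to/disk-image
# convert a raw image to qcow2 if needed
qemu-img convert -f raw -O qcow2 disk.raw disk.qcow2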
On Thu, Apr 5, 2018 at 9:08 AM, TomK wrote:
> On 4/4/2018 3:11 AM, Yaniv Kaul wrote:
>
>>
>>
>> On Wed, Apr 4, 2018 at 12:39 AM, Tom <t...@mdevsys.com> wrote:
>>
>>
>>
>> Sent from my iPhone
>>
>> On Apr 3, 2018, at 9:32 AM, Yaniv Kaul
Shot in the dark, but have you got the EPEL repo enabled by any chance?
On 4 April 2018 at 20:20, Vincent Royer wrote:
> Trying to update my nodes to 4.2.2, having a hard time.
>
> I updated the engine, no problems. Migrated VMs off host 1 and put it into
> maintenance. I do
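If EPEL does turn out to be in the mix, a quick way to check and to retry the update without it (a sketch; the exact repo id may differ on your nodes):
# see whether an EPEL repo is enabled on the node
yum repolist enabled | grep -i epel
# retry the update with EPEL temporarily excluded
yum update --disablerepo='epel*'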
The norm is to have a cluster with shared storage. So you have 3 to 5
hardware nodes that share storage for the hosted engine. That shared
storage is in sync, so you don't have one engine per physical node.
If one hardware node goes down, the engine is restarted on another node with
the help
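You can watch that failover behaviour from any of the HA hosts; a minimal sketch using the standard hosted-engine tooling:
# show which host currently runs the engine VM and each host's HA score
hosted-engine --vm-status
# put the whole HA cluster into global maintenance (e.g. before engine upgrades)
hosted-engine --set-maintenance --mode=global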
On 4/4/2018 3:11 AM, Yaniv Kaul wrote:
On Wed, Apr 4, 2018 at 12:39 AM, Tom <t...@mdevsys.com> wrote:
Sent from my iPhone
On Apr 3, 2018, at 9:32 AM, Yaniv Kaul wrote:
On Tue, Apr 3, 2018 at 3:12