On Thu, Apr 5, 2018, 11:51 PM FERNANDO FREDIANI <fernando.fredi...@upx.com>
wrote:

> I always found replica 3 complete overkill. I don't know why people decided
> it was necessary. It just looks good and costs a lot with little benefit.
>

It's not very easy to solve split-brain with only 2 replicas.
You can use 2 + arbiter.
Y.
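For reference, a 2 + arbiter volume is created by asking for replica 3 with
one arbiter brick; the volume name, hostnames, and brick paths below are made
up for illustration:

```shell
# "replica 3 arbiter 1" means 2 full data copies plus 1 metadata-only
# arbiter brick: the arbiter can break ties to prevent split-brain,
# but stores no file data, so it needs very little disk.
gluster volume create vmstore replica 3 arbiter 1 \
    node1:/gluster/bricks/vmstore \
    node2:/gluster/bricks/vmstore \
    node3:/gluster/bricks/vmstore    # this last brick is the arbiter
gluster volume start vmstore
```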


> Normally, when using magnetic disks, 2 copies are fine for most scenarios;
> but when using SSDs for similar scenarios, depending on each node's disk
> configuration, it is possible to have something RAID 5/6-ish.
> Fernando
>
> 2018-04-05 17:38 GMT-03:00 Vincent Royer <vinc...@epicenergy.ca>:
>
>> Jayme,
>>
>> I'm doing a very similar build; the only real difference is that I am using
>> SSDs instead of HDDs.  I have similar questions as you regarding expected
>> performance. Have you considered JBOD + NFS?  Putting a Gluster replica 3
>> volume on top of RAID 10 arrays sounds very safe, but my gosh, the capacity
>> takes a massive hit.  Am I correct in saying you will only get 4TB of total
>> usable capacity out of 24TB worth of disks?  The cost per TB in that sort
>> of scenario is immense.
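The arithmetic behind that capacity hit, spelled out with the numbers from the
build in question:

```shell
# Per node: 4 x 2 TB disks; RAID 10 mirrors pairs, so half is usable.
raw_per_node=$((4 * 2))                 # 8 TB raw per server
raid10_per_node=$((raw_per_node / 2))   # 4 TB after RAID 10
# Replica 3 keeps a full copy of every byte on every node, so the
# cluster-usable capacity equals a single node's brick capacity.
cluster_raw=$((raw_per_node * 3))       # 24 TB raw across 3 servers
cluster_usable=$raid10_per_node         # 4 TB usable
echo "${cluster_usable} TB usable out of ${cluster_raw} TB raw"
# → 4 TB usable out of 24 TB raw
```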
>>
>> My plan is two 2TB SSDs per server in JBOD with a caching RAID card, with
>> replica 3.  I would end up with the same 4TB total capacity using 12TB of
>> SSDs.
>>
>> I think Replica 3 is safe enough that you could forgo the RAID 10. But
>> I'm talking from zero experience...  Would love others to chime in with
>> their opinions on both these setups.
>>
>> *Vincent Royer*
>> *778-825-1057*
>>
>>
>> <http://www.epicenergy.ca/>
>> *SUSTAINABLE MOBILE ENERGY SOLUTIONS*
>>
>>
>>
>>
>> On Thu, Apr 5, 2018 at 12:22 PM, Jayme <jay...@gmail.com> wrote:
>>
>>> Thanks for your feedback.  Any other opinions on this proposed setup?
>>> I'm very torn over using GlusterFS and what the expected performance may
>>> be, there seems to be little information out there.  Would love to hear any
>>> feedback specifically from ovirt users on hyperconverged configurations.
>>>
>>> On Thu, Apr 5, 2018 at 2:56 AM, Alex K <rightkickt...@gmail.com> wrote:
>>>
>>>> Hi,
>>>>
>>>> You should be ok with the setup.
>>>> I am running around 20 VMs (Linux and Windows, small and medium size)
>>>> with half of your specs. With a 10G network, replica 3 is OK.
>>>>
>>>> Alex
>>>>
>>>> On Wed, Apr 4, 2018, 16:13 Jayme <jay...@gmail.com> wrote:
>>>>
>>>>> I'm spec'ing hardware for a 3-node oVirt build (on somewhat of a
>>>>> budget).  I plan to run 20-30 Linux VMs, most of them very lightweight,
>>>>> plus a couple of heavier-hitting web and DB servers with frequent rsync
>>>>> backups.  Some have a lot of small files from large GitHub repos, etc.
>>>>>
>>>>> 3X of the following:
>>>>>
>>>>> Dell PowerEdge R720
>>>>> 2x 2.9 GHz 8 Core E5-2690 (SR0L0)
>>>>> 256GB RAM
>>>>> PERC H710
>>>>> 2x 10GbE NICs
>>>>>
>>>>> Boot/OS will likely be two cheaper small SATA SSDs in RAID 1.
>>>>>
>>>>> Gluster bricks composed of 4x 2TB WD Gold 7200RPM SATA HDDs in RAID 10
>>>>> per server, using a replica 3 setup (right now I'm thinking with no
>>>>> arbiter, for extra redundancy, although I'm not sure what the
>>>>> performance hit may be as a result).  Will this allow for two host
>>>>> failures, or just one?
>>>>>
>>>>> I've been really struggling with storage choices; it seems very
>>>>> difficult to predict the performance of GlusterFS due to the variance in
>>>>> hardware (everyone is using something different).  I'm not sure if the
>>>>> performance will be adequate for my needs.
>>>>>
>>>>> I will be using an already existing Netgear XS716T 10GbE switch for the
>>>>> Gluster storage network.
>>>>>
>>>>> In addition, I plan to build another simple GlusterFS storage server
>>>>> that I can geo-replicate the Gluster volume to for DR purposes, and to
>>>>> use existing hardware to build an independent standby oVirt host that
>>>>> can start up a few high-priority VMs from the geo-replicated GlusterFS
>>>>> volume if for some reason the primary oVirt cluster/GlusterFS volume
>>>>> ever failed.
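If it helps, Gluster's one-way async geo-replication for that DR setup is
configured roughly like this; the volume name "vmstore", DR volume
"vmstore-dr", and host "drhost" are made up, and passwordless root SSH from
the primary to the DR server is assumed:

```shell
# One-time setup: create the session and push the pem keys to the slave.
gluster volume geo-replication vmstore drhost::vmstore-dr create push-pem
# Start the asynchronous sync from the primary volume to the DR volume.
gluster volume geo-replication vmstore drhost::vmstore-dr start
# Check sync health before relying on it for failover.
gluster volume geo-replication vmstore drhost::vmstore-dr status
```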
>>>>>
>>>>> I would love to hear any advice or critiques on this plan.
>>>>>
>>>>> Thanks!
>>>>> _______________________________________________
>>>>> Users mailing list
>>>>> Users@ovirt.org
>>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>>
>>>>
>>>