Hi Mohamad, thanks for your email!

I will try to separate my racks by SATA hardware type and create new pools in
my environment.
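
Roughly what I have in mind, in case it is useful to anyone else (the bucket,
rule and pool names below are only placeholders, and the PG counts are just
examples, not recommendations):

    # separate root bucket for the new hardware, with its hosts moved under it
    ceph osd crush add-bucket sata-new root
    ceph osd crush move <hostname> root=sata-new

    # CRUSH rule that only targets that root, and a pool pinned to it
    ceph osd crush rule create-replicated rule-sata-new sata-new host
    ceph osd pool create volumes-sata-new 512 512 replicated rule-sata-new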

Thanks and best regards,
Fabio Abreu

On Thu, Feb 21, 2019 at 3:46 PM Mohamad Gebai <[email protected]> wrote:

> On 2/21/19 1:22 PM, Fabio Abreu wrote:
>
> Hi Everybody,
>
> Is it recommended to mix different hardware types in the same rack?
>
> For example, I have a SATA rack with Apollo 4200 storage, and I will get
> another hardware type, HP 380 Gen10, to expand this rack.
>
> I ran a lot of tests to understand the performance: these new disks sit at
> 100% utilization in my environment, and cluster recovery is worse than on
> the other hardware.
>
> Can someone recommend a best practice or configuration for this scenario?
> I raise this because, if these disks do not perform as hoped, I will have
> to configure separate pools for my OpenStack, and that may not make sense
> for me, since I would have to split the Nova processes across the compute
> nodes if I have two pools.
>
>
> It's usually better to have homogeneous hardware across your cluster.
> Mixing hardware will make your requests subject to the "weakest link in
> the chain". For instance, write request latency will be bound by the
> latency of your slowest device. In practice there might be other issues as
> well that have been pointed out on this list before (feel free to search).
>
> Having separate pools on different kinds of hardware sounds like a good
> approach. Otherwise, depending on your workload, it might be worth thinking
> about tweaking the primary affinity of OSDs so that your fast OSDs are more
> likely to be primaries (reads are served from the primary OSD only).
> Depending on your new disks (throughput and size), maybe look at tweaking
> the weights. But that's just the beginning of a real hassle in terms of
> management.
>
> Mohamad
>
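
For the archives, this is roughly the kind of command Mohamad is referring to
for primary affinity and weights (the OSD id and the values below are only
examples; please check the docs for your release):

    # make a slow OSD less likely to be chosen as primary (range 0.0 - 1.0)
    ceph osd primary-affinity osd.12 0.25

    # adjust an OSD's CRUSH weight if the new disks differ in size/throughput
    ceph osd crush reweight osd.12 3.64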


-- 
Best regards,
Fabio Abreu Reis
http://fajlinux.com.br
Tel: +55 21 98244-0161
Skype: fabioabreureis
