Re: [ceph-users] Optane still valid

2019-02-04 Thread solarflow99
I think one limitation would be the 375GB capacity, since BlueStore needs
considerably more space for its DB/WAL than FileStore's journal did.
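
A quick back-of-envelope check, using only the figures mentioned in this
thread plus the often-quoted ~4% DB sizing guideline (an illustration, not
a recommendation):

    # Rough sizing check for the shared Optane DB/WAL device (per node).
    optane_capacity_gb = 375    # Intel DC P4800X
    db_wal_per_osd_gb = 30      # planned RocksDB + WAL per P4510
    osds_per_node = 10          # maximum number of P4510 per node

    planned_use_gb = osds_per_node * db_wal_per_osd_gb
    print(f"planned DB/WAL use: {planned_use_gb} GB of {optane_capacity_gb} GB")
    # -> 300 GB of 375 GB: it fits, but with little headroom

    # The commonly cited "~4% of the data device" guideline would ask for
    # far more space per 8 TB OSD:
    osd_size_gb = 8000
    print(f"4% guideline per OSD: {0.04 * osd_size_gb:.0f} GB")
    # -> ~320 GB per OSD, which one 375 GB Optane clearly cannot
    #    provide for 4-10 OSDs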

On Mon, Feb 4, 2019 at 10:20 AM Florian Engelmann <
florian.engelm...@everyware.ch> wrote:

> Hi,
>
> we have built a 6-node NVMe-only Ceph cluster with 4x Intel DC P4510 8TB
> and one Intel DC P4800X 375GB Optane per node. Up to 10x P4510 can be
> installed in each node.
> The WAL and RocksDB for every P4510 are meant to be stored on the Optane
> (approx. 30GB per RocksDB incl. WAL).
> Internally, discussions arose over whether the Optane would become a
> bottleneck beyond a certain number of P4510s per node.
> The lowest possible latency is very important to us, which is why the
> Optane NVMes were bought. Given the good performance of the P4510, the
> question is whether the Optanes still have a noticeable effect, or
> whether they are effectively just single points of failure (SPOFs).
>
>
> All the best,
> Florian
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Optane still valid

2019-02-04 Thread Florian Engelmann

Hi,

we have built a 6-node NVMe-only Ceph cluster with 4x Intel DC P4510 8TB
and one Intel DC P4800X 375GB Optane per node. Up to 10x P4510 can be
installed in each node.
The WAL and RocksDB for every P4510 are meant to be stored on the Optane
(approx. 30GB per RocksDB incl. WAL).
Internally, discussions arose over whether the Optane would become a
bottleneck beyond a certain number of P4510s per node.
The lowest possible latency is very important to us, which is why the
Optane NVMes were bought. Given the good performance of the P4510, the
question is whether the Optanes still have a noticeable effect, or
whether they are effectively just single points of failure (SPOFs).
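
Purely to illustrate how we have been reasoning about the bottleneck
question, here is a small sketch. All throughput figures are assumptions
for the sake of the calculation, not measurements from our cluster:

    # Rough estimate of WAL/DB write traffic hitting one shared Optane.
    # All numbers are assumed/illustrative; substitute measured values.
    optane_write_mb_s = 2000   # assumed sustained write throughput, P4800X
    p4510_write_mb_s = 3000    # assumed sustained write throughput per P4510
    wal_fraction = 0.5         # assumed share of client writes that also hit WAL/DB

    for osds in (4, 6, 8, 10):
        wal_load = osds * p4510_write_mb_s * wal_fraction
        verdict = "exceeds" if wal_load > optane_write_mb_s else "stays below"
        print(f"{osds:2d} OSDs: ~{wal_load:.0f} MB/s of WAL/DB traffic "
              f"{verdict} the Optane's ~{optane_write_mb_s} MB/s")

Of course this only looks at bandwidth; the reason we bought the Optanes
was latency, which an estimate like this does not capture.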



All the best,
Florian

