On 09/21/2017 03:19 AM, Maged Mokhtar wrote:
On 2017-09-21 10:01, Dietmar Rieder wrote:
> Hi,
>
> I'm in the same situation (NVMEs, SSDs, SAS HDDs). I asked the same
> questions to myself.
> For now I decided to use the NVMEs as wal and db devices for the SAS
> HDDs and on the SSDs I colocate wal and db.
>
> However, I'm still wondering ...
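For reference, the split Dietmar describes (NVMe carrying block.db for the HDD OSDs, SSDs colocating everything) can be provisioned with ceph-volume. A minimal sketch; the device paths are placeholders, not devices from this thread:

```shell
# Hypothetical devices: /dev/sdb is a SAS HDD data device, /dev/nvme0n1p1
# is a partition on the shared NVMe reserved for this OSD's RocksDB.
# When no separate --block.wal is given, the WAL lives on the db device.
ceph-volume lvm create --bluestore \
    --data /dev/sdb \
    --block.db /dev/nvme0n1p1
```

For the SSD OSDs, simply omitting `--block.db` colocates WAL and DB on the data device.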
But for example, on the same server I have three disk technologies to deploy
pools: SSD, SAS, and SATA.
The NVMes were bought just with the journals for SATA and SAS in mind, since
the journals for the SSDs were colocated.
But now, in exactly the same scenario, should I trust the NVMe for the SSD
pool? Are there ...
Is there any guidance on the sizes for the WAL and DB devices when they
are separated onto an SSD/NVMe? I understand that there probably isn't a
one-size-fits-all number, but perhaps something as a function of
cluster/usage parameters like OSD size and usage pattern (amount of
writes, ...
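There is indeed no one-size-fits-all answer, but a rule of thumb that circulates in the Ceph community is to size block.db at a few percent of the data device (the WAL needs far less, on the order of a few GB). The 4% figure and the 4 TB disk below are illustrative assumptions, not numbers from this thread:

```shell
# Back-of-the-envelope block.db sizing, assuming the ~4% rule of thumb.
osd_size_gb=4000   # hypothetical 4 TB SAS HDD data device
db_pct=4           # assumed fraction of the data size reserved for block.db
db_size_gb=$(( osd_size_gb * db_pct / 100 ))
echo "block.db for a ${osd_size_gb} GB OSD: ${db_size_gb} GB"
```

Workloads heavy in small objects or RGW metadata push the DB larger; a mostly-RBD cluster can often get away with less.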
On 21 September 2017 at 04:53, Maximiliano Venesio wrote:
Hi guys, I'm reading different documents about bluestore, and none of them
recommends using NVRAM to store the bluefs db; nevertheless, the official
documentation says that it is better to put the block.db on the faster
device.
In my cluster I have NVRAM devices of 400GB, SSD disks for high ...