On Wed, 5 Feb 2020 at 17:27, Vladimir Prokofev wrote:
> Thank you for the insight.
> > If you're using the default options for rocksdb, then the size of L3 will
> be 25GB
> Where does this number come from? Any documentation I can read?
> I want to have a better understanding of how DB size is
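For context (an editor's sketch, not taken from the thread): the ~25GB figure for L3 follows from RocksDB's default leveled-compaction sizing, where L1 targets max_bytes_for_level_base = 256 MiB and each subsequent level is max_bytes_for_level_multiplier = 10 times larger. A minimal calculation under those assumed defaults:

```python
# Sketch: why L3 ends up around 25 GiB with RocksDB's default
# leveled-compaction settings (max_bytes_for_level_base = 256 MiB,
# max_bytes_for_level_multiplier = 10). These are RocksDB defaults;
# verify against your deployment's bluestore_rocksdb_options.
BASE = 256 * 1024**2   # target size of L1, in bytes
MULTIPLIER = 10

def level_target(level, base=BASE, multiplier=MULTIPLIER):
    """Target size in bytes of level N (N >= 1)."""
    return base * multiplier ** (level - 1)

for n in range(1, 4):
    print(f"L{n}: {level_target(n) / 1024**3:.2f} GiB")
# L3 = 256 MiB * 10^2 = 25600 MiB = 25 GiB, i.e. the ~25GB quoted above.
```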
On 2/5/20 2:21 PM, Vladimir Prokofev wrote:
> Cluster upgraded from 12.2.12 to 14.2.5. All went smooth, except BlueFS
> spillover warning.
> We create OSDs with ceph-deploy, command goes like this:
> ceph-deploy osd create --bluestore --data /dev/sdf --block-db /dev/sdb5
> --block-wal /dev/sdb6
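Not part of the original thread, but for readers diagnosing the same warning: the `bluefs` section of an OSD's `ceph daemon osd.N perf dump` output reports how many BlueFS bytes landed on the slow (main) device. A hedged sketch below parses a hypothetical excerpt of such a dump; the counter names (`db_used_bytes`, `slow_used_bytes`) are as seen in Nautilus-era releases, so verify them against your version:

```python
import json

# Hypothetical excerpt of `ceph daemon osd.N perf dump` output -- a real
# dump contains many more sections, and the exact counter names should be
# checked against your Ceph release.
sample = json.loads("""
{
  "bluefs": {
    "db_total_bytes": 32212254720,
    "db_used_bytes": 29000000000,
    "slow_used_bytes": 1500000000
  }
}
""")

def spillover_bytes(perf_dump):
    """BlueFS bytes that spilled over onto the slow (main) device."""
    return perf_dump["bluefs"]["slow_used_bytes"]

if spillover_bytes(sample) > 0:
    print(f"BlueFS spillover: {spillover_bytes(sample)} bytes on slow device")
```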
> want to host up to 3 levels of rocksdb in the SSD.
>
> Thanks,
> Orlando
>
> -Original Message-
> From: Igor Fedotov
> Sent: Wednesday, February 5, 2020 7:04 AM
> To: Vladimir Prokofev ; ceph-users@ceph.io
> Subject: [ceph-users] Re: Fwd: BlueFS spillover yet again
Hi Vladimir,
there were plenty of discussions/recommendations around db volume size
selection here.
In short, it's advised to have a DB volume of 30-64GB for most use cases.
Thanks,
Igor
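A worked check of that recommendation (an editor's sketch using RocksDB's default level sizing, not figures stated in the thread): with a 256 MiB L1 target and a 10x per-level multiplier, levels L1 through L3 sum to roughly 28 GiB, so a ~30GB DB volume is about the smallest that keeps three levels off the slow device, while also fitting L4 would need roughly ten times more space:

```python
# Assumed RocksDB defaults: 256 MiB target for L1, 10x growth per level.
GiB = 1024**3
BASE = 256 * 1024**2
levels = [BASE * 10 ** (n - 1) for n in range(1, 5)]  # L1..L4 targets

cumulative = 0
for n, size in enumerate(levels, start=1):
    cumulative += size
    print(f"through L{n}: {cumulative / GiB:.2f} GiB")
# Through L3 the total is ~27.75 GiB, hence ~30GB comfortably holds
# three levels; L4 alone targets 250 GiB, so sizes between ~30GB and
# several hundred GB mostly buy compaction/WAL headroom.
```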
On 2/5/2020 4:21 PM, Vladimir Prokofev wrote:
Cluster upgraded from 12.2.12 to 14.2.5. All went smooth, except BlueFS
spillover warning.