Hi Jesper & Lars,
Thanks for your answer.
c) is indeed the option with the devices being used for block.db
Adding an extra NVMe doesn't really seem necessary, since there is no space
issue... (On the contrary... the only advantage would be limiting the impact
of a defective NVMe disk.) The performance of the NVMe's
Hi Kristof,
may I add another choice?
I configured my SSDs this way.
Every host for OSDs has two fast and durable SSDs.
Both SSDs are in one RAID1 which then is split up into LVs.
I took 58GB for DB & WAL (plus some space for a special action by the DB
(compaction, I believe)) for each OSD.
Then there
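
A rough sketch of that layout, with made-up device names (/dev/sda and
/dev/sdb as the two SSDs, /dev/sdc as one data disk):

  # mirror the two SSDs, then carve one DB/WAL LV per OSD out of the array
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
  pvcreate /dev/md0
  vgcreate ceph-db /dev/md0
  lvcreate -L 58G -n db-0 ceph-db    # repeat per OSD
  ceph-volume lvm create --bluestore --data /dev/sdc --block.db ceph-db/db-0

With only --block.db given, BlueStore keeps the WAL inside the DB LV, which
matches the "58GB for DB & WAL" above.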
Is c) the bcache solution?
Real-life experience: unless you are really beating an enterprise SSD with
writes, they last very, very long, and even when a failure happens you can
typically see it coming in the SMART wear levels months before.
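
For what it's worth, the wear level can be read with smartctl or nvme-cli;
the exact attribute names vary per vendor:

  smartctl -a /dev/sda | grep -i wear                 # SATA/SAS SSDs
  nvme smart-log /dev/nvme0 | grep percentage_used    # NVMe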
I would go for c), but if possible add one more NVMe to
Hi all,
Thanks for the feedback.
Though, just to be sure:
1. If I understand correctly, there is no hard 30GB limit for the RocksDB
size. If the metadata crosses that barrier, will the L4 part spill over to
the primary device? Or will it move the RocksDB completely? Or will it just
stop and indicate
Use 30 GB for all OSDs. Other values are pointless, because
https://yourcmc.ru/wiki/Ceph_performance#About_block.db_sizing
You can use the rest of free NVMe space for bcache - it's much better
than just allocating it for block.db.
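
A minimal bcache setup along those lines, with assumed partitioning (a 30GB
block.db partition nvme0n1p1 plus the remainder nvme0n1p2 on the NVMe):

  # the rest of the NVMe becomes a writeback cache for a spinning disk
  make-bcache -C /dev/nvme0n1p2 -B /dev/sdc
  echo writeback > /sys/block/bcache0/bcache/cache_mode
  # the OSD is then created on the bcache device
  ceph-volume lvm create --bluestore --data /dev/bcache0 --block.db /dev/nvme0n1p1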
On Fri, Nov 15, 2019 at 4:02 PM Wido den Hollander wrote:
>
> I normally use LVM on top
> of each device and create 2 LVs per OSD:
>
> - WAL: 1GB
> - DB: xx GB
Why? I've seen this a few times and I can't figure out what the
advantage of doing this explicitly on the LVM level instead of relying
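
Spelled out as commands, that scheme would look roughly like this (VG and
device names assumed, 30 GB standing in for the "xx"):

  pvcreate /dev/nvme0n1
  vgcreate ceph-nvme0 /dev/nvme0n1
  lvcreate -L 1G -n wal-0 ceph-nvme0
  lvcreate -L 30G -n db-0 ceph-nvme0
  ceph-volume lvm create --bluestore --data /dev/sdc \
      --block.db ceph-nvme0/db-0 --block.wal ceph-nvme0/wal-0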
On Fri, Nov 15, 2019 at 4:04 PM Kristof Coucke wrote:
>
> Hi Paul,
>
> Thank you for the answer.
> I hadn't thought of that approach... (Using the NVMe for the metadata pool
> of RGW).
>
> From where do you get the limitation of 1.3TB?
13 OSDs/Server * 10 Servers * 30 GB/OSD usable DB space / 3 (assuming 3x
replication) = ~1.3 TB
Hi Paul,
Thank you for the answer.
I hadn't thought of that approach... (Using the NVMe for the metadata pool
of RGW).
From where do you get the limitation of 1.3TB?
I don't get that one...
Br,
Kristof
On Fri, Nov 15, 2019 at 15:26 Paul Emmerich wrote:
> On Fri, Nov 15, 2019 at 3:16 PM
On 11/15/19 3:19 PM, Kristof Coucke wrote:
> Hi all,
>
>
>
> We’ve configured a Ceph cluster with 10 nodes, each having 13 large
> disks (14TB) and 2 NVMe disks (1.6TB).
>
> The idea was to use the NVMe as “fast device”…
>
> The recommendations I’ve read in the online documentation state
On Fri, Nov 15, 2019 at 3:16 PM Kristof Coucke wrote:
> We’ve configured a Ceph cluster with 10 nodes, each having 13 large disks
> (14TB) and 2 NVMe disks (1.6TB).
> The recommendations I’ve read in the online documentation state that the db
> block device should be around 4%~5% of the slow device.
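
Worked out for this hardware, that recommendation would come to roughly:

  4% * 14 TB = ~560 GB of DB space per OSD
  13 OSDs * 560 GB = ~7.3 TB per node, versus 2 * 1.6 TB = 3.2 TB of NVMe

which is presumably why the official sizing can't be met here.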