Thanks a lot, Boris.

Do you mean that the best practice would be to create a DB partition on the 
SSD for the OSD, and disable the WAL path by setting 
bluestore_prefer_deferred_size = 0 and bluestore_prefer_deferred_size_ssd = 0?
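
i.e., something like the following (a sketch only; as far as I understand, 
these options just set the size threshold below which writes take the 
deferred, WAL-first path, so 0 stops deferral rather than removing the WAL):

    # at runtime, via the config database:
    ceph config set osd bluestore_prefer_deferred_size 0
    ceph config set osd bluestore_prefer_deferred_size_ssd 0

    # or persistently in ceph.conf:
    [osd]
    bluestore_prefer_deferred_size = 0
    bluestore_prefer_deferred_size_ssd = 0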

Or is there no need to create a DB partition on the SSD at all, and should the 
OSD manage everything, data and metadata alike, on the one device?
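
i.e., just letting ceph-volume collocate everything, something like 
(/dev/sdb is a placeholder for the SSD):

    # data, RocksDB and WAL all on the same SSD:
    ceph-volume lvm create --data /dev/sdb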

I do not know which is the best strategy in terms of performance...

Samuel



[email protected]
 
From: Boris Behrens
Date: 2022-04-25 10:26
To: [email protected]
CC: ceph-users
Subject: Re: [ceph-users] How I disable DB and WAL for an OSD for improving 8K 
performance
Hi Samuel,

IIRC, at least the DB (I am not sure whether flash drives use the 1 GB WAL) is 
always located on the same device as the OSD when it is not configured 
elsewhere. On SSDs/NVMes, people tend not to separate the DB/WAL onto other 
devices.
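
If you want to check what a given OSD is actually doing, its metadata should 
show it; something like this (osd id 0 is a placeholder, and the exact key 
names may vary by release):

    # 1 means a dedicated device is in use, 0 means collocated:
    ceph osd metadata 0 | grep -E 'bluefs_dedicated_(db|wal)'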

Cheers
 Boris

Am Mo., 25. Apr. 2022 um 10:09 Uhr schrieb [email protected] 
<[email protected]>:
Dear Ceph folks,

When setting up an all-flash Ceph cluster with 8 nodes, I am wondering whether 
I should disable (or turn off) the DB and WAL for SSD-based OSDs for better 8K 
IO performance.
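
For concreteness, by 8K IO performance I mean the kind of small-block workload 
one might measure with rados bench, roughly like this (pool name is a 
placeholder):

    # 60 seconds of 8 KiB writes with 16 concurrent ops:
    rados bench -p testpool 60 write -b 8192 -t 16
    # remove the benchmark objects afterwards:
    rados -p testpool cleanup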

Normally, for HDD OSDs, I used to create 30 GB+ partitions on separate SSDs as 
their DB/WAL. For (enterprise-level) SSD-based OSDs, one way is to create a 
partition on every SSD OSD for the DB/WAL, and then use the rest as the data 
partition of the OSD. However, I am wondering whether such a setup would 
improve or degrade performance. Since the WAL is pure write buffering, it 
could cause double writes to the same SSD and thus hurt performance...
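
For reference, the two layouts I am comparing would be created roughly like 
this with ceph-volume (all device paths are placeholders):

    # HDD OSD with its DB/WAL on a separate SSD partition:
    ceph-volume lvm create --data /dev/sdb --block.db /dev/nvme0n1p1

    # SSD OSD with a DB/WAL partition carved from the same SSD:
    ceph-volume lvm create --data /dev/sda2 --block.db /dev/sda1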

Any comments or suggestions are highly appreciated.

Samuel



[email protected]


-- 
This time, as an exception, the "UTF-8-Probleme" self-help group will meet in 
the big hall.
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
