On 8/16/2018 5:40 PM, Hervé Ballans wrote:
Thanks Igor, indeed it does match!
# cat ceph-osd.0.log |grep wal
2018-08-16 11:55:27.182011 7fa47c106e00 4 rocksdb:
Options.max_total_wal_size: 0
Just an additional question: is it normal that in the OSD log I see
that /max_total_wal_size/ is set to 0?
I used the Ceph default values at this time:
# ceph-conf --show-config | grep wal
bluefs_preextend_wal_files = false
bluestore_block_wal_create = false
bluestore_block_wal_path =
bluestore_block_wal_size = 100663296
rocksdb_separate_wal_dir = false
Yeah, that's fine IMO.
max_total_wal_size is a rocksdb option, while the ones you listed are
ceph/bluestore ones.
You can find more about this specific rocksdb option here:
https://github.com/facebook/rocksdb/wiki/Speed-Up-DB-Open
One can adjust rocksdb settings using Ceph's bluestore_rocksdb_options
config parameter.
But I suppose this is the default setting, and I've never seen anybody
tune it.
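If one did want to tune it, a ceph.conf fragment would look roughly like the sketch below. Note the caveats: bluestore_rocksdb_options takes a single comma-separated key=value string that is passed to rocksdb and *replaces* Ceph's built-in default option string, so any defaults you want to keep must be repeated in it; the 64 MiB value here is purely illustrative, not a recommendation.

```ini
[osd]
# Hypothetical example: pass max_total_wal_size through to rocksdb.
# This string REPLACES the default bluestore_rocksdb_options value,
# so existing defaults (e.g. compression=kNoCompression) must be
# carried over explicitly. 67108864 = 64 MiB, illustrative only.
bluestore_rocksdb_options = "compression=kNoCompression,max_total_wal_size=67108864"
```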
And bluestore_block_wal_size is just ignored in your case.
Thanks,
Igor
Regards,
Hervé
Le 16/08/2018 à 16:05, Igor Fedotov a écrit :
Hi Hervé,
Actually, the absence of a block.wal symlink is a good enough symptom
that DB and WAL are merged.
But you can also inspect the OSD startup log, or check the bluefs perf
counters after some load: the corresponding WAL counters (total/used)
should be zero.
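As a small sketch of that second check: `ceph daemon osd.N perf dump` (run on the OSD host, against the admin socket) prints JSON, and the bluefs section can be inspected for the WAL counters. The counter names wal_total_bytes/wal_used_bytes are assumed from the Luminous bluefs counter set; the sample string below is an abridged, made-up dump for illustration.

```python
import json

def wal_is_colocated(perf_dump_json):
    """Given the JSON text printed by `ceph daemon osd.N perf dump`,
    return True when the bluefs WAL counters are zero, i.e. no
    separate WAL device is in use (the WAL lives with the DB).
    Counter names are assumed from the Luminous bluefs counter set."""
    bluefs = json.loads(perf_dump_json)["bluefs"]
    return bluefs["wal_total_bytes"] == 0 and bluefs["wal_used_bytes"] == 0

# Abridged, made-up sample of what an OSD with DB and WAL merged
# might report:
sample = ('{"bluefs": {"db_total_bytes": 1073741824,'
          ' "db_used_bytes": 50331648,'
          ' "wal_total_bytes": 0, "wal_used_bytes": 0}}')
print(wal_is_colocated(sample))  # True: WAL shares the DB partition
```

On a live OSD, the sample string would instead be the real output, e.g. obtained with subprocess.check_output(["ceph", "daemon", "osd.0", "perf", "dump"]).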
Thanks,
Igor
On 8/16/2018 4:55 PM, Hervé Ballans wrote:
Hi all,
I'm setting up my Ceph cluster (last release of Luminous) and I'm
currently configuring OSD with WAL and DB on NVMe disk.
OSD data are on a SATA disk, and both WAL and DB are on the same
partition of the NVMe disk.
After creating partitions on the NVMe (block partitions, without
filesystem), I created my first OSD from the admin node:
$ ceph-deploy osd create --debug --bluestore --data /dev/sda
--block-db /dev/nvme0n1p1 /node-osd0/
It works perfectly, but I just want to clarify a point regarding the
WAL: I understood that if we specify a --block-db option without
--block-wal, the WAL is stored on the same partition as the DB.
OK, I'm fairly sure it works like that, but how can I now check where
the WAL is really stored? (There is no block.wal symbolic link in
/var/lib/ceph/osd/ceph-0 [1].)
Is there a file somewhere, or a Ceph command, where I can check this?
I just wanted to be sure of my options before starting deployment on
my 120 OSDs!
Thanks for your clarifications,
Hervé
[1] # ls -l /var/lib/ceph/osd/ceph-0/
total 48
-rw-r--r-- 1 ceph ceph 465 Aug 16 14:36 activate.monmap
lrwxrwxrwx 1 ceph ceph 93 Aug 16 14:36 block ->
/dev/ceph-766bd78c-ed1a-4e27-8b4d-7adc4c4f2f0d/osd-block-98bfb597-009b-4e88-bc5e-dd22587d21fe
lrwxrwxrwx 1 ceph ceph 15 Aug 16 14:36 block.db -> /dev/nvme0n1p1
-rw-r--r-- 1 ceph ceph 2 Aug 16 14:36 bluefs
-rw-r--r-- 1 ceph ceph 37 Aug 16 14:36 ceph_fsid
-rw-r--r-- 1 ceph ceph 37 Aug 16 14:36 fsid
-rw------- 1 ceph ceph 55 Aug 16 14:36 keyring
-rw-r--r-- 1 ceph ceph 8 Aug 16 14:36 kv_backend
-rw-r--r-- 1 ceph ceph 21 Aug 16 14:36 magic
-rw-r--r-- 1 ceph ceph 4 Aug 16 14:36 mkfs_done
-rw-r--r-- 1 ceph ceph 41 Aug 16 14:36 osd_key
-rw-r--r-- 1 ceph ceph 6 Aug 16 14:36 ready
-rw-r--r-- 1 ceph ceph 10 Aug 16 14:36 type
-rw-r--r-- 1 ceph ceph 2 Aug 16 14:36 whoami
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com