Re: [ceph-users] Ceph Luminous RocksDB vs WalDB?

2018-06-28 Thread Igor Fedotov
The idea is to avoid a separate WAL partition - it doesn't make sense for a single NVMe device and just complicates things. And if you don't specify the WAL explicitly, it co-exists with the DB. Hence I vote for the second option :)

On 6/29/2018 12:07 AM, Kai Wagner wrote:
> I'm also not 100% sure but I think that the first one is the right way
> to go. …
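For reference, the "second option" is the ceph-deploy invocation without --block-wal from Pardhiv's message further down the thread; a minimal sketch, reusing his device paths and hostname (assumptions from that message, not recommendations):

  # DB partition only; with no --block-wal, BlueStore keeps the WAL
  # inside the DB volume on /dev/nvme0n1p2
  ceph-deploy osd create --debug --bluestore --data /dev/sdb \
      --block-db /dev/nvme0n1p2 cephdatahost1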

Re: [ceph-users] Ceph Luminous RocksDB vs WalDB?

2018-06-28 Thread Kai Wagner
On 28.06.2018 23:25, Eric Jackson wrote:
> Recently, I learned that this is not necessary when both are on the same
> device. The wal for the Bluestore OSD will use the db device when set to 0.

That's good to know. Thanks for the input on this, Eric.

--
SUSE Linux GmbH, GF: Felix Imendörffer, …
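The behavior Eric describes can be spelled out in ceph.conf; a minimal sketch, assuming the bluestore_block_wal_size option (per his note, a size of 0 with WAL and DB on the same device means no separate WAL is carved out and the WAL uses the DB device):

  [osd]
  # 0 = no dedicated WAL; the write-ahead log lives on the DB volume
  bluestore_block_wal_size = 0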

Re: [ceph-users] Ceph Luminous RocksDB vs WalDB?

2018-06-28 Thread Eric Jackson
I'm going to hope that Igor is correct since I have a PR for DeepSea to change this exact behavior. With respect to ceph-deploy, if you specify --block-wal, your OSD will have a block.wal symlink. Likewise, --block-db will give you a block.db symlink. If you have both on the command line, you …
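A quick way to check which layout an OSD actually got is to look at those symlinks in its data directory; a sketch, assuming OSD id 0 and the default cluster name:

  ls -l /var/lib/ceph/osd/ceph-0/
  # block     -> the data device (/dev/sdb in this thread)
  # block.db  -> present only if --block-db was given
  # block.wal -> present only if --block-wal was given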

Re: [ceph-users] Ceph Luminous RocksDB vs WalDB?

2018-06-28 Thread Kai Wagner
I'm also not 100% sure but I think that the first one is the right way to go. The second command only specifies the db partition but no dedicated WAL partition. The first one should do the trick.

On 28.06.2018 22:58, Igor Fedotov wrote:
>
> I think the second variant is what you need. But I'm not the guru in
> ceph-deploy so there might be some nuances there... …

Re: [ceph-users] Ceph Luminous RocksDB vs WalDB?

2018-06-28 Thread Igor Fedotov
I think the second variant is what you need. But I'm not the guru in ceph-deploy, so there might be some nuances there... Anyway, the general idea is to have just a single NVMe partition (for both WAL and DB) per OSD.

Thanks,
Igor

On 6/27/2018 11:28 PM, Pardhiv Karri wrote:
> Thank you Igor for the response. …
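Carving one such partition per OSD out of the shared NVMe could look like the sketch below, using sgdisk; the 30G size and device name follow Pardhiv's plan and are assumptions, not sizing advice:

  # five 30G DB partitions on the shared NVMe, one per OSD
  for i in 1 2 3 4 5; do
      sgdisk --new=${i}:0:+30G --change-name=${i}:"ceph block.db" /dev/nvme0n1
  done

Each OSD is then created with its own partition via --block-db (p1 for the first OSD, p2 for the second, and so on).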

Re: [ceph-users] Ceph Luminous RocksDB vs WalDB?

2018-06-27 Thread Pardhiv Karri
Thank you Igor for the response. So do I need to use this:

  ceph-deploy osd create --debug --bluestore --data /dev/sdb --block-wal /dev/nvme0n1p1 --block-db /dev/nvme0n1p2 cephdatahost1

or:

  ceph-deploy osd create --debug --bluestore --data /dev/sdb --block-db /dev/nvme0n1p2 cephdatahost1

where /…

Re: [ceph-users] Ceph Luminous RocksDB vs WalDB?

2018-06-27 Thread Igor Fedotov
Hi Pardhiv,

there is no WalDB in Ceph. It's the WAL (Write-Ahead Log), which is a way to ensure write safety in RocksDB. In other words, it's just a RocksDB subsystem, though one which can use a separate volume. In general, for BlueStore/BlueFS one can either allocate separate volumes for WAL and DB or …
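Once an OSD is running, the WAL/DB split described here can be observed through the BlueFS performance counters; a sketch, assuming osd.0 and admin-socket access on the OSD host:

  ceph daemon osd.0 perf dump bluefs
  # db_used_bytes / db_total_bytes   - RocksDB data on the DB volume
  # wal_used_bytes / wal_total_bytes - WAL usage; the wal totals stay at 0
  #                                    when the WAL shares the DB volume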

Re: [ceph-users] Ceph Luminous RocksDB vs WalDB?

2018-06-26 Thread Konstantin Shalygin
I am playing with Ceph Luminous and getting confusing information around the usage of WalDB vs RocksDB. I have a 2TB NVMe drive which I want to use for the WAL/RocksDB and have five 2TB SSDs for OSDs. I am planning to create five 30GB partitions for RocksDB on the NVMe drive; do I need to create partitions for WalDB …

[ceph-users] Ceph Luminous RocksDB vs WalDB?

2018-06-26 Thread Pardhiv Karri
Hi, I am playing with Ceph Luminous and getting confusing information around the usage of WalDB vs RocksDB. I have a 2TB NVMe drive which I want to use for the WAL/RocksDB and have five 2TB SSDs for OSDs. I am planning to create five 30GB partitions for RocksDB on the NVMe drive; do I need to create partitions for WalDB …