Re: [ceph-users] WAL/DB size

2019-08-16 Thread Anthony D'Atri
Thanks, interesting reading. Distilling the discussion there, below are my takeaways. Am I interpreting correctly? 1) The spillover phenomenon, and thus the small number of discrete sizes that are effective without being wasteful, is recognized 2) "I don't think we should plan the block.db

Re: [ceph-users] WAL/DB size

2019-08-16 Thread Paul Emmerich
Btw, the original discussion leading to the 4% recommendation is here: https://github.com/ceph/ceph/pull/23210 -- Paul Emmerich Looking for help with your Ceph cluster? Contact us at https://croit.io croit GmbH Freseniusstr. 31h 81247 München www.croit.io Tel: +49 89 1896585 90 On Thu, Aug
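
To put the 4% figure in concrete terms, here is a minimal Python sketch of that guideline; the helper name and the example drive sizes are mine, not taken from the PR.

# Rough sketch of the 4% block.db guideline referenced above.
# The function name and example sizes are illustrative only.

TB = 1000 ** 4
GB = 1000 ** 3

def suggested_db_bytes(data_bytes: int, ratio: float = 0.04) -> int:
    """Suggest a block.db size as a fraction (default 4%) of the data device."""
    return int(data_bytes * ratio)

for size_tb in (4, 6, 12):
    db = suggested_db_bytes(size_tb * TB)
    print(f"{size_tb} TB data device -> ~{db / GB:.0f} GB block.db at 4%")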

Re: [ceph-users] WAL/DB size

2019-08-15 Thread Виталий Филиппов
30GB already includes the WAL, see http://yourcmc.ru/wiki/Ceph_performance#About_block.db_sizing On 15 August 2019 at 1:15:58 GMT+03:00, Anthony D'Atri wrote: >Good points in both posts, but I think there’s still some unclarity. > >Absolutely let’s talk about DB and WAL together. By “bluestore goes

Re: [ceph-users] WAL/DB size

2019-08-15 Thread Mark Nelson
Hi Folks, The basic idea behind the WAL is that for every DB write transaction you first write it into an in-memory buffer and to a region on disk. RocksDB is typically set up to have multiple WAL buffers, and when one or more fills up, it will start flushing the data to L0 while new writes
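
A quick back-of-the-envelope sketch of what that buffering implies for WAL space, assuming the commonly cited bluestore RocksDB defaults of a 256 MiB write buffer and up to 4 buffers; those values are an assumption here, not numbers read from a live cluster, so check them against your release.

# WAL footprint estimate from the buffer scheme described above.
# write_buffer_size = 256 MiB and max_write_buffer_number = 4 are assumed
# defaults for illustration only.

MiB = 1024 ** 2

write_buffer_size = 256 * MiB    # one in-memory WAL buffer
max_write_buffer_number = 4      # buffers allowed before writes stall

wal_bytes = write_buffer_size * max_write_buffer_number
print(f"Approximate WAL footprint: {wal_bytes / MiB:.0f} MiB (~1 GiB)")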

Re: [ceph-users] WAL/DB size

2019-08-15 Thread Janne Johansson
On Thu, 15 Aug 2019 at 00:16, Anthony D'Atri wrote: > Good points in both posts, but I think there’s still some unclarity. > ... > We’ve seen good explanations on the list of why only specific DB sizes, > say 30GB, are actually used _for the DB_. > If the WAL goes along with the DB,

Re: [ceph-users] WAL/DB size

2019-08-14 Thread Anthony D'Atri
Good points in both posts, but I think there’s still some unclarity. Absolutely let’s talk about DB and WAL together. By “bluestore goes on flash” I assume you mean WAL+DB? “Simply allocate DB and WAL will appear there automatically” Forgive me please if this is obvious, but I’d like to see a

Re: [ceph-users] WAL/DB size

2019-08-14 Thread Mark Nelson
On 8/14/19 1:06 PM, solarflow99 wrote: Actually a standalone WAL is required when you have either a very small fast device (and don't want the DB to use it) or three devices of different performance behind an OSD (e.g. HDD, SSD, NVMe). In that case the WAL should be located on the fastest one.

Re: [ceph-users] WAL/DB size

2019-08-14 Thread solarflow99
> Actually a standalone WAL is required when you have either a very small fast > device (and don't want the DB to use it) or three devices of different > performance behind an OSD (e.g. HDD, SSD, NVMe). So the WAL should be located > on the fastest one. > > For the given use case you just have HDD and NVMe and

Re: [ceph-users] WAL/DB size

2019-08-14 Thread Igor Fedotov
Hi Wido & Hemant. On 8/14/2019 11:36 AM, Wido den Hollander wrote: On 8/14/19 9:33 AM, Hemant Sonawane wrote: Hello guys, Thank you so much for your responses, really appreciate it. But I would like to mention one more thing which I forgot in my last email: I am going to use this

Re: [ceph-users] WAL/DB size

2019-08-14 Thread Burkhard Linke
Hi, please keep in mind that due to the rocksdb level concept, only certain db partition sizes are useful. Larger partitions are a waste of capacity, since rocksdb will only use whole level sizes. There has been a lot of discussion about this on the mailing list in the last months. A plain
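
For illustration, a small sketch of that level argument, assuming the often-quoted RocksDB defaults of a 256 MiB level base and a 10x multiplier (both assumptions, not values taken from this thread): the partition only becomes more useful once it can hold a whole additional level, which is where figures of roughly 3, 30 and 300 GB come from.

# Cumulative RocksDB level sizes under assumed defaults: the DB partition
# only pays off at sizes that fit a whole extra level.

MiB = 1024 ** 2
GiB = 1024 ** 3

base = 256 * MiB        # assumed max_bytes_for_level_base (size of L1)
multiplier = 10         # assumed max_bytes_for_level_multiplier

cumulative = 0
for level in range(1, 5):
    cumulative += base * multiplier ** (level - 1)
    print(f"L1..L{level}: ~{cumulative / GiB:.1f} GiB must fit on the DB device")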

Re: [ceph-users] WAL/DB size

2019-08-14 Thread Wido den Hollander
On 8/14/19 9:33 AM, Hemant Sonawane wrote: > Hello guys, > > Thank you so much for your responses, really appreciate it. But I would > like to mention one more thing which I forgot in my last email: I > am going to use this storage for OpenStack VMs. So will the answer > still be the

Re: [ceph-users] WAL/DB size

2019-08-14 Thread Hemant Sonawane
Hello guys, Thank you so much for your responses, really appreciate it. But I would like to mention one more thing which I forgot in my last email: I am going to use this storage for OpenStack VMs. So is the answer still the same, that I should use 1GB for the WAL? On Wed, 14 Aug 2019 at

Re: [ceph-users] WAL/DB size

2019-08-13 Thread Mark Nelson
On 8/13/19 3:51 PM, Paul Emmerich wrote: On Tue, Aug 13, 2019 at 10:04 PM Wido den Hollander wrote: I just checked an RGW-only setup. 6TB drive, 58% full, 11.2GB of DB in use. No slow DB in use. A random RGW-only setup here: 12TB drive, 77% full, 48GB metadata and 10GB omap for index and
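
Turning those two reported setups into ratios (just arithmetic on the numbers quoted above, nothing else is implied):

# DB usage relative to stored data for the two RGW examples above.

TB = 1000 ** 4
GB = 1000 ** 3

samples = [
    # (label, bytes stored, DB bytes in use)
    ("6 TB drive, 58% full",  6 * TB * 0.58, 11.2 * GB),
    ("12 TB drive, 77% full", 12 * TB * 0.77, (48 + 10) * GB),  # metadata + omap
]

for label, stored, db in samples:
    print(f"{label}: DB is {db / stored * 100:.2f}% of stored data")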

Re: [ceph-users] WAL/DB size

2019-08-13 Thread Paul Emmerich

Re: [ceph-users] WAL/DB size

2019-08-13 Thread Wido den Hollander

Re: [ceph-users] WAL/DB size

2019-08-13 Thread DHilsbos

Re: [ceph-users] WAL/DB size

2019-08-13 Thread Wido den Hollander
On 8/13/19 5:54 PM, Hemant Sonawane wrote: > Hi All, > I have 4 x 6TB HDDs and 2 x 450GB SSDs, and I am going to partition each > disk to 220GB for the RocksDB block.db. So my question is: does it make sense to use > a WAL for my configuration? If yes, then what could its size be? Help > will be really

[ceph-users] WAL/DB size

2019-08-13 Thread Hemant Sonawane
Hi All, I have 4 x 6TB HDDs and 2 x 450GB SSDs, and I am going to partition each disk to 220GB for the RocksDB block.db. So my question is: does it make sense to use a WAL for my configuration? If yes, then what could its size be? Help will be really appreciated. -- Thanks and Regards, Hemant Sonawane
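
As a worked check of this layout (purely illustrative; the 4% guideline comes from the docs discussion elsewhere in the thread, and the per-OSD split assumes two OSDs share each SSD):

# Sanity check of the proposed layout: 4 x 6 TB HDD OSDs, 2 x 450 GB SSDs.
# Assumes each SSD is shared by two OSDs; the 4% figure is the guideline
# discussed elsewhere in the thread, not a hard rule.

TB = 1000 ** 4
GB = 1000 ** 3

hdd = 6 * TB
ssd = 450 * GB
osds_per_ssd = 2                 # 4 OSDs spread over 2 SSDs

per_osd_db = ssd / osds_per_ssd  # SSD capacity available per OSD
four_pct = hdd * 0.04            # 4% guideline applied to one HDD

print(f"Per-OSD share of an SSD: ~{per_osd_db / GB:.0f} GB")
print(f"4% of a 6 TB HDD:        ~{four_pct / GB:.0f} GB")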

Re: [ceph-users] WAL/DB size

2018-09-07 Thread Alfredo Deza
On Fri, Sep 7, 2018 at 3:31 PM, Maged Mokhtar wrote: > On 2018-09-07 14:36, Alfredo Deza wrote: > On Fri, Sep 7, 2018 at 8:27 AM, Muhammad Junaid > wrote: > Hi there > Asking these questions as a newbie. They may have been asked a number of times before by > many, but sorry, it is not yet clear to me. >

Re: [ceph-users] WAL/DB size

2018-09-07 Thread Maged Mokhtar
On 2018-09-07 14:36, Alfredo Deza wrote: > On Fri, Sep 7, 2018 at 8:27 AM, Muhammad Junaid > wrote: > >> Hi there >> >> Asking these questions as a newbie. They may have been asked a number of times before by >> many, but sorry, it is not yet clear to me. >> >> 1. The WAL device is just like the journaling

Re: [ceph-users] WAL/DB size

2018-09-07 Thread Brett Chancellor
I saw above that the recommended size for the DB partition was 5% of data, yet the recommendation is 40GB partitions for 4TB drives. Isn't that closer to 1%? On Fri, Sep 7, 2018 at 10:06 AM, Muhammad Junaid wrote: > Thanks very much, it is much clearer now. Because we are just in the > planning
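
The arithmetic behind that question, spelled out (using only the numbers already mentioned):

# 40 GB on a 4 TB drive versus a 5%-of-data rule.

TB = 1000 ** 4
GB = 1000 ** 3

drive = 4 * TB
fixed_db = 40 * GB

print(f"40 GB / 4 TB        = {fixed_db / drive * 100:.0f}%")
print(f"5% of a 4 TB drive  = {0.05 * drive / GB:.0f} GB")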

Re: [ceph-users] WAL/DB size

2018-09-07 Thread Muhammad Junaid
Thanks very much, it is much clearer now. Because we are just in the planning stage right now, would you tell me: if we use 7200rpm SAS 3-4TB drives for the OSDs, will the write speed be fine with this new scenario? Because it will apparently be writing to the slower disks before the actual confirmation. (I understand there

Re: [ceph-users] WAL/DB size

2018-09-07 Thread Richard Hesketh
It can get confusing. There will always be a WAL, and there will always be a metadata DB, for a bluestore OSD. However, if a separate device is not specified for the WAL, it is kept in the same device/partition as the DB; in the same way, if a separate device is not specified for the DB, it is
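
A tiny sketch of that fallback chain; the function and device names are made up for illustration and this is not ceph-volume code.

# Placement fallback described above: a WAL and a DB always exist, and each
# simply lives on the next device down the chain when no dedicated device
# is given. Names here are illustrative only.

from typing import Optional

def placement(data: str, db: Optional[str] = None, wal: Optional[str] = None) -> dict:
    db_device = db or data          # no block.db device -> DB stays on the data device
    wal_device = wal or db_device   # no block.wal device -> WAL stays with the DB
    return {"data": data, "block.db": db_device, "block.wal": wal_device}

print(placement("/dev/sdb"))                                  # everything colocated
print(placement("/dev/sdb", db="/dev/nvme0n1p1"))             # WAL follows the DB
print(placement("/dev/sdb", db="/dev/sdc1", wal="/dev/nvme0n1p1"))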

Re: [ceph-users] WAL/DB size

2018-09-07 Thread Muhammad Junaid
Thanks again, but sorry again too. I couldn't understand the following. 1. As per the docs, block.db is used only for bluestore (file system metadata info etc.). It has nothing to do with the actual data (for journaling) which will ultimately be written to the slower disks. 2. How will the actual journaling

Re: [ceph-users] WAL/DB size

2018-09-07 Thread Alfredo Deza
On Fri, Sep 7, 2018 at 9:02 AM, Muhammad Junaid wrote: > Thanks Alfredo. Just to be clear, my configuration has 5 OSDs (7200 rpm > SAS HDDs) which are slower than the 200G SSD. That's why I asked for a 10G > WAL partition for each OSD on the SSD. > > Are you asking us to do 40GB * 5 partitions

Re: [ceph-users] WAL/DB size

2018-09-07 Thread Eugen Block
Hi, Are you asking us to do 40GB * 5 partitions on SSD just for block.db? Yes. By default Ceph deploys block.db and the WAL on the same device if no separate WAL device is specified. Regards, Eugen Quoting Muhammad Junaid: Thanks Alfredo. Just to be clear, my configuration has 5

Re: [ceph-users] WAL/DB size

2018-09-07 Thread Muhammad Junaid
Thanks Alfredo. Just to be clear, my configuration has 5 OSDs (7200 rpm SAS HDDs) which are slower than the 200G SSD. That's why I asked for a 10G WAL partition for each OSD on the SSD. Are you asking us to do 40GB * 5 partitions on SSD just for block.db? On Fri, Sep 7, 2018 at 5:36 PM Alfredo

Re: [ceph-users] WAL/DB size

2018-09-07 Thread Alfredo Deza
On Fri, Sep 7, 2018 at 8:27 AM, Muhammad Junaid wrote: > Hi there > > Asking these questions as a newbie. They may have been asked a number of times before by > many, but sorry, it is not yet clear to me. > > 1. The WAL device is just like the journaling device used before bluestore, and > Ceph confirms the write to

[ceph-users] WAL/DB size

2018-09-07 Thread Muhammad Junaid
Hi there. Asking these questions as a newbie; they may have been asked a number of times before, but sorry, it is not yet clear to me. 1. The WAL device is just like the journaling device used before bluestore, and Ceph confirms the write to the client after writing to it (before the actual write to the primary device)?