Re: [ceph-users] ceph-deploy, osd_journal_size and entire disk partition for journal

2017-06-12 Thread David Turner
Then you want separate partitions for each OSD journal. If you have 4 HDD OSDs using this SSD as their journal, you should have 4x 5GB partitions on the SSD. On Mon, Jun 12, 2017 at 12:07 PM Deepak Naidu wrote: > Thanks for the note, yes I know them all. It will be shared
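A minimal sketch of how that layout could be created, assuming the journal SSD is /dev/sdf, the data disk is /dev/sdb, and the host is named node1 (all device names and sizes here are illustrative, not taken from the thread):

    # Create four 5 GB journal partitions on the SSD, tagged with the
    # Ceph journal GPT type code so ceph-disk/udev recognize them.
    for i in 1 2 3 4; do
      sudo sgdisk --new=${i}:0:+5G \
                  --typecode=${i}:45b0969e-9b03-4f30-b4c6-b4b80ceff106 \
                  --change-name=${i}:"ceph journal" /dev/sdf
    done

    # Old-style ceph-deploy syntax: host:data-disk:journal-partition
    ceph-deploy osd create node1:/dev/sdb:/dev/sdf1

With one partition per OSD, each filestore journal gets its own slice of the SSD rather than all OSDs pointing at a single oversized partition.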

Re: [ceph-users] ceph-deploy, osd_journal_size and entire disk partition for journal

2017-06-12 Thread Deepak Naidu
Thanks for the note, yes I know them all. It will be shared among 3-4 HDD OSDs. -- Deepak On Jun 12, 2017, at 7:07 AM, David Turner wrote: Why do you want a 70GB journal? You linked to the documentation, so I'm assuming

Re: [ceph-users] ceph-deploy, osd_journal_size and entire disk partition for journal

2017-06-12 Thread David Turner
Why do you want a 70GB journal? You linked to the documentation, so I'm assuming that you followed the formula stated to figure out how big your journal should be... "osd journal size = {2 * (expected throughput * filestore max sync interval)}". I've never heard of a cluster that requires such a
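To put numbers on that formula (illustrative figures, not from the thread): filestore max sync interval defaults to 5 seconds, and a single HDD behind the journal sustains on the order of 100 MB/s, so

    osd journal size = 2 * (100 MB/s * 5 s) = 1000 MB, i.e. about 1 GB

Even the 5GB default leaves headroom. A 70GB journal would only be justified by roughly 7 GB/s of sustained throughput per OSD at the default sync interval, far beyond what one spinning disk can feed.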

[ceph-users] ceph-deploy, osd_journal_size and entire disk partition for journal

2017-06-11 Thread Deepak Naidu
Hello folks, I am trying to use an entire SSD partition as the journal disk, e.g. the /dev/sdf1 partition (70GB). But when I look up the OSD config using the command below, I see that ceph-deploy sets journal_size as 5GB. More confusingly, I see the OSD logs showing the correct size in blocks in the
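One way to cross-check this, assuming osd.0 as an illustrative OSD id (not taken from the thread), is to ask the running daemon for its view of the setting:

    sudo ceph daemon osd.0 config get osd_journal_size

A possible explanation, per the filestore documentation rather than anything confirmed in this thread: since v0.54, osd journal size is ignored when the journal points at a raw block device partition, and the entire partition is used. The 5GB value in the config output may simply be the unused default, while the log line reporting the size in blocks reflects the full 70GB partition actually in use.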