On 11/15/19 4:25 PM, Paul Emmerich wrote:
> On Fri, Nov 15, 2019 at 4:02 PM Wido den Hollander <w...@42on.com> wrote:
>>
>>  I normally use LVM on top
>> of each device and create 2 LVs per OSD:
>>
>> - WAL: 1GB
>> - DB: xx GB
> 
> Why? I've seen this a few times and I can't figure out the
> advantage of doing this explicitly at the LVM level instead of
> relying on BlueStore to handle it.
> 

If the WAL+DB are on an external device you want the WAL to be there as
well. That's why I specify the WAL separately.
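
For reference, a minimal sketch of that layout; the VG/LV names, device
paths and the DB size below are placeholders, not values from this thread:

    # nvme0n1 is the shared WAL+DB device, sda the data disk (placeholders).
    vgcreate ceph-db-0 /dev/nvme0n1
    lvcreate -L 1G -n osd-0-wal ceph-db-0
    lvcreate -L 60G -n osd-0-db ceph-db-0

    # Hand both LVs to the OSD at creation time.
    ceph-volume lvm create --data /dev/sda \
        --block.wal ceph-db-0/osd-0-wal \
        --block.db ceph-db-0/osd-0-db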

This might be an 'old habit' as well.

Wido

> 
> Paul
> 
>>
>>>
>>>
>>> The initial cluster is 1+ PB and we’re planning to expand it by another
>>> 1 PB in the near future to migrate our data.
>>>
>>> We’ll only use the system through the RGW (no CephFS or block devices),
>>> and we’ll store “a lot” of small files on it… (millions of files a day)
>>>
>>>
>>>
>>> The reason I’m asking is that I’ve been able to break the test
>>> system (long story), causing OSDs to fail as they ran out of space…
>>> Expanding the disks (the block DB device as well as the main block
>>> device) failed with ceph-bluestore-tool…
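
For context, expanding a DB volume is normally a two-step operation; a
minimal sketch, assuming the OSD is stopped, with a placeholder OSD path
and an arbitrary example size:

    # Grow the underlying LV first, then let BlueFS adopt the new size.
    lvextend -L +30G ceph-db-0/osd-0-db
    ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-0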
>>>
>>>
>>>
>>> Thanks for your answer!
>>>
>>>
>>>
>>> Kristof
>>>
>>>