I don't have enough disk space on the NVMe. The DB would overflow before I
reached 25% utilization in the cluster. The disks are 10TB spinners and
would need a minimum of 100 GB of DB space each based on early testing.
The official docs recommend a 400GB DB for a disk of this size. I don't have
enough flash space for that on the 2x NVMe disks in those servers. Hence I
put the WALs on the NVMes and left the DB on the data disk, where it would
have spilled over to almost immediately anyway.
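
For reference, the 400GB figure matches the common guideline of sizing the
BlueStore block.db at roughly 4% of the data device. The sketch below just
works through that arithmetic; the NVMe capacity and OSD count in it are
hypothetical placeholders, not numbers from this setup.

    # Illustrative arithmetic only: the ~4% block.db sizing guideline applied
    # to a 10 TB data disk, plus a hypothetical check of how many such DBs
    # would fit on a host's flash. NVMe capacity and OSD count are placeholders.
    TB = 1000  # work in GB

    data_disk_gb = 10 * TB     # 10 TB spinner behind each OSD
    db_ratio = 0.04            # docs suggest block.db ~= 4% of the data device
    recommended_db_gb = data_disk_gb * db_ratio
    print(f"Recommended block.db per OSD: {recommended_db_gb:.0f} GB")  # 400 GB

    # Hypothetical host: 2x 1 TB NVMe shared by 12 OSDs (placeholder numbers)
    nvme_total_gb = 2 * 1 * TB
    osds_per_host = 12
    needed_gb = osds_per_host * recommended_db_gb
    print(f"DB space needed: {needed_gb:.0f} GB vs NVMe available: {nvme_total_gb:.0f} GB")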

On Mon, Oct 22, 2018, 6:55 PM solarflow99 <solarflo...@gmail.com> wrote:

> Why didn't you just install the DB + WAL on the NVMe?  Is this "data disk"
> still an ssd?
>
>
>
> On Mon, Oct 22, 2018 at 3:34 PM David Turner <drakonst...@gmail.com>
> wrote:
>
>> And by the data disk I mean that I didn't specify a location for the DB
>> partition.
>>
>> On Mon, Oct 22, 2018 at 4:06 PM David Turner <drakonst...@gmail.com>
>> wrote:
>>
>>> Track down where it says they point to?  Does it match what you expect?
>>> It does for me.  I have my DB on my data disk and my WAL on a separate NVMe.
>>>
>>> On Mon, Oct 22, 2018 at 3:21 PM Robert Stanford <rstanford8...@gmail.com>
>>> wrote:
>>>
>>>>
>>>>  David - is it ensured that the wal and db both live where the block.db
>>>> symlink points?  I assumed that was a symlink for the db, but not
>>>> necessarily for the wal, because the wal can live in a different place
>>>> than the db.
>>>>
>>>> On Mon, Oct 22, 2018 at 2:18 PM David Turner <drakonst...@gmail.com>
>>>> wrote:
>>>>
>>>>> You can always just go to /var/lib/ceph/osd/ceph-{osd-num}/ and look
>>>>> at where the symlinks for block and block.wal point to.
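
A minimal script for doing that check across every OSD on a host might look
like the sketch below; it assumes the default /var/lib/ceph/osd layout and
that it is run on the OSD host with permission to read those directories.

    #!/usr/bin/env python3
    # Minimal sketch: resolve the block / block.db / block.wal symlinks for
    # each OSD under the default /var/lib/ceph/osd layout to see which device
    # actually backs the data, DB and WAL.
    import glob
    import os

    for osd_dir in sorted(glob.glob("/var/lib/ceph/osd/ceph-*")):
        print(osd_dir)
        for name in ("block", "block.db", "block.wal"):
            link = os.path.join(osd_dir, name)
            if os.path.exists(link):
                print(f"  {name:9} -> {os.path.realpath(link)}")
            else:
                print(f"  {name:9} (not present)")

If block.wal is not present, the WAL lives on the same device as block.db
(or as block, if there is no separate DB either).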
>>>>>
>>>>> On Mon, Oct 22, 2018 at 12:29 PM Robert Stanford <
>>>>> rstanford8...@gmail.com> wrote:
>>>>>
>>>>>>
>>>>>>  That's what they say; however, I did exactly this and my cluster
>>>>>> utilization is higher than the total pool utilization by roughly the
>>>>>> number of OSDs * wal size.  I want to verify that the wal is on the SSDs
>>>>>> too, but I've asked here and no one seems to know a way to verify this.
>>>>>> Do you?
>>>>>>
>>>>>>  Thank you, R
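
A rough way to put numbers on that gap is to compare (raw used - pool used x
replication) against (number of OSDs x WAL size). Every value in the sketch
below is a hypothetical placeholder; substitute the figures reported by
`ceph df`, and note it assumes the pool USED column is pre-replication.

    # Rough sanity check for the discrepancy described above. All numbers are
    # hypothetical placeholders; plug in the values reported by `ceph df`.
    num_osds = 30                # placeholder
    wal_size_gb = 1              # placeholder per-OSD WAL partition size
    replication = 3              # placeholder pool size
    raw_used_gb = 12_480         # placeholder: GLOBAL "RAW USED"
    pool_used_gb = 4_150         # placeholder: sum of POOLS "USED" (pre-replication)

    expected_wal_overhead_gb = num_osds * wal_size_gb
    observed_gap_gb = raw_used_gb - pool_used_gb * replication
    print(f"expected WAL overhead: {expected_wal_overhead_gb} GB")
    print(f"observed gap:          {observed_gap_gb} GB")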
>>>>>>
>>>>>> On Mon, Oct 22, 2018 at 5:22 AM Maged Mokhtar <mmokh...@petasan.org>
>>>>>> wrote:
>>>>>>
>>>>>>>
>>>>>>> If you specify a db on ssd and data on hdd and do not explicitly
>>>>>>> specify a device for the wal, the wal will be placed on the same ssd
>>>>>>> partition as the db. Placing only the wal on ssd, or creating separate
>>>>>>> devices for wal and db, are less common setups.
>>>>>>>
>>>>>>> /Maged
>>>>>>>
>>>>>>> On 22/10/18 09:03, Fyodor Ustinov wrote:
>>>>>>> > Hi!
>>>>>>> >
>>>>>>> > When sharing an SSD between the WAL and the DB, what should be placed
>>>>>>> > on the SSD? The WAL or the DB?
>>>>>>> >
>>>>>>> > ----- Original Message -----
>>>>>>> > From: "Maged Mokhtar" <mmokh...@petasan.org>
>>>>>>> > To: "ceph-users" <ceph-users@lists.ceph.com>
>>>>>>> > Sent: Saturday, 20 October, 2018 20:05:44
>>>>>>> > Subject: Re: [ceph-users] Drive for Wal and Db
>>>>>>> >
>>>>>>> > On 20/10/18 18:57, Robert Stanford wrote:
>>>>>>> >
>>>>>>> >
>>>>>>> >
>>>>>>> >
>>>>>>> > Our OSDs are BlueStore and are on regular hard drives. Each OSD
>>>>>>> > has a partition on an SSD for its DB. Wal is on the regular hard drives.
>>>>>>> > Should I move the wal to share the SSD with the DB?
>>>>>>> >
>>>>>>> > Regards
>>>>>>> > R
>>>>>>> >
>>>>>>> >
>>>>>>> >
>>>>>>> > You should put the wal on the faster device; the wal and db can
>>>>>>> > share the same ssd partition.
>>>>>>> >
>>>>>>> > Maged
>>>>>>> >
>>>>>>>
>>>>>>>
>>>>>>
>>
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
