If you had a specific location for the WAL, it would show up there. If
there is no entry for the WAL, then it is using the same setting as the DB.
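
For example (OSD id 0 is just an illustration, and the key names are from
memory, so they may differ by release):

    ceph osd metadata 0 | grep -E 'bluefs_(db|wal)|bluestore_bdev'

A separate bluefs_wal_partition_path entry should only appear when the WAL
was given its own device.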

On Sun, Oct 28, 2018, 9:26 PM Robert Stanford <[email protected]>
wrote:

>
>  Mehmet: it doesn't look like the WAL is mentioned in the OSD metadata.  I
> only see bluefs slow, bluestore bdev, and bluefs db mentioned.
>
> On Sun, Oct 28, 2018 at 1:48 PM <[email protected]> wrote:
>
>> IIRC there is a command like
>>
>> ceph osd metadata
>>
>> where you should be able to find information like this.
>>
>> - Mehmet
>>
>> On 21 October 2018 at 19:39:58 MESZ, Robert Stanford <
>> [email protected]> wrote:
>>>
>>>
>>>  I did exactly this when creating my OSDs, and found that my total
>>> utilization is about the same as the sum of the utilization of the pools,
>>> plus (WAL size * number of OSDs).  So it looks like my WALs are actually
>>> sharing the OSD data disks.  But I'd like to be 100% sure... so I am
>>> seeking a way to find out.
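>>>
>>> (Illustrating with made-up numbers: 10 OSDs with a 1 GiB WAL each would
>>> account for roughly 10 GiB of utilization beyond the pool totals.)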
>>>
>>> On Sun, Oct 21, 2018 at 11:13 AM Serkan Çoban <[email protected]>
>>> wrote:
>>>
>>>> The WAL and DB device will be the same if you use just the DB path during
>>>> OSD creation.  I do not know how to verify this with ceph commands.
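>>>>
>>>> For what it's worth, here is the creation case (device names purely
>>>> illustrative): with no separate --block.wal, a command like
>>>>
>>>>     ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/sdc1
>>>>
>>>> leaves the WAL on the DB device; adding --block.wal /dev/sdd1 would split
>>>> them.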
>>>> On Sun, Oct 21, 2018 at 4:17 PM Robert Stanford <
>>>> [email protected]> wrote:
>>>> >
>>>> >
>>>> >  Thanks Serkan.  I am using --path instead of --dev (--dev won't work
>>>> > because I'm using VGs/LVs).  The output shows block and block.db, but
>>>> > nothing about block.wal.  How can I learn where my WAL lives?
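>>>> >
>>>> > For reference, I'm invoking it roughly like this (the OSD id is made up):
>>>> >
>>>> >     ceph-bluestore-tool show-label --path /var/lib/ceph/osd/ceph-0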
>>>> >
>>>> >
>>>> >
>>>> >
>>>> > On Sun, Oct 21, 2018 at 12:43 AM Serkan Çoban <[email protected]>
>>>> wrote:
>>>> >>
>>>> >> ceph-bluestore-tool can show you the disk labels.
>>>> >> ceph-bluestore-tool show-label --dev /dev/sda1
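>>>> >>
>>>> >> If I remember right, each label carries a "description" field that reads
>>>> >> "main", "bluefs db", or "bluefs wal", so the device whose label says
>>>> >> "bluefs wal" is the one holding the WAL.  Abridged sample output, from
>>>> >> memory:
>>>> >>
>>>> >>     {
>>>> >>         "/dev/sda1": {
>>>> >>             "osd_uuid": "...",
>>>> >>             "description": "main"
>>>> >>         }
>>>> >>     }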
>>>> >> On Sun, Oct 21, 2018 at 1:29 AM Robert Stanford <
>>>> [email protected]> wrote:
>>>> >> >
>>>> >> >
>>>> >> >  An email from this list stated that the WAL would be created in
>>>> >> > the same place as the DB if only the DB were specified on the
>>>> >> > ceph-volume lvm create command line.  I followed those instructions
>>>> >> > and, like the other person writing to this list today, I was
>>>> >> > surprised to find that my cluster usage was higher than the total of
>>>> >> > the pools (higher by an amount equal to all my WAL sizes on each
>>>> >> > node combined).  This leads me to think my WAL is actually on the
>>>> >> > data disk and not on the SSD I specified the DB should go to.
>>>> >> >
>>>> >> >  How can I verify which disk the WAL is on, from the command
>>>> >> > line?  I've searched the net and not come up with anything.
>>>> >> >
>>>> >> >  Thanks and regards
>>>> >> >  R
>>>> >> >
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
