But this is correct, isn't it?

root@osd1:~# ceph-volume lvm list --format=json hdd60/data60
{
    "60": [
        {
            "devices": [
                "/dev/sdh"
            ],
            "lv_name": "data60",
            "lv_path": "/dev/hdd60/data60",
            "lv_size": "3.64t",
            "lv_tags":
"ceph.block_device=/dev/hdd60/data60,ceph.block_uuid=ycRaVn-O70Q-Ci43-2IN3-U5ua-lnqL-IE9jVb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fef5bc3c-3912-4a77-a077-3398f21cc16d,ceph.cluster_name=ceph,ceph.crush_device_class=None,ceph.db_device=/dev/ssd0/db60,ceph.db_uuid=d32eQz-79GQ-2eJD-4ANB-vr0O-bDpb-fjWSD5,ceph.encrypted=0,ceph.osd_fsid=e0d69288-13e1-4023-a812-9d313204f600,ceph.osd_id=60,ceph.type=block,ceph.vdo=0",
            "lv_uuid": "ycRaVn-O70Q-Ci43-2IN3-U5ua-lnqL-IE9jVb",
            "name": "data60",
            "path": "/dev/hdd60/data60",
            "tags": {
                "ceph.block_device": "/dev/hdd60/data60",
                "ceph.block_uuid":
"ycRaVn-O70Q-Ci43-2IN3-U5ua-lnqL-IE9jVb",
                "ceph.cephx_lockbox_secret": "",
                "ceph.cluster_fsid":
"fef5bc3c-3912-4a77-a077-3398f21cc16d",
                "ceph.cluster_name": "ceph",
                "ceph.crush_device_class": "None",
                "ceph.db_device": "/dev/ssd0/db60",
                "ceph.db_uuid": "d32eQz-79GQ-2eJD-4ANB-vr0O-bDpb-fjWSD5",
                "ceph.encrypted": "0",
                "ceph.osd_fsid": "e0d69288-13e1-4023-a812-9d313204f600",
                "ceph.osd_id": "60",
                "ceph.type": "block",
                "ceph.vdo": "0"
            },
            "type": "block",
            "vg_name": "hdd60"
        }
    ]
}
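
For what it's worth, if the tags above are right but nothing shows up under
/var/lib/ceph/osd/, one thing I could try next is (re)activating the OSD so
that ceph-volume mounts the tmpfs and starts the daemon. A minimal sketch,
using the osd_id and osd_fsid from the listing above (please double-check
them before running anything):

root@osd1:~# ceph-volume lvm activate 60 e0d69288-13e1-4023-a812-9d313204f600
root@osd1:~# df -h | grep ceph-60     # expect a tmpfs mounted at /var/lib/ceph/osd/ceph-60
root@osd1:~# systemctl status ceph-osd@60

"ceph-volume lvm activate --all" should do the same in one pass for every
OSD prepared on this host (osd.60 through osd.69 here).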

On Tue, Nov 6, 2018 at 11:00 AM, Hayashida, Mami <mami.hayash...@uky.edu>
wrote:

> I see.  Thank you for clarifying lots of things along the way -- this has
> been extremely helpful.   Neither "df | grep osd" nor "mount | grep osd"
> shows ceph-60 through 69.
>
> On Tue, Nov 6, 2018 at 10:57 AM, Hector Martin <hec...@marcansoft.com>
> wrote:
>
>>
>>
>> On 11/7/18 12:48 AM, Hayashida, Mami wrote:
>> > All other OSDs that I converted (#60-69) look basically identical while
>> > the Filestore OSDs (/var/lib/ceph/osd/ceph-70 etc.) look different
>> > obviously.  When I run "df" it does NOT list those converted osds (only
>> > the Filestore ones).  In other words, /dev/sdh1 where osd.60 should be
>> > is not listed.  (Should it be?)  Nor does mount list that drive.
>> >  ("df | grep sdh" and "mount | grep sdh" both return nothing)
>>
>> /dev/sdh1 no longer exists. Remember, we converted the drives to be LVM
>> physical volumes. There are no partitions any more. It's all in an LVM
>> volume backed by /dev/sdh (without the 1).
>>
>> What *should* be mounted at the OSD paths are tmpfs filesystems, i.e.
>> ramdisks. Those would not reference sdh so of course those commands will
>> return nothing. Try "df | grep osd" and "mount | grep osd" instead and
>> see if ceph-60 through ceph-69 show up.
>>
>> --
>> Hector Martin (hec...@marcansoft.com)
>> Public Key: https://mrcn.st/pub
>>
>
>
>
> --
> *Mami Hayashida*
>
> *Research Computing Associate*
> Research Computing Infrastructure
> University of Kentucky Information Technology Services
> 301 Rose Street | 102 James F. Hardymon Building
> Lexington, KY 40506-0495
> mami.hayash...@uky.edu
> (859)323-7521
>



-- 
*Mami Hayashida*

*Research Computing Associate*
Research Computing Infrastructure
University of Kentucky Information Technology Services
301 Rose Street | 102 James F. Hardymon Building
Lexington, KY 40506-0495
mami.hayash...@uky.edu
(859)323-7521
