Hello,

I'm trying to run iscsi tgtd on a Ceph cluster. When I run 'rbd list' I see the errors below.

[root@ceph1 ceph]# rbd list
2018-05-30 18:19:02.227 2ae7260a8140 -1 librbd::api::Image: list_images: error listing image in directory: (5) Input/output error
2018-05-30 18:19:02.227 2ae7260a8140 -1 librbd: error listing v2 images: (5) Input/output error
rbd: list: (5) Input/output error

I have followed 'doc/dev/osd-class-path.rst' [1] and found that this is an issue with the OSD class path, so I added osd_class_dir to ceph.conf.
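
The line I added is just the following (a sketch: this is the lib64 path from the doc quoted below; adjust it to wherever your distribution installs the rados classes):

[osd]
osd_class_dir = /usr/lib64/rados-classes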

After adding the OSD class path, 'rbd list' worked perfectly fine in TCP mode and no issues were seen. But in RDMA (async messenger) mode most of the OSDs go down (the status below shows only 26 of 88 up), and no errors are seen in the OSD logs. Can someone please shed some light?
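
For completeness, the only thing I change between the two runs is the messenger type in ceph.conf. A minimal sketch of the RDMA settings, assuming a Mellanox NIC (the device name is specific to my hosts):

[global]
# default TCP messenger is: ms_type = async+posix
ms_type = async+rdma
ms_async_rdma_device_name = mlx5_0   # example; use the CA name 'ibstat' reports on your node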

[root@hadoop1 my-ceph]# ceph -s
  cluster:
    id:     ee4660fd-167b-42e6-b27b-126526dab04d
    health: HEALTH_WARN
            1 filesystem is degraded
            39 osds down
            4 hosts (44 osds) down
            Reduced data availability: 23944 pgs inactive

  services:
    mon: 3 daemons, quorum hadoop1,hadoop4,hadoop6
    mgr: hadoop1(active), standbys: hadoop6, hadoop4
    mds: cephfs-1/1/1 up  {0=hadoop3=up:replay}, 2 up:standby
    osd: 88 osds: 26 up, 88 in

  data:
    pools:   13 pools, 23944 pgs
    objects: 0 objects, 0
    usage:   0 used, 0 / 0 avail
    pgs:     100.000% pgs unknown
             23944 unknown

I am more than happy to provide any extra information necessary.

[1] /root/ceph/doc/dev/osd-class-path.rst

=======================
OSD class path issues
=======================
::

  2011-12-05 17:41:00.994075 7ffe8b5c3760 librbd: failed to assign a block name for image
  create error: error 5: Input/output error

This usually happens because your OSDs can't find ``cls_rbd.so``. They
search for it in ``osd_class_dir``, which may not be set correctly by
default (http://tracker.ceph.com/issues/1722).

Most likely it's looking in ``/usr/lib/rados-classes`` instead of
``/usr/lib64/rados-classes`` - change ``osd_class_dir`` in your
``ceph.conf`` and restart the OSDs to fix it.
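
In case it is useful, this is roughly how I verified the fix on each OSD node (assuming an RPM-based install with systemd; the grep avoids guessing the exact library file name):

# check that the rbd class library is where osd_class_dir points
ls /usr/lib64/rados-classes/ | grep cls_rbd

# restart the OSDs on this host after editing ceph.conf
systemctl restart ceph-osd.target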

Thanks,
Raju
