[ceph-users] issue with OSD class path in RDMA mode

2018-05-31 Thread Raju Rangoju
Hello, I'm trying to run iscsi tgtd on a ceph cluster. When I do 'rbd list' I see the errors below.
[root@ceph1 ceph]# rbd list
2018-05-30 18:19:02.227 2ae7260a8140 -1 librbd::api::Image: list_images: error listing image in directory: (5) Input/output error
2018-05-30 18:19:02.227 2ae7260a8140 -1
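A minimal sketch of the setting involved, assuming the I/O error comes from the OSDs not finding the rbd object class; the directory path below is the usual default on RPM-based installs and is an assumption, not taken from the thread.

    [osd]
    # directory the OSDs load object classes (libcls_rbd.so etc.) from;
    # /usr/lib64/rados-classes is the typical default (assumption)
    osd class dir = /usr/lib64/rados-classes

    # quick check that the rbd class object is actually present there:
    # ls /usr/lib64/rados-classes | grep cls_rbd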

Re: [ceph-users] issues with ceph nautilus version

2018-06-20 Thread Raju Rangoju
will most probably fix that: https://github.com/ceph/ceph/pull/22610 Also you may try to switch bluestore and bluefs allocators (bluestore_allocator and bluefs_allocator parameters respectively) to stupid and restart OSDs. This should help. Thanks, Igor On 6/20/2018 6:41 PM, Raju Rangoju
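For reference, the allocator switch Igor mentions would look roughly like the sketch below in ceph.conf; the [osd] section placement and the systemd restart command are assumptions about a typical deployment, not quoted from the thread.

    [osd]
    # use the older "stupid" allocator for both bluestore and bluefs
    bluestore_allocator = stupid
    bluefs_allocator = stupid

    # then restart the OSDs, e.g. one at a time:
    # systemctl restart ceph-osd@<id>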

Re: [ceph-users] issues with ceph nautilus version

2018-06-20 Thread Raju Rangoju
On 6/20/2018 6:41 PM, Raju Rangoju wrote: Hi, Recently I upgraded my ceph cluster to version 14.0.0 - nautilus(dev) from ceph version 13.0.1. After this, I noticed some weird data usage numbers on the cluster. Here are the issues I'm seeing... 1. The data usage reported is much more

[ceph-users] issues with ceph nautilus version

2018-06-20 Thread Raju Rangoju
Hi, Recently I upgraded my ceph cluster to version 14.0.0 - nautilus(dev) from ceph version 13.0.1. After this, I noticed some weird data usage numbers on the cluster. Here are the issues I'm seeing... 1. The data usage reported is much more than what is available usage: 16 EiB
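Commands commonly used to cross-check such numbers (a sketch only; the excerpt does not show the exact commands or their output):

    # cluster-wide raw and per-pool usage as the mon/mgr reports it
    ceph df detail

    # per-OSD utilisation, to compare against the cluster-wide totals
    ceph osd df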

Re: [ceph-users] permission errors rolling back ceph cluster to v13

2018-08-08 Thread Raju Rangoju
Thanks Greg. I think I have to re-install ceph v13 from scratch then. -Raju
From: Gregory Farnum
Sent: 09 August 2018 01:54
To: Raju Rangoju
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] permission errors rolling back ceph cluster to v13
On Tue, Aug 7, 2018 at 6:27 PM Raju Rangoju

[ceph-users] permission errors rolling back ceph cluster to v13

2018-08-07 Thread Raju Rangoju
Hi, I have been running into some connection issues with the latest ceph-14 version, so we thought the feasible solution would be to roll back the cluster to the previous version (ceph-13.0.1), where things are known to work properly. I'm wondering if rollback/downgrade is supported at all? After

[ceph-users] troubleshooting ceph rdma performance

2018-11-07 Thread Raju Rangoju
Hello All, I have been collecting performance numbers on our ceph cluster, and I noticed very poor throughput with ceph async+rdma compared with tcp. I was wondering what tunings/settings I should apply to the cluster to improve ceph rdma (async+rdma) performance. Currently,
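For context, a minimal async+rdma messenger configuration looks roughly like the sketch below; the device name and buffer counts are placeholders/assumptions for illustration, not values from the thread.

    [global]
    # switch the async messenger from tcp (posix) to rdma
    ms_type = async+rdma
    # RDMA device to bind to (placeholder; pick the device shown by `ibv_devices`)
    ms_async_rdma_device_name = mlx5_0
    # send/receive buffer counts are common throughput tunables (values are illustrative)
    ms_async_rdma_send_buffers = 1024
    ms_async_rdma_receive_buffers = 1024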