Re: [ceph-users] Why rbd rm did not clean used pool?

2018-08-25 Thread Konstantin Shalygin
Configuration: rbd - erasure pool; rbdtier - tier pool for rbd. ceph osd tier add-cache rbd rbdtier 549755813888; ceph osd tier cache-mode rbdtier writeback. Create new rbd block device: rbd create --size 16G rbdtest; rbd feature disable rbdtest object-map fast-diff deep-flatten; rbd device map

Re: [ceph-users] Ceph-Deploy error on 15/71 stage

2018-08-25 Thread Eugen Block
Hi, take a look at the logs; they should point you in the right direction. Since the deployment stage fails at the OSD level, start with the OSD logs. Something's not right with the disks/partitions; did you wipe the partitions from previous attempts? Regards, Eugen Quoting Jones de
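As a follow-up to the wipe suggestion above: a common way to clear leftover partitions and signatures from a failed deployment attempt is sketched below. The device name /dev/sdX is a placeholder, and these commands are destructive, so verify the target device first.

```shell
# Remove filesystem, RAID, and LVM signatures left by a previous attempt
wipefs --all /dev/sdX

# Zap the GPT and MBR partition tables entirely
sgdisk --zap-all /dev/sdX

# On releases that ship ceph-volume, this can be done in one step
ceph-volume lvm zap /dev/sdX --destroy
```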

[ceph-users] Why rbd rm did not clean used pool?

2018-08-25 Thread Fyodor Ustinov
Hi! Configuration: rbd - erasure pool; rbdtier - tier pool for rbd. ceph osd tier add-cache rbd rbdtier 549755813888; ceph osd tier cache-mode rbdtier writeback. Create new rbd block device: rbd create --size 16G rbdtest; rbd feature disable rbdtest object-map fast-diff deep-flatten; rbd device map
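The sequence quoted in the message can be sketched as the following commands. The image name in the final map command is an assumption; the original post is truncated at that point.

```shell
# Put a replicated cache tier (rbdtier) in front of the erasure-coded pool (rbd),
# with a 549755813888-byte (512 GiB) target size, in writeback mode
ceph osd tier add-cache rbd rbdtier 549755813888
ceph osd tier cache-mode rbdtier writeback

# Create a 16 GiB image and disable features the kernel client may not support
rbd create --size 16G rbdtest
rbd feature disable rbdtest object-map fast-diff deep-flatten

# Map the image to a local block device (image name assumed; the post is cut off here)
rbd device map rbdtest
```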

Re: [ceph-users] cephfs kernel client hangs

2018-08-25 Thread Zhenshi Zhou
Hi, this time: osdc: REQUESTS 0 homeless 0 LINGER REQUESTS; monc: have monmap 2 want 3+, have osdmap 4545 want 4546, have fsmap.user 0, have mdsmap 446 want 447+, fs_cluster_id -1; mdsc: 649065 mds0 setattr #12e7e5a. Anything useful? Yan, Zheng wrote on Sat, Aug 25, 2018 at 7:53 AM: > Are there
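The counters quoted above come from the kernel client's debugfs interface. Assuming debugfs is mounted, they can be re-read as sketched below; the cluster-FSID directory name varies per cluster, hence the glob.

```shell
# Each kernel cephfs mount gets a directory under /sys/kernel/debug/ceph,
# named after the cluster FSID plus a client id
cd /sys/kernel/debug/ceph/*/

cat osdc   # in-flight OSD requests (hung ops show up here)
cat monc   # monitor session state: which maps we have vs. want
cat mdsc   # in-flight MDS requests, e.g. the stuck setattr above
```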

Re: [ceph-users] ceph-fuse slow cache?

2018-08-25 Thread Stefan Kooman
Quoting Gregory Farnum (gfar...@redhat.com): > Hmm, these aren't actually the start and end times to the same operation. > put_inode() is literally adjusting a refcount, which can happen for reasons > ranging from the VFS doing something that drops it to an internal operation > completing to a

Re: [ceph-users] Unexpected behaviour after monitors upgrade from Jewel to Luminous

2018-08-25 Thread Adrien Gillard
The issue is finally resolved. Upgrading to Luminous was the way to go. Unfortunately, we did not set 'ceph osd require-osd-release luminous' immediately, so we did not activate the Luminous functionality that saved us. I think the new mechanisms to manage and prune past intervals[1] allowed
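The flag mentioned above is set and verified as sketched below. This should only be run once every OSD in the cluster is actually running Luminous.

```shell
# Allow Luminous-only behavior (e.g. improved past-interval handling) to activate
ceph osd require-osd-release luminous

# Confirm the flag is now recorded in the osdmap
ceph osd dump | grep require_osd_release
```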

Re: [ceph-users] radosgw: need couple of blind (indexless) buckets, how-to?

2018-08-25 Thread Konstantin Shalygin
Thank you very much! If anyone would like to help update these docs, I would be happy to help with guidance/review. I made an attempt half a year ago - http://tracker.ceph.com/issues/23081 k
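For context on the thread's subject: a blind (indexless) bucket is created by placing it on a placement target whose index type is indexless. A sketch, assuming the default zonegroup/zone and default RGW pool names; the placement id is arbitrary.

```shell
# Add a placement target to the zonegroup (the id is a free-form name)
radosgw-admin zonegroup placement add \
      --rgw-zonegroup default \
      --placement-id indexless-placement

# Back the target with pools in the zone and mark its index type as indexless
radosgw-admin zone placement add \
      --rgw-zone default \
      --placement-id indexless-placement \
      --data-pool default.rgw.buckets.data \
      --index-pool default.rgw.buckets.index \
      --data-extra-pool default.rgw.buckets.non-ec \
      --placement-index-type indexless

# Restart radosgw, then create buckets under this placement to make them blind
```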