Configuration:

  rbd     - erasure-coded pool
  rbdtier - cache tier pool for rbd

  ceph osd tier add-cache rbd rbdtier 549755813888
  ceph osd tier cache-mode rbdtier writeback

Create a new rbd block device:

  rbd create --size 16G rbdtest
  rbd feature disable rbdtest object-map fast-diff deep-flatten
  rbd device map
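For reference, a minimal end-to-end sketch of the steps above. The pool names and the 549755813888-byte (512 GiB) cache size come from the commands quoted; the pool-creation lines, PG counts, and the set-overlay step are assumptions added for completeness, not part of the original report:

```shell
# Sketch only; assumes a running cluster and that the pools do not
# exist yet. PG counts (64) are illustrative.
ceph osd pool create rbd 64 64 erasure             # erasure-coded data pool
ceph osd pool create rbdtier 64 64                 # replicated cache pool

ceph osd tier add-cache rbd rbdtier 549755813888   # 512 GiB cache target
ceph osd tier cache-mode rbdtier writeback
ceph osd tier set-overlay rbd rbdtier              # route client I/O via the tier

rbd create --size 16G rbdtest
rbd feature disable rbdtest object-map fast-diff deep-flatten
rbd device map rbdtest                             # map via the kernel client
```

Note that the kernel rbd client cannot write directly to an erasure-coded pool on pre-Luminous releases, which is why the writeback cache tier is placed in front of it.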
Hi,
take a look into the logs, they should point you in the right direction.
Since the deployment stage fails at the OSD level, start with the OSD
logs. Something's not right with the disks/partitions, did you wipe
the partition from previous attempts?
Regards,
Eugen
Quoting Jones de
Hi!
Hi,
This time, osdc:
REQUESTS 0 homeless 0
LINGER REQUESTS
monc:
have monmap 2 want 3+
have osdmap 4545 want 4546
have fsmap.user 0
have mdsmap 446 want 447+
fs_cluster_id -1
mdsc:
649065  mds0  setattr  #12e7e5a
Anything useful?
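Output like the above comes from the kernel client's debugfs entries. A hedged sketch of how to collect it (the per-client directory name, <fsid>.client<id>, varies on every system, and debugfs must be mounted):

```shell
# Dump the in-flight request state of the Ceph kernel client.
# Requires root; osdc = OSD requests, monc = monitor session,
# mdsc = MDS requests (CephFS only).
for f in /sys/kernel/debug/ceph/*/osdc \
         /sys/kernel/debug/ceph/*/monc \
         /sys/kernel/debug/ceph/*/mdsc; do
    echo "== $f =="
    cat "$f"
done
```

An mdsc entry that never drains (like the setattr request above) usually points at a request stuck on the MDS side rather than in the client.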
Yan, Zheng wrote on Sat, Aug 25, 2018 at 7:53 AM:
> Are there
Quoting Gregory Farnum (gfar...@redhat.com):
> Hmm, these aren't actually the start and end times to the same operation.
> put_inode() is literally adjusting a refcount, which can happen for reasons
> ranging from the VFS doing something that drops it to an internal operation
> completing to a
The issue is finally resolved.
Upgrading to Luminous was the way to go. Unfortunately, we did not set
'ceph osd require-osd-release luminous' immediately, so we did not
activate the Luminous functionality that saved us.
I think the new mechanisms to manage and prune past intervals[1]
allowed
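A sketch of the post-upgrade step mentioned above, using the standard ceph CLI. The flag should only be set once every OSD in the cluster is actually running Luminous:

```shell
# Confirm all OSDs report a Luminous version first:
ceph versions

# Enable Luminous-only OSD behaviour cluster-wide:
ceph osd require-osd-release luminous

# Verify the flag took effect:
ceph osd dump | grep require_osd_release
```

Until this flag is set, the cluster keeps operating in a compatibility mode and some new features (including the improved past-intervals handling referenced above) remain inactive.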
Thank you very much! If anyone would like to help update these docs, I
would be happy to help with guidance/review.
I made an attempt half a year ago - http://tracker.ceph.com/issues/23081
k
___
ceph-users mailing list
ceph-users@lists.ceph.com