[ceph-users] librbd::image::CreateRequest: 0x55e4fc8bf620 handle_create_id_object: error creating RBD id object

2018-11-04 Thread Dengke Du
Hi all

ceph: 13.2.2

When running the command:

    rbd create libvirt-pool/dimage --size 10240

this error occurs:

    rbd: create error: 2018-11-04 23:54:56.224 7ff22e7fc700 -1 librbd::image::CreateRequest: 0x55e4fc8bf620 handle_create_id_object: error creating RBD id object: (95) Operation not ...
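For an error while creating the RBD id object, one common first check is whether the pool has actually been initialized for RBD use. The following is only a hedged sketch of standard Mimic-era checks, not a confirmed diagnosis of this cluster; the pool and image names are taken from the report above.

    # Check overall cluster health and confirm the pool exists
    ceph -s
    ceph osd pool ls detail

    # Tag the pool for RBD use and initialize it
    ceph osd pool application enable libvirt-pool rbd
    rbd pool init libvirt-pool

    # Retry the image creation
    rbd create libvirt-pool/dimage --size 10240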

Re: [ceph-users] Filestore to Bluestore migration question

2018-11-04 Thread Hector Martin
On 03/11/2018 06:03, Hayashida, Mami wrote:

    ceph-volume lvm activate --all
    ...
    --> Activating OSD ID 67 FSID 17cd6755-76f9-4160-906c-XX
    Running command: mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-67
    --> Absolute path not found for executable: restorecon
    --> Ensure $PATH environment variable ...
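The "Absolute path not found for executable: restorecon" message means ceph-volume could not find the SELinux restorecon binary on $PATH. A hedged sketch of how one might check and satisfy that dependency; the package names below are assumptions for Debian/Ubuntu and RHEL/CentOS respectively, not taken from the thread:

    # See whether restorecon is present and on PATH
    command -v restorecon || echo "restorecon not found"

    # Debian/Ubuntu: restorecon ships in policycoreutils
    apt-get install policycoreutils

    # RHEL/CentOS: also provided by policycoreutils
    yum install policycoreutils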

Re: [ceph-users] Should OSD write error result in damaged filesystem?

2018-11-04 Thread Bryan Henderson
> OSD write errors are not usual events: any issues with the underlying
> storage are expected to be handled by RADOS, and write operations to
> an unhealthy cluster should block, rather than returning an error. It
> would not be correct for CephFS to throw away metadata updates in the
> case of ...
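When a write error does leave a rank marked damaged, the MDS keeps a damage table that can be inspected. A hedged sketch of the usual inspection commands, with rank/MDS "0" as a placeholder rather than anything named in this thread:

    # Report filesystem and rank health
    ceph health detail
    ceph fs status

    # List recorded damage for a given MDS (rank 0 as a placeholder)
    ceph tell mds.0 damage ls

    # After resolving the underlying issue, a rank can be marked repaired
    ceph mds repaired 0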

Re: [ceph-users] Snapshot cephfs data pool from ceph cmd

2018-11-04 Thread John Spray
On Sat, Nov 3, 2018 at 3:43 PM Rhian Resnick wrote:
>
> is it possible to snapshot the cephfs data pool?

CephFS snapshots operate on a per-directory level (rather than per pool), but you can make snapshots of the root of the filesystem if you wish.

John
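Directory-level snapshots are taken by creating a directory under the special .snap directory. A brief sketch, assuming the filesystem is mounted at /mnt/cephfs and is named "cephfs" (both assumptions; neither appears in the thread):

    # Allow snapshots on the filesystem if they are not already enabled
    ceph fs set cephfs allow_new_snaps true

    # Snapshot the root of the filesystem
    mkdir /mnt/cephfs/.snap/mysnap

    # Remove the snapshot later
    rmdir /mnt/cephfs/.snap/mysnap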

Re: [ceph-users] Should OSD write error result in damaged filesystem?

2018-11-04 Thread John Spray
On Sat, Nov 3, 2018 at 7:28 PM Bryan Henderson wrote:
>
> I had a filesystem rank get damaged when the MDS had an error writing the log
> to the OSD. Is damage expected when a log write fails?
>
> According to log messages, an OSD write failed because the MDS attempted
> to write a bigger chunk ...
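Oversized writes are rejected by the OSD based on its osd_max_write_size option (90 MB by default). A hedged sketch of how one might inspect that limit; "osd.0" is a placeholder and this is not a recommendation from the thread:

    # Show the configured limit on one OSD via its admin socket
    ceph daemon osd.0 config get osd_max_write_size

    # Mimic can also query/adjust it through the config database
    ceph config get osd osd_max_write_size
    ceph config set osd osd_max_write_size 90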

Re: [ceph-users] cephfs-data-scan

2018-11-04 Thread Sergey Malinin
Keep in mind that in order for the workers not to overlap each other, you need to set the total number of workers (worker_m) to nodes*20 and assign each node its own processing range (worker_n).

On Nov 4, 2018, 03:43 +0300, Rhian Resnick wrote:
> Sounds like we are going to restart with ...
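cephfs-data-scan takes the worker index and total worker count as --worker_n and --worker_m. A hedged sketch of launching 20 workers on one node out of five; the node index and the data pool name "cephfs_data" are assumptions for illustration:

    # 5 nodes * 20 workers each => worker_m = 100.
    # On node k (0-4), worker j (0-19) gets worker_n = k*20 + j.
    NODE=0                       # this node's index (assumed)
    TOTAL=100                    # worker_m: total workers across all nodes
    for j in $(seq 0 19); do
        cephfs-data-scan scan_extents --worker_n $((NODE*20 + j)) \
            --worker_m $TOTAL cephfs_data &
    done
    wait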