I noticed that in my scenario, when I mount CephFS via the kernel module,
the client writes go directly to only one to three of the OSDs, and the
client's write speed is higher than the speed of replication and
autoscaling/rebalancing. As a result, writes stop as soon as those OSDs
fill up, with a "no free space available" error. What should be done to
solve this problem? Is there a way to increase the speed of rebalancing or
moving objects between OSDs? Or a way to mount CephFS that avoids these
problems?
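For reference, these are the commands I have been looking at so far; the
tunables and values at the end are only illustrative guesses on my part,
not something I know to be the right fix:

```shell
# Inspect per-OSD utilization to confirm that only a few OSDs are filling up
ceph osd df tree

# Check whether the PG autoscaler thinks the pools have enough PGs
ceph osd pool autoscale-status

# Illustrative knobs for speeding up backfill/recovery
# (values are examples only; sensible limits depend on the release and hardware)
ceph config set osd osd_max_backfills 4
ceph config set osd osd_recovery_max_active 8
```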
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
