If your client needs to handle writes like that on its own, RBD might be
the more appropriate choice. You lose the ability to have multiple
clients access the data as easily as with CephFS, but you would gain the
features you're looking for.
On Tue, Feb 12, 2019 at 5:10 AM Hector Martin wrote:
On 12/02/2019 06:01, Gregory Farnum wrote:
Right. Truncates and renames require sending messages to the MDS, and
the MDS committing to RADOS (aka its disk) the change in status, before
they can be completed. Creating new files will generally use a
preallocated inode so it's just a network round trip.
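Gregory's distinction can be checked empirically by timing the two cases side by side. A rough sketch, using a hypothetical CEPHFS_DIR variable for the mount point (it falls back to a temp directory so the script runs anywhere, though the effect only shows on an actual CephFS mount):

# Compare creating new files (client consumes a preallocated inode,
# roughly one network round trip) against overwriting one file (each
# '>' is a truncate the MDS must commit to RADOS before it completes).
# CEPHFS_DIR is an assumed variable; substitute your own mount point.
DIR="${CEPHFS_DIR:-$(mktemp -d)}"

# New file each iteration: the fast path.
time for i in $(seq 1 100); do echo test > "$DIR/new.$i"; done

# Same file each iteration: every write truncates an existing file.
time for i in $(seq 1 100); do echo test > "$DIR/same"; done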
On Thu, Feb 7, 2019 at 3:31 AM Hector Martin wrote:
On 07/02/2019 19:47, Marc Roos wrote:
Is this difference not related to caching, and to you filling up some
cache/queue at some point? If you do a sync after each write, do you
still have the same results?
No, the slow operations are slow from the very beginning. It's not about
filling a buffer.
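Marc's hypothesis is also easy to test directly: add an explicit per-file sync to the create loop and see whether the numbers change. A sketch, again with a hypothetical CEPHFS_DIR for the mount point (coreutils sync accepts a file argument since version 8.24):

# Create loop with a sync after each write, to rule out the
# cache/queue explanation. CEPHFS_DIR is an assumed variable.
DIR="${CEPHFS_DIR:-$(mktemp -d)}"
time for i in $(seq 1 100); do
    echo test > "$DIR/a.$i"
    sync "$DIR/a.$i"    # flush this one file's data and metadata
done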
Subject: [ceph-users] CephFS overwrite/truncate performance hit
I'm seeing some interesting performance issues with file overwriting on
CephFS.
Creating lots of files is fast:
for i in $(seq 1 1000); do
    echo $i; echo test > a.$i
done
Deleting lots of files is fast:
rm a.*
As is creating them again.
However, repeatedly creating the same file over and over is slow.
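The loop demonstrating the slow case is cut off in the original message; a minimal sketch consistent with the create loop above, writing the same name each time so every redirection truncates an existing file:

for i in $(seq 1 1000); do
    echo $i; echo test > a
done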