That's much slower than I'd expect, although FUSE can be slower in
metadata operations than the kernel client is. If it's convenient, you
could gather some logs with debug client = 20 (on the client, for
ceph-fuse) and debug mds = 20 (on the mds, for both the ceph-fuse
and kernel tests) and post
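(For context, a minimal sketch of one way to turn those levels up, assuming a standard ceph.conf; section and daemon names depend on the deployment, and the client setting has to be in place before ceph-fuse mounts:

[client]
debug client = 20

[mds]
debug mds = 20

The MDS value can also be injected at runtime, e.g. ceph tell mds.<id> injectargs '--debug_mds 20', where <id> is the MDS name.)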
I have done some quick tests with FUSE too: it seems to me that, both with
the old and with the new kernel, FUSE is approx. five times slower than the
kernel driver for both reading files and getting stats.
I don't know whether it is just me or if it is expected.
On Wed, Jun 17, 2015 at 2:56 AM,
Thanks everyone,
update: I tried running on node A:
# vmtouch -ev /storage/
# sync; sync
The problem persisted; it still took one minute to 'ls -Ral' the dir (from node B).
After that I ran on node A:
# echo 2 > /proc/sys/vm/drop_caches
And everything became suddenly fast on node B. ls, du, tar, all
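(For anyone reproducing this, the drop_caches values are standard kernel behaviour; a short sketch of the full sequence on node A:

# sync                                 # flush dirty pages first
# echo 1 > /proc/sys/vm/drop_caches    # frees the page cache only
# echo 2 > /proc/sys/vm/drop_caches    # frees dentries and inodes (the step that helped here)
# echo 3 > /proc/sys/vm/drop_caches    # frees both

The likely explanation, though only a guess from this thread, is that evicting dentries and inodes on the writing node makes the kernel client drop its CephFS capabilities, so the MDS no longer has to revoke them when node B asks for stats.)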
It is not only ls; even du or tar are extremely slow.
Example with tar (from node B):
# time tar c /storage/test10/installed-tests/pts/pgbench-1.5.1/ > /dev/null
real    1m45.291s
user    0m0.023s
sys     0m0.143s
While on the node that originally wrote the dir (node A):
# time tar c
On 16/06/2015 12:11, negillen negillen wrote:
This is quite a problem because we have several applications that need
to access a large number of files and when we set them to work on
CephFS latency skyrockets.
The question of what access means here is key. Especially, whether
you need the
Fixed! At least it looks like it's fixed.
It seems that after migrating every node (both servers and clients) from
kernel 3.10.80-1 to 4.0.4-1 the issue disappeared.
Now I get decent speeds both for reading files and for getting stats from
every node.
Thanks everyone!
On Tue, Jun 16, 2015 at 1:00 PM,
Thank you very much for your reply!
Is there anything I can do to work around that? E.g. setting access caps to
be released after a short while? Or is there a command to manually release
access caps (so that I could run it in cron)?
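(A rough sketch of the kind of cron workaround being asked about, based only on the drop_caches result reported elsewhere in this thread, not on any official cap-release command; the /etc/cron.d file name and the 30-minute schedule are hypothetical:

# /etc/cron.d/cephfs-dropcaches (hypothetical): run on the writing node, node A
*/30 * * * *  root  sync && echo 2 > /proc/sys/vm/drop_caches

This evicts cached dentries and inodes on node A so its caps get released, but it is a blunt instrument and will also hurt node A's local cache hit rate.)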
This is quite a problem because we have several applications that
On Mon, Jun 15, 2015 at 11:34 AM, negillen negillen negil...@gmail.com wrote:
Hello everyone,
something very strange is driving me crazy with CephFS (kernel driver).
I copy a large directory on the CephFS from one node. If I try to perform a
'time ls -alR' on that directory it gets executed
On Tue, Jun 16, 2015 at 12:11 PM, negillen negillen negil...@gmail.com wrote:
Thank you very much for your reply!
Is there anything I can do to work around that? E.g. setting access caps to be
released after a short while? Or is there a command to manually release
access caps (so that I could
Thanks again,
even 'du' performance is terrible on node B (testing on a directory taken
from Phoronix):
# time du -hs /storage/test9/installed-tests/pts/pgbench-1.5.1/
73M /storage/test9/installed-tests/pts/pgbench-1.5.1/
real    0m21.044s
user    0m0.010s
sys     0m0.067s
Reading the
[ceph-users] CephFS: 'ls -alR' performance terrible unless Linux cache flushed
Have you tried just running “sync;sync” on the originating node? Does that
achieve the same thing or not? (I guess it could/should).
Jan
On 16 Jun 2015, at 13:37, negillen negillen negil...@gmail.com wrote:
Thanks again,
even 'du' performance is terrible on node B (testing on a