I am also noticing some delays working with NFS over CephFS, especially when
making an initial connection. For instance, I run the following:
# time for i in {0..10} ; do time touch /tmp/cephfs/test-$i ; done
where /tmp/cephfs is the NFS mount point running over CephFS.
I am noticing that the first file is only created after about 20-30 seconds.
All the following files are created with no delay.
If I run the command again straight away, all files are created quickly,
without any delay. However, if I wait 20-30 minutes and run the command again,
the delay on the first file is back.
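For reference, here is a rough sketch of the reproduction I am running. The
mount point and the 1500-second sleep are just my own setup (the sleep is a
value inside the 20-30 minute window mentioned above), so adjust as needed:

#!/bin/bash
# Rough repro sketch: time file creation on the NFS-over-CephFS mount,
# repeat immediately, then again after the mount has sat idle.
# /tmp/cephfs and the 1500-second sleep are assumptions from my setup.
MNT=/tmp/cephfs

run_pass() {
    for i in {0..10}; do
        # 'time' reports how long each file creation takes; the first touch
        # after an idle period is the interesting (slow) one.
        time touch "$MNT/test-$i"
    done
}

run_pass      # first pass: initial touch takes ~20-30 seconds
run_pass      # immediate second pass: no noticeable delay
sleep 1500    # leave the mount idle for ~25 minutes
run_pass      # the delay on the first touch is back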
Has anyone experienced similar issues?
Andrei
----- Original Message -----
> From: "Andrei Mikhailovsky" <[email protected]>
> To: "ceph-users" <[email protected]>
> Sent: Friday, 28 November, 2014 9:08:17 AM
> Subject: [ceph-users] Giant + nfs over cephfs hang tasks
> Hello guys,
> I've got a bunch of hung tasks from the nfsd service running over the
> cephfs (kernel) mounted file system. Here is an example of one of
> them.
> [433079.991218] INFO: task nfsd:32625 blocked for more than 120 seconds.
> [433080.029685] Not tainted 3.15.10-031510-generic #201408132333
> [433080.068036] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [433080.144235] nfsd D 000000000000000a 0 32625 2 0x00000000
> [433080.144241] ffff8801a94dba78 0000000000000002 ffff8801a94dba38 ffff8801a94dbfd8
> [433080.144244] 0000000000014540 0000000000014540 ffff880673d63260 ffff880491d264c0
> [433080.144247] ffff8801a94dba78 ffff88067fd14e40 ffff880491d264c0 ffffffff8115dff0
> [433080.144250] Call Trace:
> [433080.144260] [<ffffffff8115dff0>] ? __lock_page+0x70/0x70
> [433080.144274] [<ffffffff81778449>] schedule+0x29/0x70
> [433080.144279] [<ffffffff8177851f>] io_schedule+0x8f/0xd0
> [433080.144282] [<ffffffff8115dffe>] sleep_on_page+0xe/0x20
> [433080.144286] [<ffffffff81778be2>] __wait_on_bit+0x62/0x90
> [433080.144288] [<ffffffff8115eacb>] ? find_get_pages_tag+0xcb/0x170
> [433080.144291] [<ffffffff8115e160>] wait_on_page_bit+0x80/0x90
> [433080.144296] [<ffffffff810b54a0>] ? wake_atomic_t_function+0x40/0x40
> [433080.144299] [<ffffffff8115e334>] filemap_fdatawait_range+0xf4/0x180
> [433080.144302] [<ffffffff8116027d>] filemap_write_and_wait_range+0x4d/0x80
> [433080.144315] [<ffffffffa06bf1b8>] ceph_fsync+0x58/0x200 [ceph]
> [433080.144330] [<ffffffff813308f5>] ? ima_file_check+0x35/0x40
> [433080.144337] [<ffffffff812028c8>] vfs_fsync_range+0x18/0x30
> [433080.144352] [<ffffffffa03ee491>] nfsd_commit+0xb1/0xd0 [nfsd]
> [433080.144363] [<ffffffffa03fb787>] nfsd4_commit+0x57/0x60 [nfsd]
> [433080.144370] [<ffffffffa03fcf9e>] nfsd4_proc_compound+0x54e/0x740 [nfsd]
> [433080.144377] [<ffffffffa03e8e05>] nfsd_dispatch+0xe5/0x230 [nfsd]
> [433080.144401] [<ffffffffa03205a5>] svc_process_common+0x345/0x680 [sunrpc]
> [433080.144413] [<ffffffffa0320c33>] svc_process+0x103/0x160 [sunrpc]
> [433080.144418] [<ffffffffa03e895f>] nfsd+0xbf/0x130 [nfsd]
> [433080.144424] [<ffffffffa03e88a0>] ? nfsd_destroy+0x80/0x80 [nfsd]
> [433080.144428] [<ffffffff81091439>] kthread+0xc9/0xe0
> [433080.144431] [<ffffffff81091370>] ? flush_kthread_worker+0xb0/0xb0
> [433080.144434] [<ffffffff8178567c>] ret_from_fork+0x7c/0xb0
> [433080.144437] [<ffffffff81091370>] ? flush_kthread_worker+0xb0/0xb0
> I am using Ubuntu 12.04 servers with the 3.15.10 kernel and Ceph Giant.
> Thanks
> Andrei
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com